Search Results

Search found 5237 results on 210 pages for 'lightweight processes'.


  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors to it. Product, Customer – you name the data domain and there's data quality associated with it. Address verification and data quality are a little different, in that there is a tremendous amount of variation as well as nuance attached to them. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or padded with non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: Customer Relationship Management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries – all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality. It performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data thanks to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article

  • Just Another Web Service (JAWS) vs SOA

    Over the last few years SOA has been a hot topic, which has left it open to abuse by many who have no understanding of the concept. In my opinion, one of the largest issues facing SOA is the lack of understanding and experience implementing SOA by business and IT alike. I just recently deployed a new web service that is called by multiple service clients. Would you call this SOA because it is a web service that can be called by any requesting client? In my opinion, this is not SOA; instead it is Just Another Web Service (JAWS).  Just because a company creates a web service does not mean that it is using SOA; in fact it only means that it is using a web service. SOA is an architectural style that focuses on the design of systems based on consumers and providers through the use of contracts.  With this approach, SOA needs to be applied from the top down in order for it to reach its full potential. In the case of the web service, the service is just a small part of the entire system that is reusable and has the flexibility to change. In order for a company in this case to move towards SOA, it needs to define business processes that can be shared through the use of reusable software and loose coupling. Once the company's thought and development processes change to address change in this manner, it can start to move closer to SOA.

    Read the article

  • Updated Batch Best Practices

    - by ACShorten
    The Batch Best Practices whitepaper has been updated and published to My Oracle Support with the latest advice and new facilities. Two of the more interesting updates are: Addition of a Batch Cache Flush - Just like the online component, the batch component of the framework caches data. It is now possible to refresh the cache manually using a new batch job, F1-FLUSH. This is particularly useful if you execute long-running threadpool workers across many different batch processes (submitters). New EXTENDED execution mode - A new specialist mode has been introduced for sites that use a large number of submitters (concurrent threads) and are experiencing intermittent communication issues in the threadpoolworker. This mode uses Oracle Coherence's Extend mode to allow submitters to be allocated to threadpoolworkers via proxy connections. It differs from CLUSTERED mode in that a submitter can be explicitly allocated to a specific threadpoolworker via a proxy connection. This mode is only used for specific situations and customers should continue to use CLUSTERED mode generally. The whitepaper outlines advice for these new facilities and provides advice for existing functionality. The whitepaper is available from My Oracle Support at Doc Id: 836362.1.

    Read the article

  • CodePlex Daily Summary for Thursday, July 05, 2012

    CodePlex Daily Summary for Thursday, July 05, 2012Popular ReleasesTaskScheduler ASP.NET: Release 2 - 1.1.0.0: Release 2 - Version 1.1.0.0 In this version the following features were added to the library: Event fired on all tasks end The ASP.NET project takes a example of management of scheduled tasks.Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.?????????? - ????????: All-In-One Code Framework ??? 2012-07-04: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ???OneCode??????,??????????10????Microsoft OneCode Sample,????4?Windows Base Sample,2?XML Sample?4?ASP.NET Sample。???????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Windows Base Sample CSCheckOSBitness VBCheckOSBitness CSCheckOSVersion VBCheckOSVersion XML Sample CSXPath VBXPath ASP.NET Sample CSASPNETDataPager VBASPNET...sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.0: The first release of sheetengine. See sheetengine.codeplex.com for a list of features and examples.AssaultCube Reloaded: 2.5.1 Intrepid Fixed: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) If you use the default maprot or any maprot, you need to fix it Well, 2.5 was...xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...NETDeob0: NETDeob 0.2.0: - Big structural changes - Safer signature identification - More accurate signatures - Minor bugs fixed - Minor compability issues fixedMVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. 
The move operation can be undone like the insert, update and delete operatio...BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.1802.65): Add support for OSDP authenticationCommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? 
·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.0: User Right Licensing ContentType version 2.0.267.1New Projects$ME: a new kind of javascript library.NET Micro Framework Driver Library: .NET Micro Framework Driver Library This is just a library of classes that I have created that can be used by Micro Framework Devices. aishe: ?????????BoogieTools: The goal of this project is to provide editors and additional tools to improve working with the Microsoft Boogie language and tools.BWAPI-CLI: .NET wrapper for the Broodwar API (BWAPI) and Broodwar Terrain Analyzer (BWTA) written in C++/CLICoffee Survey Framework: Coffee Survey Framework is an extensible, XML-based ASP.NET 4.0 framework for building and maintaining tabular survey pages. Dauphine SmartControls for K2 blackpearl: SmartControls for K2 blackpearl simplifies the integration of K2 blackpearl processes with ASP.NET web forms. It is a collection of ASP.NET web controls with both design-time and run-time capabilities for code-less integration to K2 blackpearl processes. The underlying framework of SmartControls for K2 blackpearl can also be used to extend existing ASP.NET web controls with the same K2 blackpearl integration capabilities that it offers.Easy Full-Text Search Queries: Very lightweight class to convert user-friendly search queries into Microsoft SQL Server Full-Text Search queries. Gracefully handles errors in input query.EnvironmentCheck: A Windows Forms application which will read from an XML config a number of checks which need to be performed to verify that a server meets pre-reqs.Lindeberg edge detector: Just a simple program I wrote for 'Image processing fundamentals' course MesanAnsatte: Prosjekt for utforskning av .net. Multi-Touch Scrum tool: This is a application use to support Scrum software development, based on Microsoft Surface SDK 2.0.MyWCFService: This project is a simple explanation for WCF service, that can help you understand how the WCF works!!race4fun engine: Open source driving simulator. Modable engine for easy use. SchoolManagerMVC: School Management System, written in MVC3 with unity framework, DI unity and unit testsSharePoint Managed Metadata Claims Provider: Custom Claims Provider implementation for SharePoint. Claims Playground.SQL data access: .NET library for accessing a Microsoft SQL Server database.sqlscriptmover: Exports stored procedures, function, views, tables and triggers to individual files or imports same and attempts to create in designated database.

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
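    To make the third-draft Resync service a little more concrete, here is a hypothetical sketch of the record it keeps and the periodic Resync() scan described above. It is only an illustration of the described design, not the author's implementation (which, in the later drafts, moved this logic into a BizTalk orchestration); all names are invented.

        // Hypothetical sketch of the Resync service's bookkeeping from the "third draft":
        // one record per synchronization, plus a periodic scan that resubmits unfinished ones.
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class ResyncService {
            enum Operation { CREATE, UPDATE, DELETE }

            static class SyncRecord {
                String idA;            // entity id in system A
                String idB;            // entity id in system B (null until B responds)
                String entityType;
                Operation operation;
                boolean syncStarted;
                boolean syncFinished;
                int attempts;
            }

            private final Map<String, SyncRecord> records = new ConcurrentHashMap<>();
            private static final int MAX_ATTEMPTS = 3;

            // Save(id.A, Entity.Type, Operation): called when system A announces a change (step 2).
            public void save(String idA, String entityType, Operation op) {
                SyncRecord r = records.computeIfAbsent(idA, k -> new SyncRecord());
                r.idA = idA; r.entityType = entityType; r.operation = op; r.syncStarted = true;
            }

            // Save(id.A, id.B, Entity.Type, Operation): called when B responds with the id pair (step 4).
            public void save(String idA, String idB, String entityType, Operation op) {
                SyncRecord r = records.get(idA);
                if (r != null) { r.idB = idB; r.syncFinished = true; }
            }

            // Resync(): periodic scan that restarts unfinished synchronizations, then gives up.
            public void resync() {
                for (SyncRecord r : records.values()) {
                    if (r.syncStarted && !r.syncFinished && r.attempts < MAX_ATTEMPTS) {
                        r.attempts++;
                        resubmit(r.idA);   // resend id.A; system B must tolerate the duplicate
                    }
                }
            }

            private void resubmit(String idA) { /* placeholder: send id.A back through the integration layer */ }
        }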

    Read the article

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application that accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading) and each thread has a few objects, each of which is trying to access the database for its own purpose. It has a simple ORM kind of model, but that really is not an important factor here. There are three potential access patterns I can think of:

    1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections.
    2. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can) and the design could be much simpler. There are two possibilities here: (a) each object keeps a persistent connection for its life, or (b) each object initiates a connection as and when needed.
    3. To reduce the contention of case 1 without creating as many sockets as in case 2, we could have group/set-based connections. There could be more than one connection (say N), and each of these connections could be shared across M objects.

    Naturally, each pattern has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose between these patterns for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Are there any other patterns that are better? PS: I have been through these questions: "mysql, one connection vs multiple" and "MySQL with multiple threads and processes". But they don't quite answer exactly what I am trying to ask.
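    For illustration only: the question is about C++ with Boost threads, but the third pattern (N pooled connections shared by M objects) is language-agnostic, so here is a minimal sketch of it using Java/JDBC. The class name and behavior are hypothetical; a production pool would add connection health checks, timeouts, and shutdown handling.

        // Minimal sketch of pattern 3: N connections opened up front, shared by M objects.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class SimpleConnectionPool {
            private final BlockingQueue<Connection> pool;

            public SimpleConnectionPool(String url, String user, String password, int size) throws SQLException {
                pool = new ArrayBlockingQueue<>(size);
                for (int i = 0; i < size; i++) {
                    pool.add(DriverManager.getConnection(url, user, password)); // N connections total
                }
            }

            // An object borrows a connection for one unit of work; blocks if all N are in use.
            public Connection borrow() throws InterruptedException {
                return pool.take();
            }

            // The object returns the connection when done, so M objects contend for N sockets
            // instead of sharing a single connection (pattern 1) or opening M sockets (pattern 2).
            public void release(Connection conn) {
                pool.offer(conn);
            }
        }

    A caller wraps each database operation in borrow()/release(), so the number of open sockets is bounded by N regardless of how many objects or threads exist.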

    Read the article

  • IoT? Time for Enterprise Architecture

    - by OTN ArchBeat
    Of course you've been listening to the latest OTN ArchBeat Podcast on the challenges and opportunities in the Internet of Things. If so, you'll also be interested in ZDNet blogger Joe McKendrick's recent post, Will the 'Internet of Things' make CIOs' jobs harder?. In that post McKendrick offers this important bit of advice that will certainly have architects saying "I told you so." Enterprises need to develop architectural approaches to the management of data. Meaning the development of repeatable processes to source, ingest, transform and store information. For years, IT managers simply bought more hardware and addressed data with one-off integration projects. Now it's time for enterprise architecture. IoT is an important new phase in the evolution of enterprise IT. Challenging? You bet! But meeting any such challenge requires big, broad thinking and planning. In that context Enterprise Architecture has always been important. But as IoT gains traction and speed, enterprise architecture should be top of mind for all concerned.

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    7. The server reads the posted data and then validates it again, to ensure data consistency and to prevent any non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, with the server side acting as a failover. Client-side validation allows users to correct any errors before the data is sent to the web server for processing, and this allows for an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will free up the server over time compared to doing only server-side validation. Server-side validation is the last line of defense when it comes to validation, because you can check to ensure the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would only want to use one form of validation over another, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
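    As a minimal illustration of the server-side "last line of defense" described above (not tied to any particular web framework; the field names and rules are hypothetical), re-validating the posted values might look like this:

        // Re-validate on the server even if JavaScript already validated on the client,
        // since client-side checks can be bypassed or disabled.
        import java.util.regex.Pattern;

        public class SignupValidator {
            // Simple shape check for an email address; real rules would likely be stricter.
            private static final Pattern EMAIL = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

            // Called with the already URL-decoded form values posted by the browser.
            // Returns null if the data passes the second (server-side) validation check.
            public static String validate(String email, String age) {
                if (email == null || !EMAIL.matcher(email).matches()) {
                    return "Please enter a valid email address.";
                }
                try {
                    int a = Integer.parseInt(age);
                    if (a < 0 || a > 150) {
                        return "Age must be between 0 and 150.";
                    }
                } catch (NumberFormatException e) {
                    return "Age must be a number.";
                }
                return null;
            }
        }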

    Read the article

  • Let's introduce the Oracle Enterprise Data Quality family!

    - by Sarah Zanchetti
    The Oracle Enterprise Data Quality family of products helps you achieve maximum value from your business applications by delivering fit-for-purpose data. OEDQ is a state-of-the-art collaborative data quality profiling, analysis, parsing, standardization, matching and merging product, designed to help you understand, improve, protect and govern the quality of the information your business uses, all from a single integrated environment. The Oracle Enterprise Data Quality products are:

    - Oracle Enterprise Data Quality Profile and Audit
    - Oracle Enterprise Data Quality Parsing and Standardization
    - Oracle Enterprise Data Quality Match and Merge
    - Oracle Enterprise Data Quality Address Verification Server
    - Oracle Enterprise Data Quality Product Data Parsing and Standardization
    - Oracle Enterprise Data Quality Product Data Match and Merge

    These are some of the key features of OEDQ:

    - Integrated data profiling, auditing, cleansing and matching
    - Browser-based client access
    - Ability to handle all types of data – for example customer, product, asset, financial, operational
    - Connection to any JDBC-compliant data sources and targets
    - Multi-user project support (role-based access, issue tracking, process annotation, and version control)
    - Service-Oriented Architecture (SOA) – support for designing processes that may be exposed to external applications as a service
    - Designed to process large data volumes
    - A single repository to hold data along with gathered statistics and project tracking information, with shared access
    - Intuitive graphical user interface designed to help you solve real-world information quality issues quickly
    - Easy, data-led creation and extension of validation and transformation rules
    - Fully extensible architecture allowing the insertion of any required custom processing

    If you need to learn more about EDQ, or get assistance with any kind of issue, the Oracle Technology Network offers a huge range of resources on Oracle software. Discuss technical problems and solutions on the Discussion Forums. Get hands-on, step-by-step tutorials with Oracle By Example. Download Sample Code. Get the latest news and information on any Oracle product. You can also get further help and information with Oracle software from My Oracle Support and Oracle Support Services. An Information Center is available, where you can find technical information and fast solutions to the most common already-solved issues: Information Center: Oracle Enterprise Data Quality [ID 1555073.2]

    Read the article

  • How I use RegExp in my Java program? [migrated]

    - by MIH1406
    I have the following example lines in a separate text file (the numbers are separated by tabs, not spaces):

        00001   1    12   123
        00002   3    7    321
        00003   99   23   332
        00004   192  50   912

    I tried to read the file and print each line if it matches a given RegExp, but I could not find a suitable RegExp for these lines.

        private static void readFile() {
            String fileName = "processes.lst";
            FileReader file = null;
            String result = "";
            try {
                file = new FileReader(fileName);
                BufferedReader reader = new BufferedReader(file);
                String line = null;
                String regEx = "[0-9]\t[0-9]\t[0-9]\t[0-9]";
                while ((line = reader.readLine()) != null) {
                    if (line.matches(regEx)) {
                        result += "\n" + line;
                    }
                }
            } catch (Exception e) {
                System.out.println(e.getMessage());
            } finally {
                if (file != null) {
                    try {
                        file.close();
                    } catch (Exception e) {
                        System.out.println(e.getMessage());
                    }
                }
            }
            System.out.println(result);
        }

    I ended up without any string being printed!!
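    For what it's worth, a likely reason nothing is printed is that each [0-9] in the pattern matches exactly one digit, while fields such as 00001 or 192 contain several. Below is a minimal sketch of a pattern that accepts one or more digits per tab-separated field, assuming exactly four fields per line; String.matches() already anchors the pattern to the whole line.

        // Sketch only: quantify the digit class so multi-digit fields match.
        import java.io.BufferedReader;
        import java.io.FileReader;

        public class TabbedNumberMatcher {
            public static void main(String[] args) throws Exception {
                // Four tab-separated numeric fields, each one or more digits.
                String regEx = "\\d+\\t\\d+\\t\\d+\\t\\d+";   // equivalently: "\\d+(\\t\\d+){3}"
                try (BufferedReader reader = new BufferedReader(new FileReader("processes.lst"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        if (line.matches(regEx)) {
                            System.out.println(line);   // prints only lines that match the pattern
                        }
                    }
                }
            }
        }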

    Read the article

  • Hill International Wins Oracle Eco-Enterprise Innovation Award

    - by Evelyn Neumayr
    In my last blog entry, I discussed Oracle’s Eco-Enterprise Innovation Award, part of the Oracle Excellence awards. Nominations for this year’s awards are due July 17. These awards are presented to organizations that use Oracle products to reduce their environmental footprint while improving their operational efficiency. One of last year’s winners was Hill International. Engineering News-Record magazine recently ranked Hill as the eighth-largest construction management firm in the United States. Hill International was able to streamline its forecasting and improve its visibility into its construction projects’ productivity and profitability using Oracle Primavera. They also implemented Oracle Hyperion Financial Management to standardize its financial reporting and forecasting processes and support its decision-making. With Oracle, Hill gained visibility into the true productivity of each project and cut its financial reporting cycle time from two weeks to one. The company also used the data generated to support new construction project proposals and determine the profitability of potential projects. Hill International realized significant cost savings and reduced its environmental impact on its US$400 million Comcast Center construction project in Philadelphia by centralizing its data storage, reducing paper usage, and maximizing project efficiency. It also leveraged the increased visibility offered by the Oracle solutions to make more environmentally-sound business decisions regarding on-site demolition, re-use of previous structures, green design of new facilities, procurement, and materials usage. See more about Hill International and the other Eco-Enterprise Innovation award winners here.  

    Read the article

  • Scalable solution for website polling

    - by Tom Irving
    I'm looking to add push notifications to one of my iOS apps. The app is a client for a website which doesn't offer push notifications. What I've come up with so far: the app sends a message to my home server when transitioning to the background, asking the server to start polling the website for the logged-in user. The home server starts a new process to poll for that user. Polling happens every so many seconds / minutes. When the user returns to the iOS app, the app sends a message to the home server to stop polling. The home server kills the process polling for the user. Repeat. The problem is that this soon becomes stupid: 100s of users means 100s of different processes. It's just not scalable in the slightest. What I've written so far is in PHP, using cURL to do the polling, and I started with PHP a few days ago, so maybe I'm missing something obvious that could help me with this. Some advice would be great.
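    One common way around the process-per-user approach is to keep a single set of active users and poll them from one small, fixed-size scheduler rather than spawning an OS process per user. The post is written in PHP, so the following is only an illustrative sketch of that shape in Java; pollWebsite() is a hypothetical placeholder for the actual HTTP request and push notification.

        // One scheduler, many users: the number of threads stays fixed no matter how many users poll.
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class PollingService {
            private final Set<String> activeUsers = ConcurrentHashMap.newKeySet();
            private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

            public void start(long intervalSeconds) {
                scheduler.scheduleAtFixedRate(() -> {
                    for (String user : activeUsers) {
                        pollWebsite(user);   // hypothetical helper: fetch, diff, send push if changed
                    }
                }, 0, intervalSeconds, TimeUnit.SECONDS);
            }

            public void userWentToBackground(String user) { activeUsers.add(user); }    // start polling
            public void userReturnedToApp(String user)    { activeUsers.remove(user); } // stop polling

            private void pollWebsite(String user) {
                // placeholder: perform the HTTP request for this user's account
            }
        }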

    Read the article

  • Quick Script for Adding Skype Groups

    - by Robert May
    So, I needed to add about 30 people to several different Skype groups today, and I didn't want to repeat the /add [skypename] thing over and over and over.  Building the list was a pain . . . I couldn't find a good way to extract all of the users in an existing group.  There's probably an API or something, but I just did that part by hand. Adding them to the groups was pretty easy with Windows Scripting Host.  Basically, I just ran this:

        <package>
           <job id="vbs">
              <script language="VBScript">
                 set WshShell = WScript.CreateObject("WScript.Shell")
                 WshShell.AppActivate 4484
                 WScript.Sleep 100
                 WshShell.SendKeys "/add user1~"
                 WScript.Sleep 100
                 …
                 WshShell.SendKeys "/add usern~"
                 WScript.Sleep 100
              </script>
           </job>
        </package>

    Add as many users as you need by copying the SendKeys and Sleep lines.  Then, save the script to a .wsf file.  The AppActivate line needs to be changed to use the process id of Skype instead of the number there.  To get that, open up Task Manager, click on Processes, then find skype.exe and note its PID. Before you double-click on the file in Windows Explorer, you'll need to have created the groups in Skype.  For each group, open the group and click in the chat window of the group.  Then double-click on the WSF file.  If you don't click in the chat window, you will likely get the add user dialog box instead of just adding the users. Technorati Tags: Skype,Script

    Read the article

  • Prognostications for the Future of BI

    - by jacqueline.coolidge(at)oracle.com
    Dashboard Insight has published the viewpoints on the future of BI from several vendors' perspectives including ours at Business Intelligence Predictions for 2011 We offered: In 2011, businesses will demand more from BI.  With intense competitive and economic pressures, it's not enough to be interesting.  BI must be actionable and enable people to respond smarter and faster to the opportunities and challenges of the day.  Most companies rely on BI to help them understand what's going on in their business.  Many are ready to make the leap from "What's going on?" to "What are we going to do about it?" Seamless integration from reporting to what-if analysis and scenario modeling helps businesses decide the right course of action.  The integration of BI with SOA and BPEL will deliver the true payoff for BI by enabling companies to initiate business processes directly from their analysis, turning insight to action for more agile and competitive business.  And, I must admit, it's tough to argue with the trends identified by other vendors. Enabling true self-service and engaging a larger community of users Accelerating the adoption of BI on mobile devices Embracing more advanced analytics such as data/text mining and location intelligence Price/performance breakthroughs It's singing to the choir.  I look forward to hearing the voices of some customers who are pushing the envelope and will post those stories as I capture them.  

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said: You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app. Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: (1) Based off everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? (2) Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Opportunities in Development in our Swedish office

    - by anca.rosu
    Hi everyone, my name is Henrik and I joined the JRockit group in 2004. Before that my background was at Microsoft, both as a Test Competence Lead and as a Program Manager. As an Engineering Manager at Oracle I lead a team of 11 developers. I focus on people management and the daily operations of the department, with an emphasis on the interaction and dependencies between the groups and departments here at the Stockholm development site. I also make sure my team delivers on our commitments. I would like to give you a brief summary of the Oracle JRockit team:

    - The development group in Stockholm delivers several products for the Oracle Fusion Middleware stack. Our main products are JRockitVE, which allows you to run a Java Virtual Machine without an operating system; the JRockit Java Virtual Machine, which is the default JVM for all Oracle middleware products; and JRockit MissionControl, a set of tools that allows developers to monitor their applications at runtime and perform advanced latency analysis as well as in-production memory leak detection, etc.
    - The office has several departments focusing on different aspects of the product development process – not only building features and testing them, but everything from building the infrastructure needed to automatically build and test the products, to sustaining engineering that tracks down bugs in customer systems and provides them with patches.

    Some inspirational lines around what the Oracle JRockit group can offer you in terms of progress, development and learning: it is a unique chance to get insight and experience building enterprise-class software for one of the world's largest software companies. There are almost unlimited possibilities for the right candidate to learn about silicon features and how to implement support for them in software, and about compiler optimizations. The position will also give insight into the processes needed to produce software at this level in the industry. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Development,Sweden,Jrockit,Java,Virtual Machine,Oracle Fusion Middleware,software

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
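    Conceptually, the split-apply-combine step that ore.groupApply pushes into the database has the shape sketched below. This is a generic illustration in plain Java with hypothetical names, not ORE or R code; in ORE the split by ACCOUNT and the per-group apply happen inside the database and its embedded R engines, so only one account's rows are in memory at a time.

        // Generic split-apply-combine sketch: split rows by account, apply a per-group
        // calculation, combine the per-account results into one map.
        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class SplitApplyCombine {
            record CashFlow(String account, double amount) {}

            // Placeholder for the real money-weighted rate-of-return calculation.
            static double simpleMWRR(List<CashFlow> flows) {
                return flows.stream().mapToDouble(CashFlow::amount).sum();
            }

            static Map<String, Double> irrByAccount(List<CashFlow> allRows) {
                return allRows.stream()
                        .collect(Collectors.groupingBy(CashFlow::account))          // split by ACCOUNT
                        .entrySet().stream()
                        .collect(Collectors.toMap(Map.Entry::getKey,
                                e -> simpleMWRR(e.getValue())));                    // apply, then combine
            }
        }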
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to be able to view the images full-screen to see the entire image): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above.  You may need to right click on the above images and view the images full-screen to see the entire image): The SQL completed in 110 seconds (1.8minutes) We calculated rate of returns for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation) We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation) We ran with 72 degrees of parallelism spread across 4 database servers Most of our 110seconds was spent in the “External Procedure call” event On average, we performed 8,200 executions of our R function per second (110s/911k accounts) On average, each execution was passed 110 rows of data (103m detail rows/911k accounts) On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods) On average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s) R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rate of returns for almost a million individual accounts in less than 2 minutes. 
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at detecting terrorism and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of those programmes will clearly continue for some time to come. What is becoming clear is that they provide a framework for collating and aggregating massive amounts of unstructured data and, from that, creating actionable intelligence: analyses that allow analysts to explore and extract a variety of patterns and then direct resources accordingly. The data includes audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts, and mobile phone logs and connections.
    Although Governance, Risk and Compliance (GRC) professionals are not looking to implement such programmes, there are many similar GRC “big data” challenges to be faced, and potential lessons from these high-profile government programmes can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity; early or retrospective violations of regulations, laws or corporate policies and procedures; emerging risks; and weakening controls? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges.
    So what is big data, and how can it benefit the GRC process? Although definitions vary, big data largely refers to the following types of data:
    - Traditional enterprise data: customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-generated/sensor data: Call Detail Records (“CDRs”), weblogs and trading systems data.
    - Social data: customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.
    The McKinsey Global Institute estimates that data volume is growing 40% per year and will grow 44x between 2009 and 2020. But while it is often the most visible parameter, volume is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:
    - Volume. Machine-generated data is produced in much larger quantities than non-traditional data: all the data generated by the IT systems that power the enterprise, including live data from packaged and custom applications such as app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity. Social media data streams, while not as massive as machine-generated data, produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data means large volumes (over 8 TB per day) need to be managed.
    - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, products or business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
    - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.
    For example, customer service calls and emails contain millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals identify hidden patterns and implement new policies that reduce customer complaints. Now, on a scale and at a depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive action: communicating to the market before issues reach the press, strengthening controls, adjusting risk profiles, changing policies and procedures, and minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many, the GRC team has demonstrated real and tangible business value.
    Big Challenges - Big Opportunities
    As recent Forrester research points out, high-performing companies (those growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in big data: “Tomorrow’s winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need.” (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands, fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category.
    However, to make the most of big data, organizations must evolve their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with pre-existing company data for analysis. GRC big data gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can become a key issue from a legal as well as a technical or operational standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.
    The Right GRC and Big Data Partnership Becomes Key
    The “getting it right first time” mantra used in so many companies remains essential for any GRC team that is sponsoring, helping to kick-start, or even overseeing a big data project. To make a big data GRC initiative work and deliver the desired value, partnership with companies that have a long history of delivering successful GRC solutions and that are at the very forefront of technology innovation becomes key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again with self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations and challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and that should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, using big data technologies to generate new GRC insights right across the enterprise. Ultimately, big data is here to stay, and organizations that embrace its potential, outline a viable strategy, and understand and build a solid analytical foundation will be the ones best positioned to make the most of it.
    A Blueprint and Roadmap Service for Big Data
    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key activities: while your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    - Conduct a big data readiness and information architecture maturity assessment
    - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information and technology
    - Provide initial guidance on big data candidate selection for migration or implementation
    - Develop a strategic roadmap and implementation plan that prioritizes initiatives based on business impact and technology dependency, with an incremental integration approach for evolving the current state to the target future state with the least risk and impact of change on the business
    - Provide recommendations for practical, effective data governance, data quality management and information lifecycle management to maintain a well-managed environment
    - Conduct an executive workshop with recommendations and next steps
    There is little debate that managing risk and managing data are the two biggest obstacles encountered by financial institutions. Big data is here to stay, and risk management certainly is not going anywhere; ultimately, the financial services organizations that embrace big data’s potential, outline a viable strategy, and understand and build a solid analytical foundation will be best positioned to make the most of it.
    Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • FY11 plans – how can you increase your SOA business?

    - by Jürgen Kress
    Thanks for a fantastic FY10; it was great to work with all of you! Yes, with the economic crisis the fiscal year was hard. SOA and Oracle Fusion Middleware address these challenges and can help companies save costs by integrating their systems and automating and changing their processes. More when we publish our fiscal year results. What is on the agenda for FY11?
    - Specialization: It is key that you become SOA & Application Grid Specialized. We will focus our activities and budgets on partners with Specialization!
    - Sales campaigns: To support you in our joint business we will continue to run joint sales campaigns. With OFM 11g there is a great opportunity to generate service revenue by migrating to and consolidating on the platform. It is key that you register your opportunities within the Open Market Model (OMM) to ensure sales alignment.
    - Enablement: With the release of many new products and versions, training is key. We will continue to offer training dedicated to your role: sales, pre-sales and implementation. Make sure that you check your local partner training calendar and sign up for the next bootcamps.
    Thanks for your support! Jürgen Kress

    Read the article

  • bluetooth headset can connect, but not visible in pulse audio

    - by Kim Marivoet
    I have a Plantronics Bluetooth headset, and until yesterday I could use it without any problem. However, today it suddenly stopped working (maybe related to the last software update I did). I can still connect/disconnect my headset, but it doesn't show up in PulseAudio anymore. I read through various posts that describe roughly the same problem, but none of the suggested solutions worked. I get the following errors in the syslog:
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/HFPAG
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSource
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSink
    Oct 13 16:50:09 desktop kernel: [ 17.340943] input: 48:C1:AC:08:FE:8F as /devices/virtual/input/input14
    Oct 13 16:50:09 desktop bluetoothd[1040]: /org/bluez/1040/hci0/dev_48_C1_AC_08_FE_8F/fd0: fd(36) ready
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Successfully made thread 2213 of process 1892 (n/a) owned by '1000' RT at priority 5.
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Supervising 5 threads of 1 processes of 1 users.
    Oct 13 16:50:10 desktop bluetoothd[1040]: Badly formated or unrecognized command: AT+XEVENT=USER-AGENT,COM.PLANTRONICS,PLT_VOYAGERPRO,0109,27.90,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Oct 13 16:50:10 desktop bluetoothd[1040]: Audio connection got disconnected
    Any help would be much appreciated. I'm using Ubuntu 12.04. Thanks, Kim

    Read the article

  • New Content: Customer Engagement & Oracle OpenWorld Preview

    - by user462779
    Two new bits of content are available on Profit Online. In A Cross-Channel Approach to Consumer Engagement, Cassandra Moren, senior director of consumer goods industry marketing at Oracle, shares her thoughts on how consumer goods manufacturers are reaping benefits from developing a direct relationship with customers: "Consumer goods manufacturers are starting to adapt in ways that mirror retailers. They are making investments in innovative technologies and processes to build the infrastructure to support the market demand. With advances in aspects like social networking, digital marketing and mobility fundamentally changing the way consumers behave, the door has opened to building a more direct relationship with their customers." We've also published a Special Report on Oracle OpenWorld that gives a great overview of recommended must-see sessions and insider advice from experienced attendees. For example, this tip from John Matelski, newly elected president of the Independent Oracle Users Group: “Based on developments of the last 12 months, I think big data is definitely going to be hot. The challenges and opportunities of data governance will be another biggie. And there will obviously be a big emphasis on Oracle Exadata and the other Oracle Engineered Systems, with more than 100 sessions.” More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • Why do my gvfs mounts not show up under ~/.gvfs?

    - by kynan
    From what I read, when mounting a network share via Nautilus or gvfs-mount, the mount point should appear under ~/.gvfs. This seems not to be the case for me: I tried mounting both an FTP and an SMB share, via both Nautilus and gvfs-mount, under both Ubuntu Maverick and Natty, and in none of these cases did I see any mount point under ~/.gvfs. I can access the shares just fine in Nautilus, but I want access via the command line, which is why I need a mount point in the file system.
    Edit: Debugging following James Henstridge's answer and enzotib's comment revealed that on my laptop gvfs-fuse-daemon is running and consequently gvfs mounts show up in ~/.gvfs, whereas on the 2 workstations where ~/.gvfs remained empty, gvfs-fuse-daemon was not running. On all 3 machines there are other gvfs processes running: gvfsd, gvfs-afc-volume-monitor, and so on. On the laptop, mount | fgrep gvfs yields:
    gvfs-fuse-daemon on /home/xxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxx)
    That raises the questions: How are shares mounted without gvfs-fuse-daemon running? Is there no mount point created in that case, and is every access to the share a gvfs library call? Which daemon is responsible, gvfsd? What is the role of gvfs-fuse-daemon? Does it only create a fuse mount point in ~/.gvfs?

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-Purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL. In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced real-time JMS integration natively: outbound JMS integration, and Message Driven Beans (MDB) for incoming integration. The outbound integration has not changed a lot between releases: you create an Outbound Message Type to indicate the record types to send out, create a JMS Sender (though now you use the Real Time Sender), and then create an External System definition to complete the configuration. When an outbound message of the configured type and external system appears in the table (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework places the message on the queue linked to the JMS Sender. The inbound integration has changed. In the past you created XAI Receivers and specified configuration describing which types of transactions to process. This is now all configuration-file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be native (by editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration that are easier to maintain. In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.
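    As a generic illustration of the inbound pattern described above, here is a minimal sketch of a standard Java EE message-driven bean that listens on a JMS queue and hands each payload to a service invoker. The queue JNDI name, the InboundMessageMDB class and the OUAFServiceInvoker placeholder are assumptions for illustration only; they are not the actual Oracle Utilities Application Framework classes or configuration, which (as noted above) is declared in the ejb-jar.xml and weblogic-ejb-jar.xml deployment descriptors.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Minimal sketch of a JMS message-driven bean. The destination lookup name and
    // the service-invoker class below are hypothetical; in the product the
    // equivalent wiring lives in ejb-jar.xml / weblogic-ejb-jar.xml.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/InboundMessageQueue") // assumed queue JNDI name
    })
    public class InboundMessageMDB implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // Hand the inbound payload to the application's web service layer;
                    // OUAFServiceInvoker is a placeholder for that call.
                    String payload = ((TextMessage) message).getText();
                    new OUAFServiceInvoker().processXml(payload);
                }
            } catch (JMSException e) {
                // Rethrow as unchecked so the container can redeliver or dead-letter
                // the message according to its retry configuration.
                throw new RuntimeException("Failed to read inbound JMS message", e);
            }
        }
    }

    // Placeholder for the web services interface invoked for each inbound message.
    class OUAFServiceInvoker {
        void processXml(String payload) {
            // Hypothetical: submit the XML payload to the configured inbound service.
            System.out.println("Processing inbound payload of length " + payload.length());
        }
    }

    In a real deployment the same information (queue, connection factory, listener class) would be declared in the deployment descriptors rather than in annotations, which is the configuration the user exit capability is designed to preserve across upgrades.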

    Read the article

  • Dealing with the customer / developer culture mismatch on an agile project

    - by Eric Smith
    One of the tenets of agile is "Customer collaboration over contract negotiation"; another is "Individuals and interactions over processes and tools". But the way I see it, at least when it comes to interaction with the customer, there is a fundamental problem: how the customer thinks is fundamentally different from how a software engineer thinks. That may be a bit of a generalisation, yes; arguably there are business domains where this is not necessarily true, but they are few and far between. In many domains, the typical customer is:
    - Interested in daily operational concerns (short-range tactics, not strategy);
    - Only concerned with the immediate solution;
    - Generally a one-dimensional, non-abstract thinker;
    - Primarily interested in "getting the job done" as opposed to coming up with a lasting, quality solution.
    On the other hand, software engineers who practice agile are:
    - Professionals who value quality;
    - Individuals who understand the notion of "more haste, less speed", i.e. spending a little more time to do things properly will save lots of time down the road;
    - Generally very experienced analytical thinkers.
    So very clearly, there is a natural culture discrepancy that tends to inhibit "customer collaboration". What's the best way to address this?

    Read the article

  • Oracle@info360: Advance Beyond Point Solutions To An Enterprise Content Strategy

    - by kellsey.ruppel(at)oracle.com
    The info360/AIIM conference is March 22-24 in Washington DC. We have a number of customer speakers this year talking on the theme of “Advance Beyond Point Solutions To An Enterprise Content Strategy.” These customers all started by addressing a particular use case, but then used the infrastructure they had created to quickly and cost-effectively stand up solutions to new business problems.
    Andy MacMillan, VP of Product Management at Oracle, will give a thought-provoking opening keynote at 8:50 AM on Tuesday, March 22nd. He will be joined by Juan Jose Goldschtein, the CIO of the Organization of American States. The OAS has developed a human rights website that is the front end to a case management system for human rights violations. The implementation supports digital signatures on iPads, so their executives can approve workflows and keep cases moving forward while they are busy traveling and investigating abuses.
    Other customer speakers include:
    - Tom Robinette, Director of Applications and IT Engineering, Dresser-Rand
    - Robin Crisp, Program Manager, FDA
    - Monica Crocker, Corporate Records Manager, Land O’ Lakes
    - Brian Skapura, The American Institute of Architects
    - Kathy Adams and Leslie Becker, The Nature Conservancy
    - Irfan Motiwala, Sr. VP, Moody’s Investment Services
    - Molly Wenzler, Director of Electronic Media, MeadWestvaco
    Other sessions include our Super Session that kicks off the Oracle Track @info360 on Wednesday. At 11:00 AM, Senior Director of Product Marketing Howard Beader will present The Social Enterprise – Combining People, Processes and Content. This session will focus on how customers have brought social media, business process management, and content management together to supercharge their organizations.
    Oracle customers can arrange one-on-one meetings with Oracle executives and product experts, and attend the VIP customer appreciation event. Oracle will be joined by Oracle partners:
    - Fujitsu
    - Keste
    - TeamInformatics
    - Kapow
    - Sena Systems
    - DTI
    You can learn more about discounts for Oracle customers and register on our Oracle@info360 page. To see more about the customers and sessions that will be presented, you can look at the Oracle Track page on the AIIM/info360 website.
    Technorati Tags: oracle, AIIM, info360, content management, social enterprise

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >