Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality: we will never fully move away from unmanaged data (desktops, file servers, emails, etc.). If you use AutoVue today you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue also has a great built-in solution for viewing, printing and redlining your data stored on file servers? Using the 'Server protocol' you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a 'click-and-view' interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document, and of course real-time collaboration is supported as well. Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept of SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this post: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners. We will be using an Excel file. In the first article we learned to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data as well as correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: 1) Building a New Knowledge Base and 2) Creating a New Data Quality Project. Let us start building the Knowledge Base. Click on New Knowledge Base. In our project we will be using the Excel file as a knowledge base. Here is the Excel file which we will be using. There are two columns: one is Colors and another is Shade. They are independent columns and not related to each other. The point which I am trying to show is that Column A contains unique data and Column B contains duplicate records. Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where you can select the Excel file and the source columns. I have selected both Colors and Shade as source columns. Creating a domain is very important. Here you can create a unique domain, or a composite domain built from Colors and Shade. As this is the first example, I will create unique domains: for Colors I will create the domain Colors, and for Shade I will create the domain Shade. Here is how the screen will look after creating the domains. Clicking NEXT will bring you to the following screen where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data, whereas Shade has non-unique data with only two unique rows. In the next screen you can add more rows as well as see the frequency of the data, as the values are listed uniquely. Clicking NEXT will publish the knowledge base which was just created. Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity; in reality you can easily use a SQL Server table for the same. Click on New Data Quality Project to start a DQS project. In the next screen it will ask which knowledge base to use. We will be using our Colors knowledge base which we just created. In the Colors knowledge base we had two domains: 1) Colors and 2) Shade. In our case we will be using both of the mappings here. A user can select one or multiple column mappings over here. Now comes the most important phase of the complete project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed and it completed the task with the necessary information. It showed that in the Colors column it has not corrected any value by itself, but for the Shade values it has a suggestion.
We can train DQS to correct values, but let us keep that subject for future blog posts. Now click next and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is also the place where, once a value is corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked on the Approve radio buttons. As soon as I clicked the Approve buttons the rows disappeared from this tab and moved to the Corrected tab. If I had clicked Reject, the rows would have moved to the Invalid tab instead. In this screen you can see how the two corrected rows are displayed. You can click on the Correct tab and see the six previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer as Dark with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows the summary of all the activities. It also shows how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file or Excel. The user also has the option to export either the data with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the files. Let us open the generated file. It will look as follows, and it looks pretty complete and corrected. Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In the future we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be indeed interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS
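
    To make the idea concrete, here is a toy sketch in C# of what a DQS domain boils down to conceptually; this is not the actual DQS API (DQS is wizard-driven), and all names and sample values are hypothetical. A domain is essentially a set of valid values plus correction rules carrying a confidence score, and cleansing routes each value to a correct, corrected, or needs-review bucket, mirroring the tabs above.

        using System;
        using System.Collections.Generic;

        // Toy model of a DQS domain: valid values plus corrections with confidence.
        class Domain
        {
            readonly HashSet<string> valid =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase);
            readonly Dictionary<string, Tuple<string, double>> rules =
                new Dictionary<string, Tuple<string, double>>(StringComparer.OrdinalIgnoreCase);

            public void AddValidValue(string v) { valid.Add(v); }

            public void AddRule(string wrong, string right, double confidence)
            {
                rules[wrong] = Tuple.Create(right, confidence);
            }

            // Routes a value the way the cleansing screens above do.
            public string Cleanse(string value)
            {
                if (valid.Contains(value)) return value;            // "Correct" tab
                Tuple<string, double> r;
                if (rules.TryGetValue(value, out r) && r.Item2 >= 0.7)
                    return r.Item1;                                 // "Corrected" tab
                return value;                                       // left for manual review
            }
        }

        // Usage, echoing the Shade example (the misspelled value is invented):
        // var shade = new Domain();
        // shade.AddValidValue("Dark"); shade.AddValidValue("Light");
        // shade.AddRule("Drak", "Dark", 0.77);
        // Console.WriteLine(shade.Cleanse("Drak"));   // prints "Dark"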

    Read the article

  • Consolidate Data in Private Clouds, But Consider Security and Regulatory Issues

    - by Troy Kitch
    The January 13 webcast, Security and Compliance for Private Cloud Consolidation, will provide attendees with an overview of private cloud computing based on Oracle's Maximum Availability Architecture and how security and regulatory compliance affect implementations. Many organizations are taking advantage of Oracle's Maximum Availability Architecture to drive down the cost of IT by deploying private cloud computing environments that can support downtime and utilization spikes without idle redundancy. With two-thirds of sensitive and regulated data residing in organizations' databases, private cloud database consolidation means organizations must be more concerned than ever about protecting their information and addressing new regulatory challenges. Join us for this webcast to learn about the greater risks and increased threats to private cloud data, and how Oracle Database Security Solutions can assist in securely consolidating data and meeting compliance requirements. Register Now.

    Read the article

  • Validating Data Using Data Annotation Attributes in ASP.NET MVC

    - by bipinjoshi
    The data entered by the end user in various form fields must be validated before it is saved in the database. Developers often use the validation HTML helpers provided by ASP.NET MVC to perform input validation. Additionally, you can use data annotation attributes from the System.ComponentModel.DataAnnotations namespace to perform validation at the model level. Data annotation attributes are attached to the properties of the model class and enforce validation criteria. They are capable of performing validation on the server side as well as on the client side. This article discusses the basics of using these attributes in an ASP.NET MVC application. http://www.bipinjoshi.net/articles/0a53f05f-b58c-47b1-a544-f032f5cfca58.aspx
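
    For example, a model class decorated with a few of the standard attributes might look like the sketch below. [Required], [StringLength] and [Range] are real attributes from System.ComponentModel.DataAnnotations; the model and its property names are illustrative.

        using System.ComponentModel.DataAnnotations;

        public class Customer
        {
            [Required(ErrorMessage = "Name is required.")]
            [StringLength(50)]      // enforced on the server and, with the helpers, on the client
            public string Name { get; set; }

            [Range(18, 120, ErrorMessage = "Age must be between 18 and 120.")]
            public int Age { get; set; }
        }

        // In a controller action the results surface through ModelState:
        // [HttpPost]
        // public ActionResult Create(Customer model)
        // {
        //     if (!ModelState.IsValid) return View(model);  // redisplay with messages
        //     // save the model, then redirect
        // }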

    Read the article

  • What kind of programs/solutions can only be written with OOP or are too hard to achieve without it?

    - by user1598390
    Paraphrasing a recent question, "What is Object Oriented Programming ill-suited for?", I would like to ask the opposite: What kind of programs cannot be written unless you use OOP? What kind of programs are not recommended to be written using non-OOP techniques? What kind of programs need OOP in order to even be written? What kind of programs would be too hard to write without OOP? The answer to this question can help sell the idea of OOP to project leaders who have no special interest in code quality. At least they could buy the idea if one showed them the kind of things that are not even possible unless you use OOP.

    Read the article

  • On which crowdsourced design site have you had the best experience? (e.g. crowdspring, mycroburst, etc.)

    - by Darryl Hein
    I wasn't sure which site to ask this on (as Graphic Design hasn't reached beta yet), so I thought I would try here. I'm looking to have a couple of logos and website designs done. I've had some great local designers, but each one has moved or gone elsewhere, so I keep having to look for new designers. My thought and realization in the last couple of days is to go to a crowdsourced design site like crowdspring.com or mycroburst.com. Both of these sites look good, but I'm wondering what else is out there? Are there better ones, and how have your experiences been with them?

    Read the article

  • More Value From Data Using Data Mining Presentation

    Here is a presentation I gave at the SQLBits conference in September, which was recorded by Microsoft. Usually I speak about SSIS, but at this particular event I thought people would like to hear something different from me. Microsoft are making a big play for making Data Mining more accessible to everyone and not just boffins. In this presentation I give an overview of data mining and then do some demonstrations using the excellent Excel Add-Ins available from Microsoft for SQL Server 2008 and SQL Server 2005. I hope you enjoy this presentation: http://go.microsoft.com/?linkid=9633764

    Read the article

  • Move data from other user accounts into my user account

    - by user118136
    I had problems with my compiz settings and made multiple accounts. Now I want to transfer my information from all the deleted users into my current account. Some data I cannot copy because I do not have read permission. If I type "sudo nautilus" in a terminal I get permission to read, but the copied data is then available only to superusers and I must change the permissions for each file and each folder. How can I copy the information without superuser rights, OR how can I change the permissions for a selected folder and all the files and folders included in it?

    Read the article

  • Is there a Design Pattern for preventing dangling references?

    - by iFreilicht
    I was thinking about a design for custom handles. The idea is to prevent clients from copying around large objects. Now a regular handle class would probably suffice for that, but it doesn't solve the "dangling reference problem": if a client has multiple handles to the same object and deletes the object via one of them, all the others would be invalid but not know it, so the client could write or read parts of memory he shouldn't have access to. Is there a design pattern to prevent this from happening? Two ideas: An observer-like pattern where the destructor of an object would notify all handles. "Handle handles" (does such a thing even exist?): all the handles don't really point to the object, but to another handle. When the object gets destroyed, this "master handle" invalidates itself and therefore all that point to it.
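
    One way to realize the second idea is a shared indirection cell: every handle points at one slot owned by the object's owner, and destroying the object clears the slot, so all outstanding handles observe the invalidation at once and no observer list is needed. A minimal sketch of the concept in C# (the problem is more at home in C++, and all names here are mine):

        using System;

        // Shared indirection cell: the "master handle" all client handles point at.
        class Slot<T> where T : class
        {
            public T Target;   // becomes null once the object is destroyed
        }

        class Handle<T> where T : class
        {
            readonly Slot<T> slot;
            public Handle(Slot<T> slot) { this.slot = slot; }

            public bool IsValid { get { return slot.Target != null; } }

            public T Get()
            {
                if (slot.Target == null)
                    throw new InvalidOperationException("dangling handle");
                return slot.Target;
            }
        }

        class Owner<T> where T : class
        {
            readonly Slot<T> slot;
            public Owner(T obj) { slot = new Slot<T> { Target = obj }; }

            public Handle<T> NewHandle() { return new Handle<T>(slot); }

            // Invalidates every handle ever created, in O(1).
            public void Destroy() { slot.Target = null; }
        }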

    Read the article

  • Is It "Wrong"/Bad Design To Put A Thread/Background Worker In A Class?

    - by Jetti
    I have a class that will read from Excel (C# and .NET 4), and in that class I have a background worker that will load the data from Excel while the UI remains responsive. My question is as follows: Is it bad design to have a background worker in a class? Should I create my class without it and use a background worker to operate on that class? I can't really see any issues with creating my class this way, but then again I am a newbie, so I figured I would make sure before I continue. I hope that this question is relevant here, as I don't think it should be on Stack Overflow: my code works, this is just a design issue.
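
    One common answer, sketched below under my own naming: keep the worker out of the data class. The Excel reader only reads; the UI layer owns the BackgroundWorker and hands the result to the grid. This keeps the reader testable without a UI and lets each caller choose its own threading.

        using System.ComponentModel;
        using System.Data;

        public class ExcelReader            // knows nothing about threads or UI
        {
            public DataTable Load(string path)
            {
                var table = new DataTable();
                // ... open the workbook at 'path' and fill 'table' ...
                return table;
            }
        }

        public class MainForm               // the UI layer owns the threading concern
        {
            readonly BackgroundWorker worker = new BackgroundWorker();

            public void LoadExcelAsync(string path)
            {
                worker.DoWork += (s, e) => e.Result = new ExcelReader().Load((string)e.Argument);
                worker.RunWorkerCompleted += (s, e) => BindGrid((DataTable)e.Result);
                worker.RunWorkerAsync(path);
            }

            void BindGrid(DataTable data) { /* hand the rows to the grid */ }
        }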

    Read the article

  • Getters and Setters are bad OO design?

    - by Dan
    Getters and Setters are bad. Briefly reading over the above article, I find that getters and setters are considered bad OO design and should be avoided, as they go against encapsulation and data hiding. As this is the case, how can they be avoided when creating objects, and how can one model objects to take this into account? In cases where a getter or setter is required, what other alternatives can be used? Thanks.
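
    The usual alternative that camp suggests is "tell, don't ask": instead of exposing state through get/set pairs, expose behavior and let the object guard its own invariants. A small sketch with illustrative names:

        using System;

        public class Account
        {
            private decimal balance;    // hidden state: no public setter anywhere

            public void Deposit(decimal amount)
            {
                if (amount <= 0) throw new ArgumentException("amount must be positive");
                balance += amount;
            }

            public void Withdraw(decimal amount)
            {
                if (amount <= 0 || amount > balance)
                    throw new InvalidOperationException("invalid withdrawal");
                balance -= amount;
            }

            // A read-only view is sometimes unavoidable (e.g. for display),
            // but it exposes a value, not writable state.
            public decimal Balance { get { return balance; } }
        }

    The object still answers questions, but nothing outside it can put it into an inconsistent state, which is the encapsulation the article is after.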

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it’s the database that usually becomes the bottleneck. It’s hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-serve), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It’s built on top of the same concepts: Tables, Rows, Columns, Primary Keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured, let’s understand the key components that create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and loads them into the data warehouse. There are many tools out there that can help you fill this gap; SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question, though, remains: how will the warehouse record these changes? This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimensions (SCD).
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in the relational database, and data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a Star schema. A similar structure, with the difference that the dimension tables are normalized, is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy. For example, Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels, and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don’t worry, it will sink in as you start working with Cubes. Let’s see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization that want to report on the most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives, who are interested in trending, mining, forecasting, strategizing, etc., don’t care for that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. it lives for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS either daily or weekly, depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc.
Hence they are often scaled down to something called a Data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
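
    To make the star schema concrete, here is the sales example above expressed as C# entity classes (purely illustrative naming; in a real warehouse these would be tables, with matching DDL). FactSales sits at the center holding the measures, and each foreign key points out to a dimension.

        using System;

        public class DimAgent
        {
            public int AgentKey { get; set; }         // surrogate key
            public string Name { get; set; }
            public string Region { get; set; }
        }

        public class DimDate
        {
            public int DateKey { get; set; }          // surrogate key
            public DateTime Date { get; set; }
            public int Month { get; set; }
            public int Year { get; set; }
        }

        public class FactSales
        {
            public int AgentKey { get; set; }         // FK -> DimAgent
            public int DateKey { get; set; }          // FK -> DimDate
            public decimal SalesAmount { get; set; }  // measure: the 10,000 USD above
            public int Quantity { get; set; }         // another measure
        }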

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this: producer.foo(item => { updateItemInDb(item); insertLog(item) }) // calls the function passed as argument as an item is processed But I'm now wondering if I should use a more "OO" approach: interface IItemObserver { onNotify(Item) } class DBObserver : IItemObserver ... class LogObserver: IItemObserver ... producer.addObserver(new DBObserver) producer.addObserver(new LogObserver) producer.foo() //calls observer in a loop What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern?) Just passing a new function, for example?
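
    On the EDIT: one functional answer is to keep a list of functions and have Subscribe return the function that removes the observer again, so adding and removing needs no interface at all. A C# sketch (names mine):

        using System;
        using System.Collections.Generic;

        public class Producer<T>
        {
            readonly List<Action<T>> observers = new List<Action<T>>();

            // Adding an observer hands back the action that unsubscribes it.
            public Action Subscribe(Action<T> observer)
            {
                observers.Add(observer);
                return () => observers.Remove(observer);
            }

            public void Publish(T item)
            {
                // Copy first: an observer may unsubscribe while being notified.
                foreach (var observer in observers.ToArray())
                    observer(item);
            }
        }

        // Usage, mirroring the example above:
        // var p = new Producer<Item>();
        // var stopDb  = p.Subscribe(item => UpdateItemInDb(item));
        // var stopLog = p.Subscribe(item => InsertLog(item));
        // stopLog();   // removal, without any IItemObserver interface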

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events. Books Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there. Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book. Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools. Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective. Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation. Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms. Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective. Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general. Forthcoming presentations I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC: Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects. Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel. Sinergija 2013, Belgrade, Serbia: Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel. SQL Rally Amsterdam, Netherlands: Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don’t know; I guess I forgot the name I proposed every time right after I sent the proposal. Courses Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well. OK, now you know: no more excuses. Start learning data mining and get the most out of your data!

    Read the article

  • Master Data Services Employees Sample Model

    - by Davide Mauri
    I’ve been playing with Master Data Services quite a lot these last few days and I’m also monitoring the web for all available resources on it. Today I found this freshly released sample available on MSDN Code Gallery: SQL Server Master Data Services Employee Sample Model http://code.msdn.microsoft.com/SSMDSEmployeeSample This sample shows how Recursive Hierarchies can be modeled in order to represent a typical organizational chart scenario where a self-relationship exists on the Employee entity.
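
    The recursive hierarchy in the sample boils down to a self-relationship: each employee row refers to another employee as its manager. A minimal sketch of that shape (names are mine, not the sample's):

        public class Employee
        {
            public int EmployeeId { get; set; }
            public string Name { get; set; }
            public int? ManagerId { get; set; }   // self-reference; null at the top of the org chart
        }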

    Read the article

  • Looking for Cutting-Edge Data Integration: 2010 Innovation Awards

    - by dain.hansen
    This year's Oracle Fusion Middleware Innovation Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Brand new to this year's awards is a category for Data Integration. Think you have something unique and innovative with one of our Oracle Data Integration products? We'd love to hear from you! Please submit today. The deadline for nominations is 5 p.m. PT on Friday, August 6th, 2010, and winning organizations will be notified by late August 2010. What you win: a FREE pass to Oracle OpenWorld 2010 in San Francisco for select winners in each category; recognition by Oracle executives at an awards ceremony held during Oracle OpenWorld 2010 in San Francisco; an Oracle Middleware Innovation Award Winner plaque; 1-3 meetings with Oracle executives during Oracle OpenWorld 2010; feature article placement in Oracle Magazine and placement in an Oracle press release; a customer snapshot and video testimonial opportunity, to be hosted on oracle.com; and a podcast interview opportunity with a senior Oracle executive.

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle. It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about 'being future ready', previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up this preview by announcing the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight, especially, is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the initial lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle's Data Integration 12c product strategy based on key trends from the initial OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products' road map. Brad explained why and how fast time-to-value, high performance and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper: Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market. Following Brad's session, Nick Wagner provided an in-depth review of Oracle GoldenGate's latest features and roadmap. Nick discussed how Oracle GoldenGate's tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity of the product is still a major focus for GoldenGate's development and there will be more news on that front when there is a major release.
After GoldenGate's product strategy session, Denis Gray from the PM team presented Oracle Data Integrator's product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems, to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict management requirements. One of the other important goals we had this year for the Data Integration track at OpenWorld was hearing from our customers. We ended day 1 on just that, with a wonderful ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation. The ceremony was held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and Yalumba Wine Company, the winners in the Data Integration category. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation. Selected for their innovative use of Oracle's Data Integration products, the winners for the Data Integration category are Royal Bank of Scotland and The Yalumba Wine Company. Congratulations!!! Royal Bank of Scotland's Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients' businesses flowing, whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse, providing a real-time data integration solution that feeds transactional data to their analytics system in minutes to enable improved decision making with timely, accurate data for their business users. Oracle Data Integrator's in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. This solution makes it such that changes happening in the analytics systems are available the same day they are deployed on the operational system, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That's outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100 litre barrels. Did you know that “Yalumba” is Aboriginal for “all the land around”? The Yalumba Wine Company is growing rapidly, and was in need of introducing a more modern standard to the existing manufacturing processes to meet globalization demands, overall time-to-market, and better operational efficiency objectives of product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, faster data loading and transformation into their data warehouse with rapid incorporation of new data sources, and a solid, future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle’s Data Integration portfolio. Terrific! In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. On Tuesday the GoldenGate panel included Land O’Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration

    Read the article

  • Teacher demands excessive/unjustified use of Design Patterns

    - by SoboLAN
    I study computer science and I have a class called "Programming Techniques". Its purpose is to teach us good object-oriented design principles. During the semester we have homework assignments: programs that we must write to demonstrate what we've learned. The lab assistant demands that specific design patterns be used in each of these assignments. For example, the current homework is an application used for processing customer orders, and we are required to use either the "Factory Method" or "Abstract Factory" design pattern for it. It gets even worse: at the end of the semester we must write a program (something more complex) that must use at least one creational pattern, at least one structural pattern and at least one behavioural pattern. Is it normal to demand this? I mean, forcing us to design our programs in such a way that a specific design pattern makes sense is just beyond what I consider OK. If I'm a car mechanic and have a huge tool box, then I will use a certain tool from that box if and when the situation demands it. Not more, not less. If my design of the application doesn't demand the use of "Abstract Factory" (for example) at all, then why should I implement it? I'm not sure yet if the senior lecturer agrees with what the lab assistant is demanding, but I want to talk to him about it and I need solid arguments to do so. How should I approach this problem with him? PS: I'm sure there must be a better way to teach us these things. Maybe making us read about 3 design patterns each week, and the next week giving us a test with small but specific programming or architectural situations/problems. The goal in that test would be to identify which design patterns would make sense and how they could be implemented. This way, he can see if we understand them. EDIT: These homework assignments are not just 100-line programs; they have quite a lot of requirements and are fairly complicated. This is the reason we have about 2-3 weeks of deadline for each of them. I agree that practicing this is the best way to learn, but shouldn't smaller programs/applications be used for this? Something just for demonstration purposes, not big programs with lots of requirements/classes/etc.
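
    For reference, this is roughly what the demanded pattern looks like in the homework's own terms: the creator defers the choice of order processor to subclasses through a factory method. All names below are hypothetical, and, as the question argues, the indirection only earns its keep when the concrete processor genuinely varies.

        using System;

        public class Order { public string Country = ""; public decimal Total; }

        public interface IOrderProcessor { void Process(Order order); }

        class DomesticProcessor : IOrderProcessor
        {
            public void Process(Order o) { Console.WriteLine("Domestic order: " + o.Total); }
        }

        class ExportProcessor : IOrderProcessor
        {
            public void Process(Order o) { Console.WriteLine("Export order to " + o.Country); }
        }

        abstract class OrderService
        {
            protected abstract IOrderProcessor CreateProcessor();   // the factory method

            public void Submit(Order order)
            {
                CreateProcessor().Process(order);                   // creator stays generic
            }
        }

        class DomesticOrderService : OrderService
        {
            protected override IOrderProcessor CreateProcessor() { return new DomesticProcessor(); }
        }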

    Read the article

  • Getting a design company to embrace the benefits of good development

    - by Toby
    I know there are already various topics discussing what we can do to get managers to buy into good development practices, but I was wondering if there are any specific things we can do to explain to designers that web development is more than just turning their design into a website. I want to try to push them to design based on progressive enhancement, responsive design and Ajax, but I think there is a trend to stick to print-based design principles, which is understandable as that is their background, but is frustrating to a dev.

    Read the article

  • Books, resources and so on about GUI architecture [on hold]

    - by Moses
    I'm making my first steps in GUI programming. I've had a little experience with GUIs earlier, and I remember it being kind of painful: the code was either coupled or too verbose, with tons of "Listeners". It seems to me that the problem is with me and not with the library I used (Swing). So, could you recommend some books, tutorials or resources where I can find out how to design GUI programs? I emphasize that I'm interested in the architecture, not in how to use the components of some framework (which is what about 90% of the tutorials I've ever seen cover).

    Read the article

  • OOD: All classes at bottom of hierarchy contain the same field

    - by My Head Hurts
    I am creating a class diagram for what I thought was a fairly simple problem. However, when I get to the bottom of the hierarchy, all of the classes contain only one field, and it is the same one. This looks very wrong to me, but the field does not belong in any of the parent classes. I was wondering if there are any suggested design patterns for a situation like this? A simplified version of the class diagram can be found below. Note: fields named differently cannot belong to any other class.

                    +------------------+
                    |     ObjectA      |
                    |------------------|
                    | String one       |
                    | String two       |
                    |                  |
                    +---------+--------+
                              |
                 +------------+------------+
                 |                         |
        +--------+--------+      +--------+--------+
        |    ObjectAA     |      |    ObjectAB     |
        |-----------------|      |-----------------|
        | String three    |      | String four     |
        |                 |      |                 |
        +--------+--------+      +--------+--------+
                 |                        |
        +--------+--------+      +--------+--------+
        |    ObjectAAA    |      |    ObjectABA    |
        |-----------------|      |-----------------|
        | String five     |      | String five     |
        |                 |      |                 |
        +-----------------+      +-----------------+

    ASCII tables drawn using http://www.asciiflow.com/
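
    One commonly suggested way out, sketched under the diagram's own names: since the field cannot live in any parent, give the leaves a small shared interface (or a component held by composition), so code that only cares about that field can treat every leaf uniformly without forcing a bad parent class.

        using System;

        public class ObjectA  { public string One; public string Two; }
        public class ObjectAA : ObjectA { public string Three; }
        public class ObjectAB : ObjectA { public string Four; }

        // The shared field lives behind an interface instead of a parent class.
        public interface IHasFive { string Five { get; } }

        public class ObjectAAA : ObjectAA, IHasFive { public string Five { get; set; } }
        public class ObjectABA : ObjectAB, IHasFive { public string Five { get; set; } }

        static class Client
        {
            // Callers depend on the capability, not on a position in the hierarchy.
            public static void PrintFive(IHasFive x) { Console.WriteLine(x.Five); }
        }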

    Read the article

  • How do you deal with design in Scrum?

    - by Seth
    How do you deal with design in Scrum? Do you still have well-written design documents for each Scrum iteration? Do you just make design notes featuring UML diagrams? Or do you just have well-commented code? Each iteration may involve changing the design, so I just wanted to know how people capture this so that new developers have an easy job of understanding the domain and getting on board as rapidly as possible.

    Read the article

  • Why avoid Java Inheritance "Extends"

    - by newbie
    Good day! James Gosling said, "You should avoid implementation inheritance whenever possible," and to instead use interface inheritance. But why? How can we avoid inheriting the structure of an object using the keyword "extends", and at the same time make our code object oriented? Could someone please give an object-oriented example illustrating this concept in a scenario like "ordering a book in a bookstore"? Thank you in advance.
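
    A sketch of the idea (in C#, though the question is about Java; the concepts map one-to-one and all names are illustrative): inherit contracts (interfaces) and reuse behavior by composition, so nothing needs to extend a concrete class.

        using System;

        public interface IPaymentMethod { void Charge(decimal amount); }
        public interface INotifier { void Notify(string message); }

        public class CreditCard : IPaymentMethod
        {
            public void Charge(decimal amount) { /* talk to the payment gateway */ }
        }

        public class EmailNotifier : INotifier
        {
            public void Notify(string message) { /* send the email */ }
        }

        public class BookOrder
        {
            readonly IPaymentMethod payment;   // composed, not inherited
            readonly INotifier notifier;

            public BookOrder(IPaymentMethod payment, INotifier notifier)
            {
                this.payment = payment;
                this.notifier = notifier;
            }

            public void Place(string title, decimal price)
            {
                payment.Charge(price);
                notifier.Notify("Ordered '" + title + "'");
            }
        }

        // Swapping in PayPal or SMS means writing a new implementation, not extending one:
        // new BookOrder(new CreditCard(), new EmailNotifier()).Place("Dune", 9.99m);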

    Read the article
