Search Results

Search found 60142 results on 2406 pages for 'data formatting'.

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow and set the Data Control Scope to isolated. This blog shows a small sample to demonstrate this. In this sample we will create a Pie and a Bar graph using the same BAM data control, such that both components use the same data control but have isolated scope. The sample can be downloaded from Sample1.zip.
    Set-up: Create a BAM data control using the Employees DO (sample).
    Steps:
    1. Right-click the View Controller project and select "New > ADF Task Flow".
    2. Check "Create Bounded Task Flow", give the task flow (TF) a meaningful name (e.g. EmpPieTF.xml) and click "OK" (screenshot: CreateTF.bmp).
    3. From the Components Palette, drag and drop "View" into the task flow diagram and give the view a meaningful name.
    4. Double-click the view and click "OK" in the "Create New JSF Page Fragment" dialog.
    5. From "Data Controls", drag and drop "Employees > Query" onto this jsff page as "Graph > Pie" (Pie: Sales_Number, Slices: Salesperson).
    6. Repeat steps 1 through 4 for another task flow (e.g. EmpBarTF).
    7. From "Data Controls", drag and drop "Employees > Query" onto that jsff page as "Graph > Bar" (Bars: Sales_Number, X-axis: Salesperson).
    8. Open the task flow created in step 2.
    9. In the Structure pane, right-click "Task Flow Definition - EmpPieTF".
    10. Click "Insert inside Task Flow Definition - EmpPieTF > ADF Task Flow > Data Control Scope" and click "OK" (screenshot: TFDCScope.bmp).
    11. Select the "Data Control Scope" and, in the Property Inspector's General section, change the data control scope from Shared to Isolated (the resulting task flow XML is sketched below).
    12. Repeat steps 8 through 11 for the second task flow.
    13. Create a new jspx page (e.g. Main.jspx), drag and drop both task flows ("EmpPieTF" and "EmpBarTF") onto it as regions, and surround them with panel components as needed.
    14. Run the page Main.jspx (screenshot: MainPage.bmp).
    Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component has a separate instance of the data control.
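
    For reference, after steps 8 through 11 the bounded task flow definition contains a fragment roughly like the one below. This is a sketch only: the exact file is generated by JDeveloper, and the view id and page path here are illustrative.

        <task-flow-definition id="EmpPieTF">
          <default-activity>EmpPieView</default-activity>
          <data-control-scope>
            <isolated/>
          </data-control-scope>
          <view id="EmpPieView">
            <page>/EmpPieView.jsff</page>
          </view>
        </task-flow-definition>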

    Read the article

  • Xbox 360 formatting for MP4 h.264?

    - by Kayle
    For the life of me, I can't get the Xbox 360 to recognize anything other than a WMV. It's fully updated with Xbox Live and I even added all the video apps (Zune, Netflix, etc.) just to see if it would force some sort of codec update. I'm using Handbrake, at the moment, to try and convert an mkv with subtitles. Handbrake burns the subtitles into the video for me, which is important. I can't figure out what is missing though, as the Xbox refuses to acknowledge my MP4s. Here's the VLC codec info output: As far as I can tell, this is EXACTLY what the Xbox supports. I don't want to convert every video twice by throwing it through Windows Movie Maker just to get it to WMV. None of the converters mentioned above output to WMV, and I don't like WMV since it's not as universal as MP4. I've tried changing the container name manually to M4V, MOV, and AVI to see if I can trick the Xbox... no beans. The video displays perfectly in WMP, of course. If I convert it to WMV, suddenly it works fine. But I don't find this acceptable, as I know the Xbox is supposed to support other file types. If anyone knows why my Xbox won't play anything but WMVs, I would be very appreciative to know why! It's driving me nuts. The only other information I can think to include is that it's about 3 years old and of the cheapest variety (Xbox Arcade). I've added a hard drive and Xbox Live, and have thoroughly updated it. Maybe there's a specific video update I'm unaware of??

    Read the article

  • How to improve the HTML formatting in Evolution mail client

    - by Tom
    I have a question about viewing HTML emails in the Evolution mail client. Basically, I am receiving some emails that look lovely in Thunderbird but not in Evolution, because the HTML rendering of Evolution isn't as advanced. Does anyone know how to improve the HTML rendering of Evolution? e.g. a plugin, tip, code patch, etc... The closest I've got is to right-click the email, "Save As...", save it as an html file, then open it in Firefox. Not exactly streamlined! What emails can't it display well? We use the Subversion revision control system, which is set up to send an email whenever someone commits, via svnnotify, all nicely coloured via the --handler HTML::ColorDiff -d parameter. When Evolution fails to use the colours, I find it very hard to read the raw diff.

    Read the article

  • Excel 2007 Conditional Formatting is not properly using custom formula provided

    - by Charles
    In Excel 2007, I want to conditionally color a row if it is odd numbered, and then vary the coloring depending on whether a specific cell (in column E) in that row contains a number (green) or is empty (red). E.g. if E15 has a value of 2 and E13 has no entry, I would expect row 15 to be green and row 13 to be red. My two formulas are: To color red: =IF((MOD(ROW(),2) = 1),NOT(ISNUMBER(INDIRECT("$E$"&ROW()))), FALSE) To color green: =IF((MOD(ROW(),2) = 1),ISNUMBER(INDIRECT("E"&ROW())), FALSE) If I paste these formulas into cells on the worksheet I get the expected values. For row 15 the "red" equation is false and the "green" equation is true. For row 13 the "red" equation is true and the "green" equation is false. However, if I use these formulas in the conditional formatting "use a formula" feature, all of my rows are red. Any thoughts?
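
    For comparison, whole-row rules of this kind are commonly written against the first row of the range they are applied to, with a relative row reference to column E. A sketch, assuming the rules are applied to a range starting at row 1 (not taken from the question):

        Red:   =AND(MOD(ROW(),2)=1, NOT(ISNUMBER($E1)))
        Green: =AND(MOD(ROW(),2)=1, ISNUMBER($E1))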

    Read the article

  • How to apply formatting to particular font type in MS word 2007

    - by Mirage
    I have a Word document with 100 pages. In that doc I have a different-language font on one line and then the English translation on the next line. Now I want to format the foreign-language font only, e.g. I want to change the color of that font. How can I do that without changing the color of the English font? I don't want to do it manually by selecting each line and applying the setting.

    Read the article

  • Conditional Formatting of a Cell Based on Color Value in that Cell in Excel

    - by Otaku
    Is it possible to format a cell based on one component of the RGB value inside it? For example, let's say I have 3 cells - A1, A2, A3. In A1, I want the cell color to be a shade of red determined by the number in the cell, anywhere between 0 and 255 (so if it is 128, it should be like half red). For that given cell, the other values will just be 0, 0 (so in this case, zero green and zero blue). Is this possible?
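
    One way this mapping is often implemented is with a short VBA macro rather than a built-in conditional format. The sketch below is illustrative only (the range is assumed, not taken from the question): it shades each cell by the red component read from its own value.

        Sub ShadeByRedComponent()
            Dim c As Range
            ' Shade A1:A3 so that a value of 0 is black and 255 is full red.
            For Each c In ActiveSheet.Range("A1:A3")
                If IsNumeric(c.Value) Then
                    If c.Value >= 0 And c.Value <= 255 Then
                        c.Interior.Color = RGB(CInt(c.Value), 0, 0)
                    End If
                End If
            Next c
        End Sub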

    Read the article

  • formatting before md device creation in RAID5

    - by kumar
    Consider you are creating a RAID5 device with three drives: mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1 After issuing this command, I can see the progress of the md device creation using cat /proc/mdstat. While that is still in progress, can I create a file system, say ext2, on the md0 device, like: mkfs.ext2 /dev/md0? Actually I am able to do this, and want to confirm whether doing it before 100% completion of the md device creation is CORRECT?

    Read the article

  • Conditional formatting

    - by djerry
    I have a rather annoying format I need to implement. There are 2 cells, both containing a date (cell A and cell B). B contains a date (24-06-2011). Cell B should be colored if cell A falls within a date range derived from cell B: if cell A lies between cell B - 11 days and cell B - 7 days, the rule should be triggered. So with numbers: if B is 24-06-2011 then the range (which is not in any cell in the spreadsheet) is 13-06-2011 until 17-06-2011. If the date in cell A (let's say 14-06-2011) is in that range, cell B should be colored. Any ideas?
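
    For reference, one formula that expresses this range test, assuming the two dates sit in A1 and B1 (the cell references are illustrative, not from the question):

        =AND($A1 >= $B1 - 11, $A1 <= $B1 - 7)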

    Read the article

  • Merging and re-formatting paragraphs in Microsoft Word 2007

    - by thkala
    After a copy/paste mishap in Microsoft Word 2007, I ended up with text looking like this: This line breaks up here continues here, and so on here, when it should all be in a single line without all the random whitespace. I confirmed that there are paragraph separators and extra whitespace between each line - probably due to hard-coded newlines in the original source. Is there a (preferably easy) way to merge paragraphs in Microsoft Word? Is there a way to re-format a paragraph so that extraneous whitespace is removed? I can change the flush style, but the whitespace remains. I (obviously?) do not have any experience with Word, being more of a TeX person, but I have been searching Google and crawling the menus for a few hours and I have yet to find a solution...

    Read the article

  • QuickBooks 2008 template formatting

    - by rotard
    I am trying to create an invoice template, and need to display the first line of the description in bold, with the rest of the description UNDER it, and NOT bold. Is there any way to do this, or do I need to look for a third party solution?

    Read the article

  • Formatting PHP Code within Vim

    - by gacrux
    I'm currently using Vim as a lightweight IDE. I have the NERDTree, bufexplorer, supertab, and ctags plugins, which do almost everything I want. The only big thing missing for me is auto code formatting. I'm working with some messy PHP code which has inconsistent indenting and code formatting; ideally I could highlight the code I want formatted (whole files would be fine too) and run a command to tidy it. Does anybody have a nice solution for this working in Vim?

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur. Starting with Oracle Data Integrator 11.1.1.6.0, ODI is offering a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data. So, what is Big Data? Big Data includes structured data (data from relational data stores and XML data stores), semi-structured data (data from weblogs) and unstructured data (data from text blobs and images). Traditionally, business decisions are based on the information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has challenges: the volume of data, the velocity of data (the high rate at which data is generated) and the variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework which is a highly scalable, fault tolerant system for storage and processing of large amounts of data. Hadoop provides two key services: distributed and reliable storage called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technology continue to evolve rapidly, hence it is highly recommended to follow information on the web to keep up with the latest developments. Oracle's vision is to provide a comprehensive solution to address the challenges faced by Big Data. Oracle provides the necessary hardware, software and tools for processing Big Data. The Oracle solution includes: Big Data Appliance, Oracle NoSQL Database, the Cloudera distribution for Hadoop, Oracle R Enterprise (R is a statistical package which is very popular among data scientists), the ODI solution for Big Data, and Oracle Loader for Hadoop for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of processing Big Data seamlessly in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):
    - IKM File to Hive (LOAD DATA): load unstructured data from a file (local file system or HDFS) into Hive.
    - IKM Hive Control Append: transform and validate structured data on Hive.
    - IKM Hive Transform: transform unstructured data on Hive.
    - IKM File/Hive to Oracle (OLH): load processed data in Hive to Oracle.
    - RKM Hive: reverse engineer Hive tables to generate models.
    Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive; in the example below, two source Hive tables are joined and mapped to a target Hive table. The Hive Transform KM facilitates processing of semi-structured data in Hive; in the example below, the data from a weblog is processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product; ODI greatly enhances the OLH capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options, and ODI seamlessly invokes OLH when executing the scenario. In the example below, an HDFS file is mapped to a table in Oracle. Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
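
    As an illustration of what the File-to-Hive loading KM is built around, the underlying statement is plain HiveQL of this shape (the path, table and partition names below are made up):

        LOAD DATA INPATH '/user/logs/weblog-2012-04-01.log'
        INTO TABLE weblogs PARTITION (log_date = '2012-04-01');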
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview

    Read the article

  • Data recovery on a data HDD (no OS)

    - by aCuria
    I am helping a family member with a dead hard disk. It is a Seagate 200 GB 3.5" HDD in one of those old-school external enclosures. The problem was that Windows failed to detect the hard disk when plugged in through USB. I removed the hard disk from its enclosure and plugged it into my desktop PC. The BIOS does detect it upon POST, but unfortunately Windows 7 refuses to boot: it gets stuck on the loading screen with the glowing Windows logo, and Safe Mode doesn't help either. What options do I have before going for some professional data recovery? Edit: Someone modified the title to something completely different from what I was asking; I just changed it back. 1) There are 2 HDD drives: disk A (dead) and disk B (my OS disk). 2) When B is connected to my system, everything works fine. 3) When A and B are both connected, the machine fails to boot: it POSTs fine, but Windows won't load. 4) A has NO OS, it's PURE data. It came from an EXTERNAL HDD enclosure which doesn't belong to me, and I'm trying to do data recovery.

    Read the article

  • Data Governance 2010 Conference in San Diego

    - by Tony Ouk
    The Data Governance Annual Conference is one of the world's most authoritative and vendor-neutral events on Data Governance and Data Quality. The conference will focus on the "how-tos", from starting a data governance and stewardship program to attaining data governance maturity, with specific topics on MDM. This year's event will be hosted June 7 through June 10 in San Diego, California. For more information, including registration details, visit the Data Governance 2010 Conference website.

    Read the article

  • How to search for newline or linebreak characters in Excel?

    - by Highly Irregular
    I've imported some data into Excel (from a text file) and it contains some sort of newline characters. It looks like this initially: If I hit F2 (to edit) then Enter (to save changes) on each of the cells with a newline (without actually editing anything), Excel automatically changes the layout to look like this: I don't want these newline characters here, as they mess up data processing further down the track. How can I do a search for these to detect more of them? The usual search function doesn't accept an Enter character as a search character.
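
    For what it's worth, a formula of this shape will flag cells that contain such a break, assuming the imported text starts in A1 (in Excel a line break inside a cell is character code 10; this is a sketch, not from the question):

        =ISNUMBER(FIND(CHAR(10), A1))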

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today's data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations' critical and evolving data integration requirements under 3 key themes:
    - Future-Ready Solutions: supporting current and emerging initiatives
    - Extreme Performance: even higher performance than ever before
    - Fast Time-to-Value: higher IT productivity and simplified solutions
    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator's strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications' latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.
    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE 15.7, MySQL NDB Cluster 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.
    Tight integration between Oracle Data Integrator 12c, Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low-overhead, real-time change data capture completely within Oracle Data Integrator Studio without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - It delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle's data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle's fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.
    With these many new features, Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new data integration offering. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT and Rittman Mead, will join us in launching the new release. Resource kits: Meet Oracle Data Integration 12c; Discover what's new with Oracle GoldenGate 12c. The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, and additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Stay Connected: Oracle Newsletters

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx The Building a Data Mart with Pentaho Data Integration video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (and I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the Star Schema, Slowly Changing Dimension types, etc., so I suggest, prior to watching this course, skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph's excellent The Data Warehouse Toolkit. By the way, the author expands beyond using Pentaho alone to MySQL and MonetDB, which is a real icing on the cake! Indeed, I would even suggest the name of the course should be 'Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the vi editor and the Bash command shell, but it seems that similar requirements would also apply to the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I read it is very well suited for fast querying of big volumes of data thanks to having a column store (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition should be more than anyone needs to learn it, or even go into a POC. MonetDB can be downloaded (it exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schema otherwise, one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. However, the user of the course will hear some accent, especially if one's mother tongue is English, but I got over it in a few chapters. I liked the rate at which the material is presented; it makes me feel I paid for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering and is very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to know it more; Data Warehousing, despite all the Big Data tool offerings (Hive, Sqoop, Pig on Hadoop), still rocks with the traditional tools; chapters 2 to 6 were the most fun to me, with chapter 8 being the most difficult. In terms of closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is for a beginner to intermediate level ETL/DW developer, but one would need to learn more on Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable about a data structure, I mean that: The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object. However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays. However, sometimes you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level. For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations. Here is an illustration: However, let's say we implement a lazy mechanism that stores all sorts of updates, such as Add, Set, and others, in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item in the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache, which is a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it; you will need to ensure exclusive access to the internal state, of course. This isn't a question about the efficacy of such a data structure. It's a more general question. Is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
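
    A minimal sketch of the lazy variant described above, written here in Java with hypothetical names (it only illustrates the queued-updates idea and the internal switch on read; concurrency control is reduced to a single synchronized flush):

        // Immutable linked list of queued updates, shared structurally between versions.
        final class Pending {
            final int index;
            final Object value;
            final Pending next;
            Pending(int index, Object value, Pending next) {
                this.index = index; this.value = value; this.next = next;
            }
        }

        final class ArrayVector {
            private Object[] items;      // base array; replaced lazily on first read
            private Pending pending;     // updates not yet applied (newest first)
            private int pendingCount;

            ArrayVector(Object[] items) { this(items, null, 0); }

            private ArrayVector(Object[] items, Pending pending, int pendingCount) {
                this.items = items; this.pending = pending; this.pendingCount = pendingCount;
            }

            // "Persistent" update: O(1), no array allocation or copy yet.
            ArrayVector set(int index, Object value) {
                return new ArrayVector(items, new Pending(index, value, pending), pendingCount + 1);
            }

            Object get(int index) {
                flush();                 // apply queued updates, if any
                return items[index];
            }

            // The insidious part: mutate this object's internal state under the hood.
            private synchronized void flush() {
                if (pending == null) return;
                Object[] fresh = items.clone();              // single allocation and copy
                Pending[] ops = new Pending[pendingCount];
                int i = pendingCount;
                for (Pending p = pending; p != null; p = p.next) ops[--i] = p;  // oldest first
                for (Pending op : ops) fresh[op.index] = op.value;
                items = fresh;                               // switch the internal array
                pending = null;                              // empty the cache
                pendingCount = 0;
            }
        }

    A version produced by set shares the base array and the queued updates with its parent; flushing one version replaces only that version's internal references, so the parent still observes its own unchanged state.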

    Read the article

  • Sabre Manages Fast Data Growth with Oracle Data Integration Products

    - by Irem Radzik
    Last year at OpenWorld we announced Sabre Holdings as a winner of the Fusion Middleware Innovation Awards. The Sabre team did an excellent job at leveraging cutting-edge technologies for managing the rapid data growth and exponential scalability demands they have experienced in the travel industry. Today we announced the details and specific benefits of Sabre's new real-time data integration solution in a press release. Please take a look if you haven't seen it yet: Sabre Holdings Deploys Oracle Data Integrator and Oracle GoldenGate to Support Rapid Customer Growth. There are 3 different areas of benefit Sabre achieved by using Oracle Data Integration products: it manages a 7X increase in data sources for the enterprise data warehouse, reduced infrastructure complexity, and decreased time to market for new products and services by 30 percent. This simply shows that using the latest technologies helps companies innovate robust solutions against today's key data management challenges. And the benefit of using a next-generation data integration technology is seen not only in IT operations, but also on the business side. A better data integration solution for the enterprise data warehouse delivered the platform they need to accelerate how they service their customers, improving their competitive advantage. Tomorrow I will give another great example of innovation with next-generation data integration from Oracle. We will be discussing the Fusion Middleware Innovation Awards 2012 winners and their results using Oracle's data integration products.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue. Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.
    Example queries: Simple: for a given XYZ post code, what is the average house price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: post code, number of bedrooms, total area, and other deeper insights like building type, year built, features)? In addition to the average price, the application should support other property info: maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be: What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?
    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW related, or something else) this problem (dynamic queries on historic data) belongs to, so that I can do further exploration.
    My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues. DB design: as I understand it, an RDBMS works well if you know the data model at design time; I expect the attributes of a property or other entity (user) that I am going to bring in to evolve quickly, hence maintenance would be an issue, and as I am going to have multiple users executing queries at the same time, performance would be a bottleneck. Other options like a graph DB (http://www.tinkerpop.com/) seem a bit complex (they are good, but using those general-purpose tools feels like doing assembly programming to solve my problem). Big Data related solutions are for analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listing or similar portals.)
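
    For instance, the first "simple" query above, written against a hypothetical sales table (the schema and column names are made up for illustration):

        SELECT AVG(sale_price)
        FROM   house_sales
        WHERE  post_code = 'XYZ'
          AND  bedrooms  = 3;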

    Read the article
