Search Results

Search found 60107 results on 2405 pages for 'data oriented'.

  • How can I move towards the Business Intelligence / data mining fields from software development? [closed]

    - by user1758043
    I am working as a Python developer with Django. I also do some web scraping and build spiders and bots. From there I want to make my move to Business Intelligence, and I just want to know how I can move into that field, because companies are not going to hire me in that field directly. I was thinking of first working as a database developer in SQL and then seeing where to go from there, but I want advice from you so that I can start learning the right things and change jobs with that goal in mind. Here in my area there are plenty of jobs in all these fields, but I need to know how to make the transition and what I should learn before making it. Jobs are plentiful here, so if I know my stuff, getting a job is a piece of cake because they don't have enough people; the same jobs keep getting advertised for months and months.

    Read the article

  • Map data structure in Pacman

    - by Sam Fisher
    I am trying to make a Pacman game in C# using GDI+. I have done some basic work, and I have previously replicated games like copter-it and Minesweeper, but I am confused about how to implement the map in Pacman, I mean which data structure to use so I can use it for moving AI-controlled objects and checking collisions with walls. I thought of a 2D array of ints, but that didn't make sense to me. Looking for some help, thanks.
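
    A minimal sketch of one common approach, assuming a tile-based map: store the maze as a 2D array of tile types and convert pixel positions to tile coordinates for wall collision checks. The enum values, tile size, and method names below are illustrative, not from the original question.

        public enum Tile { Empty, Wall, Pellet }

        public class Maze
        {
            public const int TileSize = 16;              // pixels per tile (assumed)
            private readonly Tile[,] tiles;

            public Maze(int cols, int rows) { tiles = new Tile[cols, rows]; }

            public void Set(int col, int row, Tile t) => tiles[col, row] = t;

            // Convert a pixel position (e.g. Pacman's centre) to a tile and test for a wall.
            public bool IsWallAtPixel(int x, int y) =>
                tiles[x / TileSize, y / TileSize] == Tile.Wall;

            // Pacman or a ghost may step in a direction only if the destination tile is not a wall.
            public bool CanMove(int col, int row, int dCol, int dRow) =>
                tiles[col + dCol, row + dRow] != Tile.Wall;
        }

    The same grid doubles as the AI's world model: ghosts can path-find with a BFS over the non-wall tiles, which is why a 2D array of tile (or int) values is usually a reasonable fit here.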

    Read the article

  • How to retrieve data from a corrupted volume

    - by explorex
    Hi, my Ubuntu 10.10 just crashed, probably due to a hardware error (in the end I was getting errors like Unknown filesystem ..... grub> .., and it dropped to the GRUB console before I could take any other action), and I reinstalled the same version from a USB stick. I had Ubuntu installed with the ext4 file system and I also have the same filesystem in the same hard disk on a different drive. When I try to access my previous filesystem, I get errors: Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda6, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so I had some important files in the previous volume; I don't know how to retrieve them. And what are the chances that I would get the same outcome (hardware error)? Please help me!

    Read the article

  • Wiped data, and folders duplicated into files

    - by Kaustubh P
    Something weird happened today, and I don't know how. Within a folder, all folders have a file by the same name, with a colon appended to it. And all the files from the innermost directories in my home have been dumped into ~, with a size of 0 bytes. I have not executed any scripts or anything. I was just checking out some Easter eggs, namely the gegls from outer space and free the fish, then was away from the computer and was logged out because of the screensaver. I couldn't log back in with my password, so I just reset the PC, and while booting, the PC went into a drive check. BUT, IIRC, I saw the duplicate "folder files" before I had logged out, so that's not the reason! All the files have a timestamp of 14 Jan. Also, the contents of my Eclipse folder have been dumped into ~, right down to the jars and ini files. HELP!

    Read the article

  • Create associations between pieces of information

    - by Andrea Girardi
    I deployed a project some days ago that extracts medical articles using the results of a questionnaire completed by a user. For instance, if I state on the questionnaire that I'm affected by type 2 diabetes and I'm a smoker, my algorithm extracts all articles related to diabetes, bubbling up the articles containing information about type 2 diabetes and smoking. Basically we created a list of topics and, for every topic, we define a kind of "guideline" that allows information to be extracted and ordered for a user. I'm quite sure there are better ways to relate two pieces of content, but I was not able to find them. Could you suggest a model, algorithm, or paper that would help me understand this kind of problem better and find a faster, more accurate way to extract information for a user?
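
    A minimal sketch of one simple baseline for this, assuming each article and each questionnaire answer can be reduced to a set of topic tags: score articles by weighted tag overlap and sort descending. The class and property names are illustrative, not from the original project.

        using System.Collections.Generic;
        using System.Linq;

        public class Article
        {
            public string Title { get; set; } = "";
            public HashSet<string> Topics { get; } = new();   // e.g. "diabetes-type-2", "smoking"
        }

        public static class Recommender
        {
            // userTopics maps a topic tag to a weight (direct answers could weigh more than inferred ones).
            public static IEnumerable<Article> Rank(IEnumerable<Article> articles,
                                                    IDictionary<string, double> userTopics) =>
                articles
                    .Select(a => new
                    {
                        Article = a,
                        Score = a.Topics.Sum(t => userTopics.TryGetValue(t, out var w) ? w : 0.0)
                    })
                    .Where(x => x.Score > 0)
                    .OrderByDescending(x => x.Score)
                    .Select(x => x.Article);
        }

    If plain tag overlap turns out to be too coarse, the usual next steps are TF-IDF weighting over the article text or a topic model, but the guideline-per-topic idea above maps quite directly onto a weighted tag dictionary.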

    Read the article

  • Recover data in Linux removed with rm

    - by user3717896
    Today I deleted my home directory by mistake with: sudo rm -rf * And when I use extundelete I get this message: root@ubuntu:~# sudo extundelete --restore-directory /home/hamed/ /dev/sda2 WARNING: Extended attributes are not restored. Loading filesystem metadata ... 746 groups loaded. Loading journal descriptors ... 29931 descriptors loaded. Searching for recoverable inodes in directory /home/hamed/ ... 498 recoverable inodes found. Looking through the directory structure for deleted files ... 498 recoverable inodes still lost. No files were undeleted. Why can't it recover anything? Can anyone help me get back my Desktop, Documents, etc.? I have Ubuntu 14.04.

    Read the article

  • Trying to recover deleted Ubuntu partition

    - by user110984
    I made a mistake in logging into my 200 GB Ubuntu partition and could not access GRUB after that. Using a live CD I then ran Boot-Repair and apparently deleted the partition, I guess because I ran it from my 70 GB Windows partition. I can send the results of boot_info from before that and of Boot-Repair. Then I ran TestDisk, which apparently found only /dev/sda - 320 GB / 298 GiB - WDC WD3200BEVT-22A23T0. (Was there any more I could have done with TestDisk? I looked at the TestDisk_Step_By_Step example and found no way forward, given that no other partitions turned up.) I have run gpart and found this: /sda1 - 15 GB, /sda2 - system reserved, /sda3 - 70.15 GB, /sda4 - extended 212.84, unallocated - 209.10, /sda5 - unknown 3.74. I have been told I can recover the partition using gparted's Rescue start end command, but I don't know what to enter for start and end. [EDIT: TestDisk Deeper Search stated that "the following partitions can't be recovered" and listed a 220-GB Linux partition 6 times. Then it stated that "The current number of heads per cylinder is 255 but the correct value may be 128" and that I could try to change it in the Geometry menu (because apparently these are overlapping partitions). So should I do that?]

    Read the article

  • Master Data Management for Product Data

    In this AppsCast, Hardeep Gulati, VP of PLM and PIM Product Strategy, discusses the benefits companies are getting from Product MDM, gives more details about the Oracle Product Hub solution and its progress, and covers where we are going from here.

    Read the article

  • What is meant by a primitive data type?

    - by Appy
    My understanding of a primitive data type is that it is a data type provided by a language implicitly (the others are user-defined classes), so different languages have different sets of data types which are considered primitive for that particular language. Is that right? And what is the difference between a "basic data type" and a "built-in data type"? Wikipedia says a primitive data type is either of the two. PS: Why is the "string" type considered primitive in SNOBOL4 but not in Java?

    Read the article

  • In Search of a Data Structures and Algorithms Project Title Based on Topics

    - by Salehin Suhaimi
    As the title says, my lecturer gave me a project that I need to finish in 3 weeks, before the final semester exams, so I thought I would start now. The requirement is to "build a simple program that has a GUI, based on all the chapters that we've learned." But I'm stuck on WHAT program I should build. Any idea for a program related to the chapters I've covered? Any input will help: lists, array lists, linked lists, vectors, stacks, queues, ADTs, hashing, binary search trees, AVL trees. That's about all I can remember. Any idea where I can start looking?

    Read the article

  • Why avoid Java Inheritance "Extends"

    - by newbie
    Good day! James Gosling said "You should avoid implementation inheritance whenever possible" and instead use interface inheritance. But why? How can we avoid inheriting the structure of an object through the keyword "extends" and at the same time keep our code object oriented? Could someone please give an object-oriented example illustrating this concept in a scenario like "ordering a book in a bookstore"? Thank you in advance.
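
    A minimal sketch of the idea behind the quote, using the bookstore scenario: put the behaviour that varies behind small interfaces and compose the store from them, instead of extending a concrete base class. It is shown in C# to match the other sketches on this page; in Java the same shape uses implements. All type names are illustrative.

        using System;

        public class Book
        {
            public string Title { get; }
            public decimal ListPrice { get; }
            public Book(string title, decimal listPrice) { Title = title; ListPrice = listPrice; }
        }

        public interface IPricingPolicy { decimal PriceFor(Book book); }
        public interface IOrderNotifier { void OrderPlaced(Book book, string customer); }

        // Variations live behind interfaces rather than in subclasses of BookStore.
        public class MemberDiscountPricing : IPricingPolicy
        {
            public decimal PriceFor(Book book) => book.ListPrice * 0.9m;
        }

        public class EmailNotifier : IOrderNotifier
        {
            public void OrderPlaced(Book book, string customer) =>
                Console.WriteLine($"Mail to {customer}: your copy of {book.Title} is on its way.");
        }

        // BookStore is composed from the interfaces; changing pricing or notification
        // behaviour never requires extending a concrete class.
        public class BookStore
        {
            private readonly IPricingPolicy pricing;
            private readonly IOrderNotifier notifier;

            public BookStore(IPricingPolicy pricing, IOrderNotifier notifier)
            {
                this.pricing = pricing;
                this.notifier = notifier;
            }

            public decimal Order(Book book, string customer)
            {
                decimal price = pricing.PriceFor(book);
                notifier.OrderPlaced(book, customer);
                return price;
            }
        }

    A caller wires the pieces together, e.g. new BookStore(new MemberDiscountPricing(), new EmailNotifier()).Order(book, "Alice"), and tests can pass fake implementations of the two interfaces.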

    Read the article

  • Am I sending large amounts of data sensibly?

    - by Sofus Albertsen
    I am about to design a video conversion service that is scalable on the conversion side. The architecture is as follows: a webpage for video upload; when the upload is done, a message gets sent out to one of several resizing servers; the server locates the video, saves it on disk, and converts it to several formats and resolutions; the resizing server then uploads the output to a content server and messages back that the conversion is done. Messaging is something I have covered, but right now I am transferring files via FTP, and I wonder if there is a better way. Is there something faster or more reliable? All the servers will be sitting on the same gigabit switch or a neighboring switch, so fast transfer is expected.

    Read the article

  • Using replacements to get possible outcomes and then search through a HUGE amount of data

    - by Samuel Cambridge
    I have a database table holding 40 million records (table A). Each record has a string a user can search for. I also have a table with a list of character replacements (table B), i.e. i = Y, I = 1, etc. I need to be able to take the string a user is searching for, iterate through each letter, and create an array of every possible outcome (the user's string, then each outcome with the alternative letters used). I need to check for alternatives on both lower- and uppercase letters in the word. A search string can be no longer than 10 characters. I'm using PHP and a MySQL database. Does anyone have any thoughts / articles / guidance on doing this in an efficient way?
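
    A minimal sketch of the expansion step, assuming the replacements are held in memory as a map from a character to its alternatives. The original question is PHP/MySQL; the logic is shown in C# to match the other sketches here and translates directly. The names are illustrative.

        using System.Collections.Generic;

        public static class VariantExpander
        {
            // replacements maps a character to alternative spellings, e.g. 'i' -> { 'Y' }, 'I' -> { '1' }.
            public static List<string> Expand(string query, IReadOnlyDictionary<char, char[]> replacements)
            {
                var results = new List<string> { "" };
                foreach (char c in query)
                {
                    var options = new List<char> { c };
                    if (replacements.TryGetValue(c, out var alts)) options.AddRange(alts);

                    var next = new List<string>(results.Count * options.Count);
                    foreach (string prefix in results)
                        foreach (char option in options)
                            next.Add(prefix + option);
                    results = next;
                }
                return results;   // the original string plus every substituted variant
            }
        }

    With at most 10 characters the variant count stays manageable unless many letters have several alternatives. An approach that avoids the expansion entirely is to normalise both the stored strings and the query to one canonical form (apply each replacement one way) and index that normalised column, so each search becomes a single lookup.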

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this: we're working on a project that is essentially a data entry system for a production line, with heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy-duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end user that a thick client can (citing things such as drag and drop, rapid auto-entry, data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage deployment and connectivity back to the central data server than a web app would, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any modern data entry systems in production use that are being developed as web apps? Appreciate any feedback. Regards, Rob.

    Read the article

  • Pulling Data out of an object in Javascript

    - by PerryCS
    I am having a problem retrieving data out of an object passed back from PHP. I've tried many different ways to access this data and none work. In Firebug I see the following (it looks nicer in Firebug; I tried to make this look as close to Firebug as possible): results Object { data="{"formName":"form3","formData":"data goes here"}", phpLiveDebug="<...s: 198.91.215.227"} data "{"formName":"form3","formData":"data goes here"}" phpLiveDebug "<...s: 198.91.215.227" I can access phpLiveDebug no problem, but the data portion is an object. I have tried the following... success: function(results) { //$("#formName").val(results.data.formName); //$("#formName").val(results.data[0].formName); //$("#formName").val(results.data[0]); //$("#formName").val(results.data[1]); //$("#formName").val(results.data[0]["formName"]); var tmp = results.data[formName]; alert("!" + tmp + "!"); $("#formName").val(tmp); $("#jqueryPHPDebug").val(results.phpLiveDebug); } This line works in the example above... $("#jqueryPHPDebug").val(results.phpLiveDebug); but... I can't figure out how to get at the data inside the results.data portion... as you can see above, I have been trying different things, and more not even listed there. I was really hoping this line would work :) var tmp = results.data[formName]; But it doesn't. So, after many days of reading and tinkering, my solution was to rewrite it to return data similar to the phpLiveDebug, but then I thought... it's got to be something simple I'm overlooking... Thank you for your time. Please try and explain why my logic (my horrible attempts at trying to figure out the proper method) above is wrong if you can?

    Read the article

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, which each might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view. For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language: course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault If we do that, we cannot easily bind to any value in one given language, only to the collection of language values such as in an ItemsControl. <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author --> Do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language: course1.GetLanguage(CurrentLanguage).Books.First.Author If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other. <TextBox Text={Binding Author} /> <!-- good --> <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author --> Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages. Even non-string properties would be in multiple languages. Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it would seem to be a somewhat common problem to design for. Note: This is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.
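
    A minimal sketch of one middle-ground model, assuming a dedicated multi-language string type: language-neutral properties stay plain strings, while language-specific ones use a LocalizedString that exposes an indexer per language, which WPF can address with indexer binding syntax (e.g. {Binding Title[en]}). All names here are illustrative, not from the original question.

        using System.Collections.Generic;

        // Holds one value per language; language-neutral properties remain ordinary strings.
        public class LocalizedString
        {
            private readonly Dictionary<string, string> values = new();

            // Indexer lets XAML bind to a specific language, e.g. Text="{Binding Title[en]}".
            public string this[string language]
            {
                get => values.TryGetValue(language, out var v) ? v : string.Empty;
                set => values[language] = value;
            }

            public IEnumerable<string> Languages => values.Keys;
        }

        public class Book
        {
            public LocalizedString Title { get; } = new();   // language-specific
            public string Author { get; set; } = "";         // language-neutral
        }

        public class Course
        {
            public LocalizedString Name { get; } = new();
            public List<Book> Books { get; } = new();
        }

    For two-way binding and change notification the ViewModel layer would still need INotifyPropertyChanged (or per-language wrapper properties), but keeping the language dimension inside the string type avoids duplicating the whole Course per language and leaves non-string, language-neutral data untouched.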

    Read the article

  • Dynamic swappable Data Access Layer

    - by Andy
    I'm writing a data driven WPF client. The client will typically pull data from a WCF service, which queries a SQL db, but I'd like the option to pull the data directly from SQL or other arbitrary data sources. I've come up with this design and would like to hear your opinion on whether it is the best design. First, we have some data object we'd like to extract from SQL. // The Data Object with a single property public class Customer { private string m_Name = string.Empty; public string Name { get { return m_Name; } set { m_Name = value;} } } Then I plan on using an interface which all data access layers should implement. I suppose one could also use an abstract class. Thoughts? // The interface with a single method interface ICustomerFacade { List<Customer> GetAll(); } One can create a SQL implementation. // Sql Implementation public class SqlCustomerFacade : ICustomerFacade { public List<Customer> GetAll() { // Query SQL db and return something useful // ... return new List<Customer>(); } } We can also create a WCF implementation. The problem with WCF is that it doesn't use the same data object. It creates its own local version, so we would have to copy the details over somehow. I suppose one could use reflection to copy the values of similar fields across. Thoughts? // Wcf Implementation public class WcfCustomerFacade : ICustomerFacade { public List<Customer> GetAll() { // Get data from the Wcf Service (not defined here) List<WcfService.Customer> wcfCustomers = wcfService.GetAllCustomers(); // The list we're going to return List<Customer> customers = new List<Customer>(); // This is horrible foreach(WcfService.Customer wcfCustomer in wcfCustomers) { Customer customer = new Customer(); customer.Name = wcfCustomer.Name; customers.Add(customer); } return customers; } } I also plan on using a factory to decide which facade to use. // Factory pattern public class FacadeFactory { public static ICustomerFacade CreateCustomerFacade() { // Determine the facade to use if (ConfigurationManager.AppSettings["DAL"] == "Sql") return new SqlCustomerFacade(); else return new WcfCustomerFacade(); } } This is how the DAL would typically be used. // Test application public class MyApp { public static void Main() { ICustomerFacade cf = FacadeFactory.CreateCustomerFacade(); cf.GetAll(); } } I appreciate your thoughts and time.
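
    A minimal sketch of the reflection idea mentioned above, assuming the WCF proxy type and the local Customer share property names: copy every same-named, assignable public property from the source object onto a new target. This is illustrative only; hand-written mapping per type (or a mapping library) is the usual alternative when the types drift apart.

        using System.Linq;

        public static class PropertyCopier
        {
            // Copies values of same-named, compatible public properties from source into a new TTarget.
            public static TTarget CopyTo<TTarget>(object source) where TTarget : new()
            {
                var target = new TTarget();
                foreach (var targetProp in typeof(TTarget).GetProperties().Where(p => p.CanWrite))
                {
                    var sourceProp = source.GetType().GetProperty(targetProp.Name);
                    if (sourceProp != null && sourceProp.CanRead
                        && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                    {
                        targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                    }
                }
                return target;
            }
        }

    Inside WcfCustomerFacade.GetAll() the manual copy loop then collapses to something like customers.AddRange(wcfCustomers.Select(PropertyCopier.CopyTo<Customer>)); the trade-off is a small reflection cost per object and silent skipping of properties whose names or types don't line up.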

    Read the article

  • Finding the normals of an oriented bounding box?

    - by Milo
    Here is my problem. I'm working on the physics for my 2D game. All objects are oriented bounding boxes (OBBs) and collision detection is based on the separating axis theorem. In order to do collision resolution, I need to be able to get an object out of the object it is penetrating. To do this I need to find the normal of the face(s) that the other OBB is touching. Example: the small red OBB is a car, let's say, and the big OBB is a static building. I need to determine the unit vector that is the normal of the building edge(s) the car is penetrating, to get the car out of there. Here are my questions: How do I determine which edges the car is penetrating? I know how to determine the normal of an edge, but how do I know whether I need (-dy, dx) or (dy, -dx)? In the case I'm demonstrating, the car is penetrating 2 edges; which edge(s) do I use to get it out? Answers or help with any or all of these is greatly appreciated. Thank you
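
    A minimal sketch of both parts, assuming each OBB knows its corners (in order) and its center: of the two perpendiculars of an edge, keep the one that points away from the box's own center, and during the SAT test keep the axis with the smallest overlap as the resolution normal, so the push-out uses one face even when two edges are touched. The use of System.Numerics.Vector2 and the method names are illustrative.

        using System.Numerics;

        public static class ObbMath
        {
            // Outward unit normal of the edge from a to b, for the box whose center is 'center':
            // pick (edge.Y, -edge.X) or its negation depending on which points away from the center.
            public static Vector2 OutwardEdgeNormal(Vector2 a, Vector2 b, Vector2 center)
            {
                Vector2 edge = b - a;
                Vector2 n = Vector2.Normalize(new Vector2(edge.Y, -edge.X));
                Vector2 mid = (a + b) * 0.5f;
                return Vector2.Dot(n, mid - center) >= 0 ? n : -n;
            }

            // While projecting both boxes onto each candidate axis, remember the axis with the
            // least overlap; that axis and depth form the minimum translation vector.
            public static void TrackMinimumOverlap(Vector2 axis, float overlap,
                                                   ref float minOverlap, ref Vector2 mtvAxis)
            {
                if (overlap < minOverlap) { minOverlap = overlap; mtvAxis = axis; }
            }
        }

    So in the two-edge case you don't have to choose between the edges directly: the SAT axis with the smallest overlap is the normal to push along, flipped if necessary so it points from the building toward the car (a dot product with the vector between the two centers).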

    Read the article

  • Effective handling of variables in non-object oriented programming

    - by srnka
    What is the best method to use and share variables between functions in non-object-oriented programming languages? Let's say that I use 10 parameters from the DB: an ID and 9 other values linked to it. I need to work with all 10 parameters in many functions. I can do it in the following ways: 1. Call functions using only the ID, and in every function get the other parameters from the DB. Advantage: the local variables are clearly visible and there is only one input parameter per function. Disadvantage: it's slow, and the same parameter-fetching rows appear in every function, which makes functions longer and less clear. 2. Call functions with all 10 parameters. Advantage: working with local variables, clear function code. Disadvantage: many input parameters, which is not nice. 3. Get the parameters as global variables once and use them everywhere. Advantage: clearer code, shorter functions, faster processing. Disadvantage: global variables mean losing control of them, with the possibility of unwanted overwriting (especially when some functions should change their values). Maybe there is another way to implement this and make the program cleaner and more effective. Can you say which way is the best for solving this issue?
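
    A minimal sketch of a fourth option that keeps option 2's explicitness without ten separate arguments: load the record once into a single parameter structure and pass that one value around. It is sketched in C# to match the other examples on this page; in a non-OO language the equivalent is a plain struct or record passed to each function. The field names are illustrative.

        // One value, loaded once from the DB, passed explicitly to every function that needs it.
        public record CustomerContext(
            int Id,
            string Name,
            string Address,
            decimal CreditLimit /* ...the remaining parameters... */);

        public static class Billing
        {
            public static decimal ComputeInvoiceTotal(CustomerContext ctx, decimal amount)
            {
                // Works only with what it is given: no globals, no repeated DB lookups.
                return amount > ctx.CreditLimit ? ctx.CreditLimit : amount;
            }
        }

    Each function still has a single obvious input (like option 1) but pays the DB cost only once (like option 3), and nothing can be overwritten behind a function's back.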

    Read the article

  • Impact of Service Oriented Architecture (SOA) on Business and IT Operations

    The impact of Service Oriented Architecture (SOA) on business and IT operations varies from company to company. I think more and more companies are starting to view SOA as just another technology that they can incorporate in an existing or new system. One of the driving factors in using SOA is the reduction in maintenance costs and the decrease in the time needed to bring products to market. The reduction in costs and the shorter turnaround time can be directly converted into increased profitability, due to the lower expenditure needed to maintain or create new systems. My personal perspective on SOA is that it is great for what it is actually intended to do. SOA allows systems to be distributed across networks, or even the world, while ensuring enterprise processing consistency and data integrity and preventing code duplication. That being said, a lot of preparation and work goes into properly designing and implementing an SOA, especially if an enterprise wants to take full advantage of its benefits. Even though SOA has recently gotten a lot of hype about its benefits, it is not a perfect fit for all situations. At the end of the day SOA is just another tool in my tool belt that I can pull from to create solutions that meet the business's needs. Based on current industry trends SOA appears to be a very solid technology to use moving forward, especially as more and more companies shift towards cloud-based computing. It is important to remember that SOA is one of many technologies that can be used in creating business solutions, and I think more time will be spent in the future evaluating whether SOA is the right technology for a solution once the initial hype has calmed down.

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically create and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel. Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem of the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order. 
The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart which is a relational database and the OLAP Cube which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements.  So, in broad terms the steps required to fulfil a client order are as follows: Step 1: Prepare metadata Create a set of database names unique to the client's order Modify all package connection strings to be used by SSIS to point to new databases and file locations. Step 2: Create relational databases Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases Step 3: Load staging database Use SSIS to load all data files into the staging database in a parallel operation Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube. Step 4: Load data mart relational database Load the data from staging into the data mart relational database, again in parallel where possible Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables Step 5: Load extension tables & attributes Pivot the extension attributes from their native name/value pairs into proper relational tables Add the extension attributes to the views used by OLAP cube Step 6: Deploy & Process OLAP cube Deploy the OLAP database directly to the server using a C# script task in SSIS Modify the connection string used by the OLAP cube to point to the data mart relational database Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions Remove any standard attributes that not required Process the OLAP cube Step 7: Backup and drop databases Drop staging database as it is no longer required Backup data mart relational and OLAP database and ship these to the client's infrastructure Drop data mart relational and OLAP database from the build server Mark order complete Start processing the next order, ad infinitum. So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
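
    A minimal sketch of the Step 2 idea above (creating the order-specific relational databases with dynamic SQL and SIMPLE recovery), assuming plain ADO.NET as it might be called from a C# script task; the class and method names are illustrative, not taken from the original solution.

        using System.Data.SqlClient;

        public static class MartBuilder
        {
            // Creates one order-specific database and switches it to SIMPLE recovery.
            public static void CreateDatabase(string connectionString, string dbName)
            {
                // dbName comes from the metadata database's generated names, never from user input.
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = $"CREATE DATABASE [{dbName}]";
                        cmd.ExecuteNonQuery();
                        cmd.CommandText = $"ALTER DATABASE [{dbName}] SET RECOVERY SIMPLE";
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    The same pattern (generate a statement from metadata, execute it, move on) covers building the tables, views, and stored procedures in both the staging and data mart databases.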

    Read the article

  • Gravity Sort: Is this possible programmatically?

    - by Bragaadeesh
    Hi, I've been thinking recently about using object-oriented design in a sorting algorithm. However, I was not able to find a proper way to even come close to making a sorting algorithm that does the sorting in O(n) time. OK, here is what I've been thinking about for a week. I have a set of input data. I will assign a mass to each of the input data items (assume the input data is of a type Mass). I will place all my input data in space, all at the same distance from Earth, and I will make them free-fall. According to gravitational law, the heaviest one hits the ground first, and the order in which they hit will give me the sorted data. This is funny in some way, but underneath I feel that this should be possible using the OO that I have learnt to date. Is it really possible to make a sorting technique that uses a gravitational-pull-like scenario, or am I stupid/crazy?
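
    This idea exists as "bead sort" (also called gravity sort): each non-negative integer becomes a row of beads on vertical rods, the beads fall, and reading the rows back gives the sorted values. A minimal sketch of a software simulation is below; the catch is that the simulation costs O(S), where S is the sum of the inputs, so the O(n) claim only holds for the physical (fully parallel) version, not for this loop-based one. Names are illustrative.

        using System.Linq;

        public static class BeadSort
        {
            // Sorts non-negative integers in descending order via the bead/gravity analogy.
            public static int[] Sort(int[] input)
            {
                if (input.Length == 0) return input;
                int max = input.Max();

                // beads[j] = how many values are greater than j (beads resting on rod level j after falling).
                var beads = new int[max];
                foreach (int v in input)
                    for (int j = 0; j < v; j++)
                        beads[j]++;

                // Row i (counting from the top after the fall) holds as many beads as the (i+1)-th largest value.
                var result = new int[input.Length];
                for (int i = 0; i < input.Length; i++)
                {
                    int count = 0;
                    while (count < max && beads[count] > i) count++;
                    result[i] = count;
                }
                return result;
            }
        }

    So the gravity metaphor is sound as an algorithm, but a sequential program cannot collect the "landing order" for free; only hardware that really lets everything fall at once gets the linear behaviour.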

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today's fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even much higher. For example, a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5 hrs and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligence data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers: non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source; no mid-tier, with set-based transformations using database power; mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW; and an integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse. In addition to Starwood Hotels and Resorts, Morrison Supermarkets, United Kingdom's fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • Push-Based Events in a Service-Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a service-oriented architecture (on top of Thrift), that I need to expose events and allow listeners. My initial thought was, "create an EventService" to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside of EventService mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message (even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example) - messages should be queued until the service is available). However, I don't want EventService to be responsible for handling all of the events. I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which questions the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather that there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each one back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would be pulling from the queue, attempting to make the service call to the "listener", and returning the message to the queue (if the service is down) or discarding the message (if the listener completed successfully). tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
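
    A minimal sketch of the generic envelope idea asked about at the end, assuming every service can consume the same neutral shape and that listener endpoints all accept it: the EventService never needs to understand listener-specific calls, it only forwards the envelope. Shown in C# to match the other sketches on this page (the original stack is Thrift/JMS, where the equivalent would be a Thrift struct); all field names are illustrative.

        using System;
        using System.Collections.Generic;

        // A neutral event envelope shared by all services; listeners interpret Type and Payload.
        public class EventEnvelope
        {
            public Guid Id { get; } = Guid.NewGuid();
            public string Type { get; set; } = "";            // e.g. "order.created"
            public string Source { get; set; } = "";          // emitting service name
            public DateTime OccurredUtc { get; set; } = DateTime.UtcNow;
            public Dictionary<string, string> Payload { get; } = new();
        }

        // Every listener endpoint implements the same contract, so EventService can deliver
        // any event the same way and let each receiving service decide what to do with it.
        public interface IEventListener
        {
            void Handle(EventEnvelope evt);
        }

    The cost of a fully generic envelope is weaker typing of the payload; a common middle ground is a small set of versioned, strongly typed event structs in the shared IDL, all carrying the same envelope fields (id, type, source, timestamp).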

    Read the article
