Search Results

Search found 3465 results on 139 pages for 'david powers'.

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences from poor operations than local, state and federal governments. One key challenge area is taxation.

    Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas.

    Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income, including employers, investments and other sources of taxable income and deductions, must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.

    The Problem of Constituent Data Management

    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.

    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent

    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle solution for mastering constituent data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and, optionally, Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems.

    With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service-oriented applications, all built around a SOA governance model that encourages effective design and reuse.

    I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric.

    Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization. Our guest speaker is Ruben Spekle from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch government agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.
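
    The rationalization step can be pictured with a small sketch. The following Python fragment is purely illustrative (the field names and match rules are assumptions, not part of the Oracle solution): it normalizes records from two source systems and applies a simple rule to decide whether they describe the same constituent before seeding a CIF entry.

        # Toy sketch of rationalizing constituent records before building a CIF.
        # Field names and match rules are hypothetical.
        def normalize(record):
            return {
                "name": record.get("name", "").strip().upper(),
                "dob": record.get("dob", ""),
                "tax_id": record.get("tax_id", "").replace("-", ""),
            }

        def same_constituent(a, b):
            a, b = normalize(a), normalize(b)
            if a["tax_id"] and a["tax_id"] == b["tax_id"]:
                return True
            return a["name"] == b["name"] and a["dob"] == b["dob"]

        # Example: records from two disparate systems match on tax ID.
        billing = {"name": " Jane Doe ", "dob": "1970-01-01", "tax_id": "123-45-6789"}
        audit = {"name": "JANE DOE", "dob": "1970-01-01", "tax_id": "123456789"}
        assert same_constituent(billing, audit)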

    Read the article

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files in order to be able to import FBX files in Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

    - There's a Maya-specific extra technique in the diffuse element:

          <diffuse>
            <texture texture="Map__2-image" texcoord="CHANNEL0">
              <extra>
                <technique profile="MAYA">
                  <wrapU sid="wrapU0">TRUE</wrapU>
                  <wrapV sid="wrapV0">TRUE</wrapV>
                  <blend_mode>ADD</blend_mode>
                </technique>
              </extra>
            </texture>
          </diffuse>

    - It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0"...).
    - Every polygon is exported twice: a first time with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material. This doubles the number of polygons of each model without any valuable reason.

    Anyway, the resulting Collada file cannot be opened correctly by either OpenCOLLADA or Panda3D's "dae2egg". Does anyone have experience with how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?
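
    One possible starting point (a sketch, not a verified fix) is to post-process the exported .dae and strip the Maya-specific blocks before handing it to another importer. The snippet below assumes lxml is available and uses the COLLADA 1.4 namespace; the file names are placeholders.

        # Hedged sketch: remove <extra><technique profile="MAYA"> blocks from a
        # Collada file exported via the FBX SDK. Paths and names are illustrative.
        from lxml import etree

        NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

        tree = etree.parse("model.dae")
        for extra in tree.findall(".//c:extra", NS):
            if any(t.get("profile") == "MAYA"
                   for t in extra.findall("c:technique", NS)):
                extra.getparent().remove(extra)

        tree.write("model_clean.dae", xml_declaration=True, encoding="utf-8")

    The duplicated polygons and the dangling "CHANNEL0" reference would need similar surgery on the <library_geometries> and <bind_vertex_input> elements, which is harder to do generically.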

    Read the article

  • Customizing Google Sites look and feel

    - by David Parunakian
    I find the site layout and theming capabilities in Google Sites (found in the Manage Site screen) very limited. For instance, I do not seem to be able to place the horizontal navigation buttons directly to the right of the logo or to customize their style. Nor can I use the standard trick of making a horizontally stretchable background image of a box with rounded corners by splitting it into three parts and replicating the middle one, etc. Am I missing something? Are there any advanced settings available? Thanks.

    Read the article

  • OWB – How to update OWB after Database Cloning

    - by David Allan
    One of the most commonly asked questions about OWB, and the source of one of the most commonly accessed support documents (strange, that), is how to update the OWB repository details after cloning the Oracle database. The document on the Oracle support site has ID 434272.1 and is titled 'How To Update Warehouse Builder After A Database Cloning (Doc ID 434272.1)'. This post is really for me to remember the document ID ;-)

    Read the article

  • Best keyboard drawer for programmers?

    - by David Pfeffer
    I'm a programmer with pretty severe neck problems. I program with three monitors, and I've found that my desk's short depth causes me to have to rotate my neck too much to see the "wing" screens. I can't get a deeper desk due to space restrictions. I'm looking for a keyboard drawer that can be installed onto a desk. However, I like the height of the keyboard on the desk. I'd like a drawer that is extremely low-profile/slim. My keyboard is less than 1" tall, so it'll fit pretty much anywhere. My ideal product would slide out from under my desk and "pop up" so that its surface is even with the desk. Does anything slim and nice like this exist? I'll even consider replacing the desk if a desk exists with this built-in.

    Read the article

  • Oracle & OAUG PO SIG's Procurement Executive Workshop - Burlington, MA April 29th, 2011

    - by david.hope-ross(at)oracle.com
    OAUG PO SIG and Oracle invite you to a day of learning and networking with your Boston area procurement peers. This event is focused on facilitating discussion among procurement executives, promoting best practices from leading customers, and sharing the vision that is driving enhancements to E-Business Suite procurement. OAUG PO SIG members and Oracle will share practical advice that improves technology adoption and lowers risk. Topics of interest include supplier management, upgrades, and cloud-based deployment, as well as spend classification and analytics. For more information and registration please visit http://www.oracle.com/us/dm/h2fy11/68745-nafm10012033mpp102-se-334896.html.

    Read the article

  • How to fix sudo: setreuid(ROOT_UID, user_uid): Operation not permitted error?

    - by David R.
    I am using LDAP authentication on my Ubuntu 11.10 server. I installed libpam-ldap and configured things accordingly. It works great, except that every once in a while I get this error when I try to sudo:

        sudo: setreuid(ROOT_UID, user_uid): Operation not permitted

    I know I have sudoers set up correctly, since it works most of the time. It's not just my login either; others have the same problem when I have it. When this error is occurring, I can't ssh in with my regular system user at all. When I sign in directly, I can't get any gnome-terminal to start. Once I restart the server, the problem goes away. Of course, that's not a solution; if it were a prod server, I'd be in trouble. How do I fix this?

    Edit 3/1/12: I just figured out that if I stop and start the nscd service, the problem goes away:

        service nscd stop
        service nscd start

    Not much of a solution, since I have to be logged into the server directly, not via ssh.
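
    Since the workaround requires console access, one stopgap (purely a sketch; the health check is an assumption about the nscd failure mode, not a verified diagnostic) is a root cron job that restarts nscd when LDAP-backed lookups start failing:

        #!/usr/bin/env python
        # Hypothetical watchdog: restart nscd if an LDAP-backed account no
        # longer resolves. Run from root's crontab, e.g. every 5 minutes.
        # TEST_USER is a placeholder for a known LDAP user on this system.
        import pwd
        import subprocess

        TEST_USER = "some_ldap_user"  # assumption: resolves only via LDAP/nscd

        try:
            pwd.getpwnam(TEST_USER)
        except KeyError:
            subprocess.call(["service", "nscd", "stop"])
            subprocess.call(["service", "nscd", "start"])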

    Read the article

  • xsltproc killed, out of memory

    - by David Parks
    I'm trying to split up a 13GB XML file into small ~50MB XML files with this XSLT style sheet. But this process kills xsltproc after I see it taking up over 1.7GB of memory (that's the total on the system). Is there any way to deal with huge XML files with xsltproc? Can I change my style sheet? Or should I use a different processor? Or am I just S.O.L.?

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                        xmlns:exsl="http://exslt.org/common"
                        extension-element-prefixes="exsl"
                        xmlns:fn="http://www.w3.org/2005/xpath-functions">
          <xsl:output method="xml" indent="yes"/>
          <xsl:strip-space elements="*"/>
          <xsl:param name="block-size" select="75000"/>

          <xsl:template match="/">
            <xsl:copy>
              <xsl:apply-templates select="mysqldump/database/table_data/row[position() mod $block-size = 1]" />
            </xsl:copy>
          </xsl:template>

          <xsl:template match="row">
            <exsl:document href="chunk-{position()}.xml">
              <add>
                <xsl:for-each select=". | following-sibling::row[position() &lt; $block-size]">
                  <doc>
                    <xsl:for-each select="field">
                      <field>
                        <xsl:attribute name="name"><xsl:value-of select="./@name"/></xsl:attribute>
                        <xsl:value-of select="."/>
                      </field>
                      <xsl:text>&#xa;</xsl:text>
                    </xsl:for-each>
                  </doc>
                </xsl:for-each>
              </add>
            </exsl:document>
          </xsl:template>
        </xsl:stylesheet>
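
    xsltproc builds the entire input tree in memory, so a streaming pass is usually the way out. Below is a minimal sketch of the same split done with lxml's iterparse, assuming the mysqldump layout the stylesheet already targets (mysqldump/database/table_data/row); file names and the chunk size mirror the stylesheet but are otherwise illustrative.

        # Streaming split of a huge mysqldump XML file into chunk-N.xml files.
        # Memory stays flat because each <row> is dropped once written out.
        from lxml import etree

        BLOCK_SIZE = 75000

        def flush(rows, chunk):
            add = etree.Element("add")
            for row in rows:
                doc = etree.SubElement(add, "doc")
                for field in row.findall("field"):
                    f = etree.SubElement(doc, "field", name=field.get("name", ""))
                    f.text = field.text
            etree.ElementTree(add).write("chunk-%d.xml" % chunk,
                                         xml_declaration=True, encoding="utf-8")

        rows, chunk = [], 0
        for _, row in etree.iterparse("dump.xml", tag="row"):
            rows.append(row)
            if len(rows) == BLOCK_SIZE:
                chunk += 1
                flush(rows, chunk)
                for r in rows:
                    r.getparent().remove(r)  # release the parsed subtree
                rows = []
        if rows:
            flush(rows, chunk + 1)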

    Read the article

  • Ubuntu 12.04 - Workspace switcher is shown after switching to another workspace

    - by David Kuridža
    I've just upgraded to Ubuntu 12.04 on an HP Envy 14 and found a rather interesting bug. When I switch between workspaces using keyboard shortcuts, for example moving down to Workspace 3 from Workspace 1, the workspace switcher is shown (as seen in the screenshot below). It stays there until I hit the Escape key. After debugging, I found the problem only happens when I change the shortcut keys to Super + direction. With the default Ctrl + Alt + direction, everything is OK. I've searched but haven't found this problem reported before; I'm also not sure this is called the workspace switcher :) Can you please let me know what to search for, and whether there are any logs which might have some related information?

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
    Instead of just letting your application crash, you can attach a method to the DispatcherUnhandledException event handler and one to AppDomain.CurrentDomain.UnhandledException. You wire these up in the code-behind of your application, which by default is App.xaml.cs. You can log these errors or throw up a message box and tell the user what happened. Then you shut down the app gracefully. You shut it down because something bad happened that you weren’t expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really. All bets are off.

    If, on the other hand, the method for the UnhandledException is empty, and the method for the DispatcherUnhandledException handler ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much. I spent nearly a day trying to track down a bug that would have been obvious had something been logged or if it just crashed. It’s my own fault I suppose. I knew these were hooked up. I just never suspected that there wouldn’t be any implementation at all. Live and learn.

    Customs Man at Heathrow: Anything to declare, Sir?
    Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him.
    Customs Man at Heathrow: Very Good, Sir.

    I tend to agree.

    Dave
    Just because I can…

    Read the article

  • Leveraging Social Networks for Retail

    - by David Dorf
    For retailers, social media is all about B2C2C. That is, Business to Consumer to Consumer, or more specifically, retailer to influencer to consumer. Traditional marketing targeted mass media, trying to expose the message to as many people as possible. While effective, this approach has never been very efficient, with high costs for relatively low penetration. Then it was thought that marketers should focus their efforts on a relative few super-influencers that would then sway the masses. History shows a few successes with this approach, but they lacked any consistency or predictability. After all, if super-influencers were easy to find, most campaigns would easily go viral. Alas, research shows that most widespread trends were the result of several fortunate events, including some luck.

    So do people exert influence over each other when it comes to purchase decisions? Of course they do, all the time. But that influence is usually limited to a small set of friends and a specific specialization. For instance, although I have 165 friends on Facebook, I am only able to influence my close friends and family on PC purchases, and I have no sway at all for fashion purchases. People trust my knowledge on technology, but nobody asks my advice on shoes.

    How then should retailers leverage social networks in order to reinforce brand image and push promotions? Two obvious ways are Like and Share. Online advertisements or wall postings receive more clicks when the viewer sees that friends have "liked" the posting. That's our modern-day version of word-of-mouth advertising. Statistics show that endorsements from friends make it more likely a person will engage. If my friends and I liked it, then I might also "share" (or "retweet" in the case of Twitter) it with other friends. In that case the retailer has paid for X showings of the advertisement, but sharing has pushed it to an additional Y people at no cost. And further, the implicit endorsement by the sharer makes it more likely the recipient will engage.

    So a good first step is to find people active in social networks that will Like and Share in order to exert influence. It's still tough to go viral, but doubling engagement is still a big step in the right direction. More complex social graph analysis would be a second step, but I'll leave that topic for another day. If you're interested in the academic side of social dynamics, I suggest reading Duncan Watts' work.

    Read the article

  • Remote install of Ubuntu Server

    - by David Walker
    Hi all, I have a machine located 500 miles away that's running Ubuntu 8.04. I figure it's just about time that I upgrade to the latest LTS. However, there's a software RAID (md_raid) in there, and I'm afraid that just a dist-upgrade after I switch over the sources.list will end in catastrophic failure, like a panic on boot because the RAID'd disk can't be read, or something else. First, I'm hoping that's not the case. However, if it ends up happening, I'm wondering if there's a means of having someone drop in an Ubuntu 10.04 server install disk and flip on ssh, and some means for me to hop on and re-run the installer remotely. Is this feasible? If so, what would one need to do aside from running apt-get install ssh on the target machine? I do have friends who can be in front of the target machine to initiate the process, just not execute it all the way through.

    Read the article

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little used, little known, but very powerful capability: it lets one piece of template code behave dynamically based on a runtime variable's value, for example. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case, definitely in the highly specialized area. The example I will illustrate is much simpler: how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags, specifically the special tag '<$', which is processed just prior to execution in the runtime code; this code is included in the ODI scenario code and it is processed after variables are substituted (unlike the '<?' tag). So the lines below are not equal...

        <$ if ( "#V_COND".equals("1")  ) { $>
        EMP.SAL > 1
        <$ } else { $>
        1 = 1
        <$ } $>

        <? if ( "#V_COND".equals("1")  ) { ?>
        EMP.SAL > 1
        <? } else { ?>
        1 = 1
        <? } ?>

    When the <? code is evaluated, the code is executed without variable substitution, so we do not get the desired semantics; we must use the <$ code. You can see the jython (java) code in red is the conditional if statement that drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code. For this illustration you need at least the ODI 11.1.1.6 release; with the vanilla 11.1.1.5 release it didn't work for me (maybe patches?). As I mentioned, normally KMs don't have dependencies on variables, since any users must then have these variables defined, etc., but it does afford a lot of runtime flexibility if such capabilities are required; something to keep in mind, definitely.

    Read the article

  • Can't reinstall VLC

    - by David matthews
    I use VLC a lot, and when 2.0 came out Ubuntu did not update to that version; the repo had the older version even months later. So I added the daily repo http://ppa.launchpad.net/videolan/stable-daily/ubuntu and that worked for a while. A few months later I received a 'Distribution upgrade', and when I installed it, it removed VLC. When I tried to re-install, it gave me a bunch of unmet dependencies, so I disabled the source, ran apt-get update, and tried to install the older VLC; that did not work either. I eventually found a web page that helped me get it working, and I was also able to get the 'stable daily' working too. But last night I got another 'distro upgrade' and it uninstalled VLC again. When I try to reinstall from daily I get:

        The following packages have unmet dependencies:
         vlc : Depends: fonts-freefont-ttf but it is not installable
               Depends: vlc-nox (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Recommends: vlc-plugin-pulse (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    And from the default source:

         vlc : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
         vlc-plugin-pulse : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Any ideas? I am using Ubuntu 12.04 64-bit.

    Read the article

  • No Internet On 12.04.1

    - by David Ramos
    I have no internet on Ubuntu 12.04.1. I'm on a desktop, and I have to install a .exe program to run my wireless card, so I have no internet access in Ubuntu. I can't seem to download Wine or its source files. If I do download them, I put them on my USB drive and boot into Ubuntu, but when I try to open them I just get the program's source files, etc. Anyone who could help me, please do :| (I can't install Wine from the Software Center since I have no connection online). Note: I can download my stuff on my Windows 8 boot, so I can transfer files to my USB to run on Ubuntu.

    Read the article

  • Oracle WebCenter powers collaborative environments in Oracle Applications

    - by david.gandara(at)oracle.com
    This report from the analyst firm Forrester Research describes Oracle's continued effort to make its various enterprise solutions (ERP, CRM, SCM...) capable of facilitating collaboration among the system's different users, putting Web 2.0 services such as Wiki, Discussions, instant messaging, and VOIP at their disposal.

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: the left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background. The right skull is rotated/scaled, and this results in larger pixels that are no longer axis aligned.

    How could I develop a pixel shader that would render the transformed sprite on the right with axis-aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added.

    Edit: As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point: there's no distortion, but the result looks blurred and it loses most of the highlight colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point: despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI

    However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.
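
    The core of the Point -> Point approach can be sketched outside of any shader: rasterize the rotated sprite into a low-resolution buffer, then do a nearest-neighbour upscale so every output pixel becomes a uniform, axis-aligned block. The fragment below is a toy software illustration in Python/NumPy (the 4x block size is an assumption), not the actual render-target setup.

        # Toy illustration of the downsample-then-point-upscale post-process.
        import numpy as np

        SCALE = 4  # assumed size of one "retro" pixel in screen pixels

        def upscale_point(lowres):
            # Nearest-neighbour upscale: repeat each low-res pixel SCALE
            # times along both axes, yielding crisp axis-aligned blocks.
            return lowres.repeat(SCALE, axis=0).repeat(SCALE, axis=1)

        # Stand-in for the downsampled render target (height x width x RGB).
        lowres = np.random.randint(0, 256, (45, 80, 3), dtype=np.uint8)
        screen = upscale_point(lowres)  # 180 x 320, pixels stay uniform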

    Read the article

  • Website speed issues

    - by Jose David Garcia Llanos
    I am developing a website; however, I have noticed speed issues, and I am not sure whether they are due to the location of the server. I am not a guru when it comes to performance or speed issues, but according to a website speed test it seems that it takes quite a long time to connect to the website.

    Speed Test Results

    Can someone suggest something or give me some tips? The website address is http://www.n1bar.com

    Read the article

  • Social Analytics and the Customer

    - by David Dorf
    Many successful retailers put the customer at the center of everything they do, so it's important that the customer is modeled correctly across all their systems. The path to omni-channel starts and ends with the customer, so at ARTS, our next big project is focused on ensuring a consistent representation of customers across our transactional data model, data warehouse model, and XML schemas. Further, we've started a new whitepaper that describes how Big Data and Social Media Analytics should be leveraged by retailers to add an additional level of customer insight.

    Let's start by taking a closer look at the meaning of social analytics. Here's my definition:

    Social Analytics, in the retail context, describes the analysis of data obtained from social media sources in an effort to better comprehend and interact with the community of consumers. This discipline seeks to understand what’s being said by the community about brands and products (“monitoring”), as well as understand the behaviors of those in the community (“profiling”). The results are used to enforce the brand image, improve product decisions, and better focus marketing, all of which lead to increased sales.

    To help illustrate the facets of social analytics, I drew the diagram below, which was originally published by Retail Touchpoints.

    There are lots of tools on the market that allow retailers to monitor social media for brand and product mentions. These include analysis of sentiment, reach, share of voice, engagement, etc. When your brand is mentioned, good or bad, it's an opportunity to engage with the customer and possibly lead to a sale. Because products are not always unique, it's much more difficult to monitor product mentions, but detecting product trends early can help a retailer make better merchandising decisions, especially in fashion.

    Once a retailer understands what's being said, the next step is to learn more about who's saying it. That involves profiling customers beyond simple demographics to understand their motivations. Much can be learned from patterns, and even more when customers voluntarily share their data. Knowing that a customer is passionate about, for example, mountain biking allows the retailer to make relevant offers on helmets, ask for opinions on hydration, and help spread marketing messages.

    Social analytics has many facets that benefit retailers, some of which are easy but many of which are hard. It's important for the CMO and CIO to work closely together to plan for these capabilities and monitor the maturity of tools on the market. This is an area that will separate winners from losers.

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking.

    On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at the departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
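
    As a concrete (if highly simplified) illustration of the Registry Style, the sketch below shows the kind of minimal record such a Hub might keep and how a virtual golden view could be federated on demand. All names and fields here are hypothetical, not Oracle Customer Hub structures.

        # Toy Registry Style hub entry: only source keys and match keys live
        # in the hub; the golden view is assembled from the source systems.
        from dataclasses import dataclass, field

        @dataclass
        class RegistryEntry:
            global_id: str                 # hub-assigned unique identifier
            source_keys: dict = field(default_factory=dict)  # system -> record ID
            match_keys: dict = field(default_factory=dict)   # e.g. name, DOB

        def golden_view(entry, fetchers):
            """Federate a virtual golden record from the connected systems.

            fetchers maps a system name to a function that returns that
            system's record as a dict; later systems win on conflicts,
            standing in for real survivorship rules.
            """
            view = {}
            for system, record_id in entry.source_keys.items():
                view.update(fetchers[system](record_id))
            return view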

    Read the article

  • WPF and Composite Application Library – Missing The Point

    - by David Totzke
    I have a headache and it's not even 9 AM yet. Well, ok, it's nearly ten here now in GMT-5, but it's before nine somewhere still.

    Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves. Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks. The Windows Forms event-driven model simply will not suffice. You need to come to grips with the idea of a very loosely coupled application. Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model and Model-View-ViewModel patterns.

    It is as simple as that. Not to embrace these concepts is to invite pain. It is to invite noodles; and not the holy kind. Someone actually said to me that "just because it's not WPF, doesn't mean it's wrong." And he's right. Unless, of course, you are writing a WPF application, and especially if you are using the Composite Application Library. In simple terms then: YOU'RE DOING IT WRONG!

    Dave
    Just because I can…

    Read the article

  • Facebook Stories for Retailers

    - by David Dorf
    Getting people to "like" a brand is important because it opens the door to a possible B2C relationship. Once a person likes that brand, the brand can post to their newsfeed with promotions, announcements, and surveys. At least for me, I "hide" the noisy brands and just monitor the ones that keep posts under 4 times a week. I see lots of people, especially with fashion brands, comment on postings, at which point the posting is seen by their network. A metric I've heard (but not verified) is that for every person that comments, ten of their friends see the original posting. That's a pretty cheap way to communicate to potential customers in a viral way.

    Over at mainstreet.com they compiled a list of the top liked retailers on Facebook as of Feb 1, 2011. They are listed below:

        19,414,892  Starbucks
        11,302,939  Victoria's Secret
         7,925,184  Zara
         7,032,398  McDonald's
         6,117,222  H&M
         5,400,586  Taco Bell
         4,665,760  Subway
         4,494,849  Lacoste
         4,185,570  Hollister
         3,973,181  Forever 21

    So I guess the public likes their fast food and fashion.

    To take this to the next level, Facebook is now displaying Sponsored Stories, which I saw for the first time on my page this weekend. I found this picture at the Wall Blog that depicts Sponsored Stories very well. Over on the right-hand column of a person's page, where they see advertisements and such, Facebook will post stories involving their network of friends and their interaction with sponsored brands. Now their "likes" can suddenly become your ads. "Jessica and Philip like Starbucks. What are you waiting for?" This is another great way to take messages viral by accessing social graphs.

    As usual there will be a certain level of outcry from privacy advocates, but given other, more iniquitous issues, I believe this will fall by the wayside. Retailers should consider using Sponsored Stories to increase their Likes, and thus increase their voice in the social world.

    Read the article

  • Nautilus is really slow

    - by david
    Nautilus used to be lightning fast. Now it's dead slow. I have tried upgrading the video card, but that does not seem to be the problem. Also, I found that there was a problem with the Dropbox uninstall. Finally, I replaced it with pcmanfm, which appears to be much faster, but the downside is that I no longer have the gwibber social client and a lot of stuff doesn't work like it used to. Also, I completely removed Nautilus and then couldn't even log in to Ubuntu to install Nautilus again. How can I remove or repair Nautilus and use pcmanfm instead? I have a Dell Inspiron 1440 with 3GB RAM and a 300GB disk. Additionally, why does Nautilus run fast when I run it as root? There are no USB devices attached to my computer. Thanks in advance.

    Read the article

  • How far is too far?

    - by David Dorf
    Previously I've talked about Safeway's personalized pricing as well as Target's use of analytics to learn about customers. Then last week I read about Orbitz tailoring their hotel offers based on the browser used. (Orbitz claims that Mac users are 40% more likely than PC users to book four- or five-star hotels.) So just how far is too far when tailoring the retail experience?

    When most consumers read about these types of tactics, they tend to feel violated, as if someone was reading their personal diary. Nobody wants to be tricked into buying things. Walking into a grocery store and seeing crates of apples stacked high looks enticing, but the crates are just for display and the apples may be over a year old. Even though it's much cheaper to print markdown tags, many retailers manually write the price tags because consumers think the deal is better if the price is hand-written.

    The technology already exists to personalize prices and experiences for consumers. People get upset thinking they paid more for something than a neighbor, but it already happens all the time with cars, flights, and the use of loyalty programs and coupons. There are many variables at play for any purchase. The only difference is that the customer segments are getting smaller, sometimes reaching a size of one.

    There are two ways to look at this. Retailers have always manipulated the environment to get consumers to buy more -- or -- retailers are getting better at tuning the shopping experience for consumers. I choose the latter, and so do most consumers by spending their money in the stores they like. Consumers like to see fresh flowers at the entrance to the grocery store, and they like to see specials scrawled on chalkboards.

    The key is making sure that consumers benefit from the experience as well. I'm willing to give up some personal information in exchange for discounts and more relevant marketing, and the next generation of shoppers is even less concerned about privacy. Retailers need to use all the tools available to differentiate their offers and connect with their customers. So if Orbitz wants to put three-star hotels at the top of the list for me because I'm using a PC, that's fine by me.

    Read the article
