Search Results

Search found 3150 results on 126 pages for 'integrating marketing'.

Page 92 of 126

  • Facebook JavaScript SDK: First time logging in

    - by Brandon
    I'm currently integrating Facebook with my website using the JavaScript SDK. I've got the login portion working well. The only thing I'm trying to figure out is whether there is a way to tell if it is the first time the user has logged into my website using their Facebook credentials. I tried subscribing to auth.login, but that didn't seem to have any information about that. Is there a flag anywhere that lets me know this? Or another way to go about looking this up? I realize I could do some server-side code, but I'd prefer to stay away from that if possible. Thanks in advance, Brandon

    Read the article

  • Cloud e-mail and portal integration: experiences?

    - by Mark McLaren
    I am evaluating cloud e-mail solutions based upon Google Apps for Education and Microsoft Live@edu. I work for a university and we currently have an institutional portal (based on uPortal). We currently have our local IMAP server and webmail client fully integrated with the portal. We would like to replicate the current portal e-mail experience with the new e-mail services. At present users can see a snapshot of their inbox in the portal and click through into the appropriate place in the webmail client. We expect that we will need to solve similar problems when integrating with the cloud-based e-mail solutions: we need to solve the single sign-on (SSO) problem, and we need to be able to access the inbox messages on the user's behalf (e.g. proxy authentication). Does anybody have any experience or advice on this? Many thanks, Mark

    Read the article

  • SHAddToRecentDocs without a file?

    - by Chris Becke
    I was toying with an IRC client, integrating it with the Windows 7 app bar. To get a "Frequent" or "Recent" items list one has to call the SHAddToRecentDocs API. I want to add recently visited IRC channels to the Windows 7 jump list for the IRC application. Now, my problem is that IRC channels don't exist in the file system, and SHAddToRecentDocs seems to insist on getting some sort of file system object. I've tried to work around it by creating an IShellItem pointing to my application and giving it a command line to launch the channel. The shell is rebelling, however, and thus far has not visibly added any of my "recent document" attempts to the jump list. Is there no way to do this without creating some kind of entirely unwanted filesystem object?

    Read the article

  • SharePoint check-in/check-out

    - by Prashanth
    We have a SharePoint-based application that uses a custom database for storing metadata/files (which could also be on a file share). My question is: how can the standard file check-in/check-out option in a document library be customized? The JavaScript file ows.js in the layouts folder contains the functions that provide the check-in, check-out and open-file functionality. Behind the scenes it relies on a combination of HTTP POST/GET methods + SOAP + an ActiveX control to achieve the desired functionality. Customizing these JavaScript functions seems tedious and error-prone. Note that we have a web service that exposes endpoints for retrieving the necessary file information/data from the backend. The difficulty is in integrating it with the SharePoint JS functions, due to the lack of proper documentation. (Also, the JS functions might change across different versions of SharePoint.) Also, is it possible to create and open files from the cache area on the client machine from server-side code?

    Read the article

  • Restlets with Google App Engine, Java Server Pages (JSPs), and Shiro authentication

    - by DutrowLLC
    I'm having difficulty integrating Restlets into my project. I'm using Google App Engine (GAE) and I also have some Java Server Pages (JSPs) set up. The JSPs never seem to work at the same time as the Restlets; should I only be using one or the other on GAE? I'm also using Shiro (formerly Ki, formerly JSecurity) and I have been unable to get Restlets to work with Shiro's filter for authentication. Are there any issues in particular that I should be aware of? What are other people using to secure Restlet apps on GAE? Is Shiro overkill if I just need authentication and some role-based authorization? Thanks so much! Chris
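
    One way to sidestep filter-mapping clashes is to enforce the Shiro checks inside the Restlet resource itself rather than relying only on ShiroFilter's URL patterns. Below is a minimal sketch of that idea, assuming Restlet 2.x and Shiro are on the classpath; the AccountResource class and the "user" role are made-up names, not anything from the original question.

        // Sketch: guard a Restlet resource with Shiro from inside the resource,
        // instead of relying on ShiroFilter URL filtering (which can clash with
        // the Restlet servlet mapping on GAE). Names are illustrative only.
        import org.apache.shiro.SecurityUtils;
        import org.apache.shiro.subject.Subject;
        import org.restlet.data.Status;
        import org.restlet.resource.Get;
        import org.restlet.resource.ServerResource;

        public class AccountResource extends ServerResource {

            @Get("txt")
            public String represent() {
                Subject current = SecurityUtils.getSubject();    // Shiro's current user
                if (!current.isAuthenticated()) {
                    setStatus(Status.CLIENT_ERROR_UNAUTHORIZED); // 401 for anonymous callers
                    return "login required";
                }
                if (!current.hasRole("user")) {                  // simple role-based check
                    setStatus(Status.CLIENT_ERROR_FORBIDDEN);    // 403 for missing role
                    return "forbidden";
                }
                return "hello, " + current.getPrincipal();       // authenticated response
            }
        }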

    Read the article

  • Reapplying a changelist in Perforce

    - by Niklas
    I'm rather new to Perforce, but have quite a bit of other VCS experience. Imagine this: you submit changes (changelist 1), and then a colleague submits changes on the same branch, accidentally overwriting your changes (changelist 2). I tried integrating (which P4V refuses to do, since it's already integrated) and looked around for a way to just generate a patch that I could apply, but couldn't find anything. For now, I will check out the versions in question and use an external merge tool, but it would be great to know if Perforce supports this somehow. Is there a way, using the Perforce tools (preferably P4V), to reapply changelist 1?

    Read the article

  • How to render a Partial from a Model in Rails 2.3.5

    - by empire29
    I have a Rails 2.3.5 application and I'm trying to render several partials from within a Model (I know, I know -- I'm not supposed to). The reason I'm doing this is that I'm integrating a Comet server (APE) into my Rails app and need to push updates out based on the Model's events (e.g. after_create). I have tried doing this: ActionView::Base.new(Rails::Configuration.new.view_path).render(:partial => "pages/show", :locals => {:page => self}) This allows me to render simple partials that don't use helpers, but if I try to use link_to in my partial, I receive an error stating: undefined method `url_for' for nil:NilClass I've made sure that the object being passed into "project_path(project)" is not nil. I've also tried including: include ActionView::Helpers::UrlHelper include ActionController::UrlWriter in the module that contains the method that makes the above render call. Does anyone know how to work around this? Thanks

    Read the article

  • Delete or comment out non-working JUnit tests?

    - by Chris Knight
    I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the non-maintained JUnit tests. Do I: 1) comment them out, as they appear to have reasonable, if unmaintained, business logic in them, in the hope that someone eventually uncomments them and fixes them; 2) delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter for evermore; or 3) track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to the long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well-maintained and unit-tested code base?
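
    Not part of the original question, but as a hedged illustration of a middle ground between options 1 and 2: JUnit 4's @Ignore keeps a broken test compiled and visible in the report (shown as skipped) without failing the CI build. The class and assertion below are hypothetical placeholders.

        import org.junit.Ignore;
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class LegacyInvoiceTest {   // hypothetical legacy test class

            @Ignore("Broken since the tax-rule change; kept for reference instead of being deleted")
            @Test
            public void totalsIncludeRegionalTax() {
                double total = 100.0 * (1 + 0.10);  // stand-in for the real business logic
                assertEquals(115.0, total, 0.001);  // deliberately wrong expectation, mimicking a failing legacy test
            }
        }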

    Read the article

  • Is it bad practice to extend the MongoEngine User document?

    - by Soviut
    I'm integrating MongoDB using MongoEngine. It provides auth and session support that a standard pymongo setup would lack. In regular django auth, it's considered bad practice to extend the User model since there's no guarantee it will be used correctly everywhere. Is this the case with mongoengine.django.auth? If it is considered bad practice, what is the best way to attach a separate user profile? Django has mechanisms for specifying an AUTH_PROFILE_MODULE. Is this supported in MongoEngine as well, or should I be manually doing the lookup?

    Read the article

  • Connecting to multiple Firebird databases via Delphi

    - by Branden
    I am integrating a system with two other applications: one uses a Firebird database, while the other (BIS) uses ADO. My Delphi application uses Firebird. I need to read data from my database and insert it into both the BIS database and the other application's Firebird database. I have created separate data modules for each. Sending data to the ADO database works fine, but when writing to the other Firebird DB (with my DB still open) I get strange errors. I have managed to isolate the problem to the second Firebird DB; small data writes seem fine. The data structures are completely different, so I am unable to use a sync tool. Is there a way to overcome this by using multithreading, or a separate memory space for each Firebird instance?

    Read the article

  • RAR Decompress on iPhone Device

    - by DenVog
    Has anyone found a solution for decompressing RAR files on iPhone (not jailbroken)? It would be great if there was a library similar to libz. I did not find anything official from RARlab (http://www.rarlab.com/). Has anyone succeeded in integrating unRAR (http://wdtz.org/iphone/unrar/) into their app? The source code appears to be C++. I know that this has been asked before, but I've not seen a solution other than porting unRAR, which is beyond me. I would greatly appreciate any information or suggestions. Thank you.

    Read the article

  • Shipping Integration into a Rails App

    - by MikeH
    I'm working on integrating a shipping solution into a Rails e-commerce app. We're only going to use one shipping provider, so the question is: FedEx or UPS? I'm wondering what Rails developers think about the technical side of this question. What do you think about the APIs, ease of integration, and focus on developers' needs between FedEx and UPS? I was leaning towards FedEx, but from looking at the developer resources sections of both sites, it seems that UPS might be more developer-friendly. Also, I'm going to be using Shopify's active_shipping gem: http://github.com/Shopify/active_shipping I also based my app on the Spree e-commerce solution, but I don't think that's particularly relevant to the question. Spree wrote a wrapper to integrate active_shipping with the Spree system. I gave away all my points, so SO won't let me post another link in this question, but if you google "Spree active-shipping", their wrapper on GitHub is the first result. Thanks.

    Read the article

  • How to debug a GWT application running on OSGi?

    - by Jaime Soriano
    I'm developing a web UI using GWT. While working only with the widgets I could debug from Eclipse using the Firefox extension, but now that I'm integrating the UI with other OSGi bundles I cannot use this approach. To deploy the GWT application I create the .war and convert it to an OSGi bundle using BND. Then I launch the OSGi container with all the bundles using Pax Runner and Pax Web, and the application works correctly, but when something fails in the generated JavaScript code I don't have any decent error output or debugging facility. Is there any way to launch the GWT application in "debug mode" from OSGi? Any other idea that could help in this scenario?

    Read the article

  • PASS: The Budget Process

    - by Bill Graziano
    Every fiscal year PASS creates a detailed budget.  This helps us set priorities and communicate to our members what we’re going to do in the upcoming year.  You can review the current budget on the PASS Governance page.  That page currently requires you to login but I’m talking with HQ to see if there are any legal issues with opening that up. The Accounting Team The PASS accounting team is two people.  The Executive Vice-President of Finance (“EVP”) and the PASS Accounting Manager.  Sandy Cherry is the accounting manager and works at PASS HQ.  Sandy has been with PASS since we switched management companies in 2007.  Throughout this document when I talk about any actual work related to the budget that’s all Sandy :)  She’s the glue that gets us through this process.  Last year we went through 32 iterations of the budget before the Board approved so it’s a pretty busy time for her us – well, mostly her. Fiscal Year The PASS fiscal year runs from July 1st through June 30th the following year.  Right now we’re in fiscal year 2011.  Our 2010 Summit actually occurred in FY2011.  We switched to this schedule from a calendar year in 2006.  Our goal was to have the Summit occur early in our fiscal year.  That gives us the rest of the year to handle any significant financial impact from the Summit.  If registrations are down we can reduce spending.  If registrations are up we can decide how much to increase our reserves and how much to spend.  Keep in mind that the Summit is budgeted to generate 82% of our revenue this year.  How it performs has a significant impact on our financials.  The other benefit of this fiscal year is that it matches the Microsoft fiscal year.  We sign an annual sponsorship agreement with Microsoft and it’s very helpful that our fiscal years match. This year our budget process will probably start in earnest in March or April.  I’d like to be done in early June so we can publish before July 1st.  I was late publishing it this year and I’m trying not to repeat that. Our Budget Our actual budget is an Excel spreadsheet with 36 sheets.  We remove some of those when we publish it since they include salary information.  The budget is broken up into various portfolios or departments.  We have 20 portfolios.  They include chapters, marketing, virtual chapters, marketing, etc.  Ideally each portfolio is assigned to a Board member.  Each portfolio also typically has a staff person assigned to it.  Portfolios that aren’t assigned to a Board member are monitored by HQ and the ExecVP-Finance (me).  These are typically smaller portfolios such as deferred membership or Summit futures.  (More on those in a later post.)  All portfolios are reviewed by all Board members during the budget approval process, when interim financials are released internally and at year-end. The Process Our first step is to budget revenues.  The Board determines a target attendee number.  We have formulas based on historical performance that convert that to an overall attendee revenue number.  Other revenue projections (such as vendor sponsorships) come from different parts of the organization.  I hope to have another post with more details on how we project revenues. The next step is to budget expenses.  Board members fill out a sample spreadsheet with their budget for the year.  They can add line items and notes describing what the amounts are for.  Each Board portfolio typically has from 10 to 30 line items.  Any new initiatives they want to pursue needs to be budgeted.  
The Summit operations budget is managed by HQ.  It includes the cost for food, electrical, internet, etc.  Most of these come from our estimate of attendees and our contract with the convention center.  During this process the Board can ask for more or less to be spent on various line items.  For example, if we weren’t happy with the Internet at the last Summit we can ask them to look into different options and/or increasing the budget.  HQ will also make adjustments to these numbers based on what they see at the events and the feedback we receive on the surveys. After we have all the initial estimates we start reviewing the entire budget.  It is sent out to the Board and we can see what each portfolio requested and what the overall profit and loss number is.  We usually start with too much in expenses and need to cut.  In years past the Board started haggling over these numbers as a group.  This past year they decided I should take a first cut and present them with a reasonable budget and a list of what I changed.  That worked well and I think we’ll continue to do that in the future. We go through a number of iterations on the budget.  If I remember correctly, we went through 32 iterations before we passed the budget.  At each iteration various revenue and expense numbers can change.  Keep in mind that the PASS budget has 200+ line items spread over 20 portfolios.  Many of these depend on other numbers.  For example, if we decide increase the projected attendees that cascades through our budget.  At each iteration we list what changed and the impact.  Ideally these discussions will take place at a face-to-face Board meeting.  Many of them also take place over the phone.  Board members explain any increase they are asking for while performing due diligence on other budget requests.  Eventually a budget emerges and is passed. Publishing After the budget is passed we create a version without the formulas and salaries for posting on the web site.  Sandy also creates some charts to help our members understand the budget.  The EVP writes a nice little letter describing some of the changes from last year’s budget.  You can see my letter and our budget on the PASS Governance page. And then, eight months later, we start all over again.

    Read the article

  • How to best integrate HTML/design with C# code in ASP.NET or ASP.NET MVC?

    - by LuftMensch
    We're working on a new ASP.NET site. The last major site we did was in classic ASP; the procedure we used there was to have the HTML completed first, then "bring it to life" with the ASP code. In the ASP.NET world, how does this work? I.e. how do the designers do their work if much of the markup is actually being generated by the server controls? We are also looking at ASP.NET MVC as a potential lightweight alternative. I would be very interested to know what has worked best for people in both scenarios in terms of working with the designers and integrating their work with the code.

    Read the article

  • Creating a wrapper around a 3rd-party assembly to swap out and decouple

    - by mrblah
    I have an email component that I am integrating into my application, and I'm looking for some tips on how I should build a wrapper around it so I can swap it out for another 3rd-party component if needed. My approach right now is to: build an interface with the functionality I need; then create a class that implements the interface, using the 3rd-party component inside this class. Any usage of this component will then be via the interface, so something like: IPop3 pop3 = new AcmeIncePop3Wrapper(); pop3.connect(); and inside AcmeIncePop3Wrapper will be: public void connect() { AcmeIncePop3 pop = new AcmeIncePop3(); pop.connect(); } Is that a good approach? I could probably add another abstraction by using Ninject so I could swap out implementations, but really this seems to be all I need, as I don't expect to be changing 3rd-party assemblies every day; I just don't want things to be so tightly coupled.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, as you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the amount of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.Photo: Erik Araujo, stock.xchng

    Read the article

  • Digital signature integration with software written in Java

    - by Serkan Kasapbasi
    Hi everyone, I'm an extreme rookie in this security field, so please forgive me if my questions are dumb. I have been asked to convert and migrate a couple of "Lotus Forms" forms to our software, which is written in Java. One thing in the forms that bothers me is digital signatures. These forms can be signed with digital signatures, probably generated by "Silanis Approve-it". As I said before, I don't have much knowledge about this technology, and strangely I couldn't find any tutorial or example of integrating digital signatures with Java. So what are the possibilities here? How can my code read a digital signature and sign a document with that signature? There should be an API or something provided by the vendors, right? :)
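
    For orientation only, and not specific to Silanis or Lotus Forms: the JDK's own java.security API covers the basic sign-and-verify operations that vendor SDKs build on. A minimal sketch, assuming an in-memory RSA key pair; real deployments load keys from a keystore or smart card.

        // Generic JCA example: sign a document's bytes and verify the signature.
        // This shows the plain java.security API only; it does not parse the
        // Silanis / Lotus Forms signature format mentioned in the question.
        import java.nio.charset.StandardCharsets;
        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.security.Signature;

        public class SignatureDemo {
            public static void main(String[] args) throws Exception {
                byte[] document = "form contents".getBytes(StandardCharsets.UTF_8);

                // In a real application the key pair would come from a keystore or smart card.
                KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

                Signature signer = Signature.getInstance("SHA256withRSA");
                signer.initSign(keys.getPrivate());
                signer.update(document);
                byte[] sigBytes = signer.sign();          // the detached signature

                Signature verifier = Signature.getInstance("SHA256withRSA");
                verifier.initVerify(keys.getPublic());
                verifier.update(document);
                System.out.println("valid? " + verifier.verify(sigBytes));
            }
        }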

    Read the article

  • Why aren't Admob click callback delegate methods getting called?

    - by executor21
    I'm integrating the latest version of the AdMob SDK (version 20100412) into my app. The ads get displayed, but I need the app to make some changes when an ad is clicked and AdMob displays a full-screen browser. However, none of the callback methods (willPresentFullScreenModal, didPresentFullScreenModal, willDismissFullScreenModal, and didDismissFullScreenModal) is called, even though other delegate methods are. Why aren't these callbacks being made? They were in previous versions of the SDK, and the sample app doesn't use them, so it's no help. EDIT: removed the double negative from the question title

    Read the article

  • Architectural decision: Qt or the Eclipse Platform?

    - by umanga
    We are in the process of designing a tool to be used with an HDEM (high-definition electron microscope). We get stacks of 2D images from the HDEM, and the first step is detecting borders on the sections. After detecting the edges of the 2D slices, the next step is to construct a 3D model from these slices. The border-detection algorithm(s) are implemented by one of our professors; he has used C (to gain high performance, and it will probably be parallelised in future) and suggests we continue with it. We have to develop a comprehensive UI, 3D viewer, 2D editor, etc., and use this algorithm. The application should support the usual features like project save/open, undo, redo, etc. Our technology options are: A) build the entire platform from scratch using Qt; B) use the Eclipse Platform. Our concern is that if we choose A) we can easily integrate the border-detection algorithm(s), because the development environment is C/C++, but we have to implement the basic features from scratch. If we choose B) we get the basic features from the Eclipse Platform, but integrating C libraries is going to be a tedious task. Any suggestions on this?
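
    On the option B concern, calling the professor's C code from Java is usually done through JNI or JNA. Here is a hedged JNA sketch of what that bridge could look like, assuming the C code is compiled as a shared library; the library name "border" and the detect_borders(...) signature are hypothetical stand-ins, not the real API.

        // Option B: calling native C border-detection code from Java via JNA.
        import com.sun.jna.Library;
        import com.sun.jna.Native;

        public class BorderDetection {

            public interface BorderLib extends Library {
                // Loads libborder.so / border.dll from the JNA library path.
                // (Native.loadLibrary(...) on older JNA versions.)
                BorderLib INSTANCE = Native.load("border", BorderLib.class);

                // Assumed C signature:
                // int detect_borders(const unsigned char *img, int w, int h, unsigned char *mask);
                int detect_borders(byte[] image, int width, int height, byte[] outMask);
            }

            public static byte[] detect(byte[] grayscaleSlice, int width, int height) {
                byte[] mask = new byte[width * height];
                int rc = BorderLib.INSTANCE.detect_borders(grayscaleSlice, width, height, mask);
                if (rc != 0) {
                    throw new IllegalStateException("detect_borders failed with code " + rc);
                }
                return mask; // one byte per pixel: edge / non-edge
            }
        }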

    Read the article

  • Building a complete online payment gateway like Paypal

    - by John Stewart
    This question isn't about integrating an existing payment gateway into my site; it is more of an architectural question. I want to build a system similar to PayPal. Now, I understand that PayPal offers a lot of features under one roof and I can't implement all of them at once; I want to implement the core functionality of PayPal and other such services. So my question (or rather, discussion) is about how one would go about building such a system. Some points to discuss: handling payments through existing banks (I am guessing that I would need access to local bank protocols for this); allowing users to securely store and process their payments; and how PayPal handles the transactions. Thoughts?
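
    Not from the original post, but as a hedged sketch of one core building block such a system needs, a double-entry ledger that records every movement of funds, here is a minimal in-memory version. All names are illustrative; a real gateway adds persistence, idempotency keys, and bank settlement on top.

        // Sketch of a double-entry ledger: every transfer debits one account and
        // credits another in a single step, so balances always reconcile.
        import java.math.BigDecimal;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class Ledger {
            private final Map<String, BigDecimal> balances = new HashMap<>();
            private final List<String> journal = new ArrayList<>();  // append-only audit trail

            public void open(String account, BigDecimal openingBalance) {
                balances.put(account, openingBalance);
            }

            public synchronized void transfer(String from, String to, BigDecimal amount) {
                BigDecimal src = balances.getOrDefault(from, BigDecimal.ZERO);
                if (src.compareTo(amount) < 0) {
                    throw new IllegalStateException("insufficient funds in " + from);
                }
                balances.put(from, src.subtract(amount));                                  // debit
                balances.put(to, balances.getOrDefault(to, BigDecimal.ZERO).add(amount));  // credit
                journal.add(from + " -> " + to + " : " + amount);                          // journal entry
            }

            public BigDecimal balance(String account) {
                return balances.getOrDefault(account, BigDecimal.ZERO);
            }
        }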

    Read the article

  • How can I integrate graphs and JPEG images with RAVE Reports?

    - by JonDave of the Philippines
    I really love RAVE Reports for creating multiple reports, especially with formulas and accounting systems... but recently I have been having problems integrating JPEG pictures and graphs into my newly developed little ERP system, written in Delphi. I bought some JPEG components, but they seem problematic. I have also been experiencing some irregularities with my RAVE Reports. When I run my program and then try to preview some reports, it seems to run fine, but when I close the program the EXE file is still in the taskbar. I need to Ctrl-Alt-Delete first before I can use the report previews normally. If I don't, a RAVE Report error message saying "Stream read error" will appear every time I click the PRINT button, even though I used FreeAndNil to free the memory stream when the report preview form closes. When I run my application without previewing RAVE reports, the program closes perfectly. Any suggestions and recommendations would be an enormous help. Thank you.

    Read the article

  • Why does my website display inside the FB login window after logging in through Facebook?

    - by Vaibhav Bhalke
    Hi all, I am integrating a Facebook application with our website. My website uses Java's Google Web Toolkit (GWT) framework, version 2.0.1. When we press the FB Connect button, Facebook's login window appears and the user enters an email and password. But when the user then clicks Facebook's Connect button, our website is displayed inside that login window. The authentication and connect URL is correct: http://localhost:8090/websitename/ How do I solve this problem? Where did I make a mistake? Is there any solution? When I did the same thing in GWT's development/hosted mode, with the authentication and connect URL http://127.0.0.1:8888/, it worked properly. So why does it create a problem on localhost?

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, pl feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog  starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build Vs Buy benefit Don’t have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E- Business Suite, Peoplesoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model    BI apps have a very rich dimensionsal data model built over 10 years that incorporates best practises from BI modeling perspective as well as reflect the source system complexities  Thanks for reading a long post, and be on the lookout for future posts.  We will look forward to your valuable feedback on these topics as well as suggestions on what other topics would you like us to cover. I Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting  Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs. 
As Was hierarchy support Currency Conversion :-  Support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic Data translations Code standardization (Domains) Historical Snapshots Cycle and process lifecycle computations Balance Facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals Complex event Interpretation of source data – E.g. o    What constitutes a transfer o    Deriving supervisors via position hierarchy o    Deriving primary assignment in PSFT o    Categorizing and transposition to measures of Payroll Balances to specific metrics to support side by side comparison of measures of for example Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments. o    Counting of Events – E.g. converting events to fact counters so that for example the number of hires can easily be added up and compared alongside the total transfers and terminations. Multi pass processing of multiple sources e.g. headcount, salary, promotion, performance to allow side to side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery Calculation of complex measures examples: o    COGs, DSO, DPO, Inventory turns  etc o    Transfers within a Hierarchy or out of / into a hierarchy relative to view point in hierarchy. Configurability and Extensibility support  BI apps offer support for extensibility for various entities as automated extensibility or part of extension methodology Key Flex fields and Descriptive Flex support  Extensible attribute support (JDE)  Conformed Domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines into a single conformed model Multi Source Multi Technology Orchestration – creates load plan taking into account task dependencies and customers deployment to generate a plan based on a customers of multiple complex etl tasks Plan optimization allowing parallel ETL tasks Oracle: Bit map indexes and partition management High availability support    Follow the sun support. 
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task based setups flows Data Lineage Functional setup performed in Web UI by Functional person Configuration Test to Production support Security BI apps support both data and object security enabling implementations to quickly configure the application as per the reporting security needs Fine grain object security at report / dashboard and presentation catalog level Data Security integration with source systems  Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth etc) Common Currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practises and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates& the ability for customers to create their own aggregates easily on warehouse facts allows for scalable end user performance Incremental extracts and loads Incremental Aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with Golden Gate Micro ETL support Bitmap Indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source Specfic Staging and Real Time Schema Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading extenral data External data enrichment choices : UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases :  Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice : ODI (coming), Informatica Extensible and Customizable Extensible architecture and Methodology to add custom and external content Upgradable across releases

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing.

    In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    28                19.0               8115                 8115
        MDB           114               77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.)

    So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data
    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data; product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users
    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use the MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further, asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature
    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"
    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless to collect it.

    People get obsessed with significance levels
    The first thing that lots of people do with this data is a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error and misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity
    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret or act on any data
    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    13                9.0                1115                 1115
        MDB           5                 4.2                449                  449

    Trial users:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    15                10.0               7000                 7000
        MDB           114               77.6               1000                 1000

    How do you interpret this data? It's one of:
    1. Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    2. Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    3. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    4. We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage
    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:
    Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    Minor optimizations. Is the text box big enough for average user input?
    Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and act on.

    Conclusion
    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article
