Search Results

Search found 58465 results on 2339 pages for 'ephemeral data'.


  • PostgreSQL data diff

    - by skanatek
    Note: this question is not about syncing database schema/structure.
    Problem: In my web application I have a PostgreSQL database server (PGS) and, on a separate machine, a business logic server (BLS) which regularly (every minute or two) runs a 'SELECT ALL' query against PGS. The problem is that this query can easily return 50-200 MB each time. Architecture-wise, it is clearly not a good idea to transfer that much data that frequently over the web.
    Possible solution: What I would like to do is run some diff tool on PGS and compare the new query result with the previous one (all of this happening on PGS). Once the comparison is done, I would like to get a dump from PGS and transfer it to BLS. I expect a diff-based dump to be much, much smaller than the whole 'SELECT ALL' result.
    Question: Is there any data diff tool for PostgreSQL that can compare data between two tables or two dumps? Note: I would prefer an open-source tool.
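
    As a starting point (my own sketch, not from the original thread), a row-level diff can be computed inside PostgreSQL itself with EXCEPT, so only the changed rows ever leave PGS. The table names (items, items_prev) and connection string are hypothetical; the sketch uses the Npgsql driver from C#:

        using System;
        using Npgsql;

        class PgDiff
        {
            static void Main()
            {
                // Compute added and removed rows entirely on the PGS side.
                // 'items' holds the current data, 'items_prev' the last snapshot
                // (both hypothetical names).
                const string diffSql = @"
                    SELECT 'added' AS change, a.* FROM (TABLE items EXCEPT TABLE items_prev) a
                    UNION ALL
                    SELECT 'removed' AS change, r.* FROM (TABLE items_prev EXCEPT TABLE items) r;";

                using (var conn = new NpgsqlConnection("Host=pgs;Database=app;Username=app"))
                {
                    conn.Open();
                    using (var cmd = new NpgsqlCommand(diffSql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Ship only these diff rows to the BLS instead of 50-200 MB.
                            Console.WriteLine($"{reader.GetString(0)}: {reader[1]}");
                        }
                    }
                }
            }
        }

    After each comparison, refreshing the snapshot (TRUNCATE items_prev; INSERT INTO items_prev TABLE items;) keeps the baseline current. Note that EXCEPT treats rows as whole tuples, so an updated row shows up as one 'removed' plus one 'added'.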

    Read the article

  • Exposing warnings/errors from data objects (that are also returned as lists)

    - by Oren Schwartz
    I'm exposing data objects via a service-oriented assembly (which in future might become a WCF service). The data object is tree-structured and has a lot of properties. Moreover, some services return a single object while others return a list of them (which rules out throwing exceptions). I now want to expose data-flow warnings, and I'm wondering what the best way to do it is, with two things to consider: (1) separation, (2) ease of access. On the one hand, I want the UI team to be able to access a field's warnings (or errors) without having to map the field names to an external source, but on the other hand, I don't want the warnings "hanging" off the object itself (as I don't consider that a correct design). I thought of creating a new type of wrapper for each field that would expose events, so consumers register for the ones they care about (but I'm totally not sure). I'll be happy to hear your thoughts. Could you please direct me to a suitable design pattern? Which one would work best here? Thank you very much!
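
    One common approach (my own sketch, not from the original answers) is the Notification pattern: return the payload together with a separate collection of field-keyed messages, so warnings travel with the result without being hung on the data object itself. All type names here are hypothetical:

        using System.Collections.Generic;

        // Severity of a single data-flow message.
        public enum Severity { Warning, Error }

        // A message addressed by field path, so the UI needs no external mapping.
        public class FieldMessage
        {
            public string FieldPath { get; set; }   // e.g. "Customer.Address.Zip"
            public Severity Severity { get; set; }
            public string Text { get; set; }
        }

        // Wraps any payload (a single object or a list) plus its messages,
        // keeping validation results separate from the data objects.
        public class ServiceResult<T>
        {
            public T Payload { get; set; }
            public IList<FieldMessage> Messages { get; } = new List<FieldMessage>();
        }

    The UI side then filters Messages by the FieldPath values it is displaying, which satisfies both separation and ease of access, and the same shape works for list-returning services where exceptions are not an option.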

    Read the article

  • Using Sql Server Change Data Capture with a frequently changing schema

    - by Pete
    We are looking into enabling SQL Server Change Data Capture for a new subsystem we are building. It's not really because we need it, but we are being pushed to provide complete history traceability, and CDC would nicely solve this requirement with minimal effort on our part. We follow an agile development process, which in this case means that we frequently make changes to the database schema, e.g. adding new columns, moving data to other columns, etc. We did a small test where we created a table, enabled CDC for that table, and then added a new column to the table. Changes to the new column are not registered in the CDC table. Is there a mechanism to update the CDC table to the new schema, and are there any best practices for how to deal with captured data when migrating the database schema?
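
    For reference (a sketch under my own assumptions, not from the original thread): a CDC capture instance keeps the column list it had when it was created, so the usual approach is to create a second capture instance that includes the new column, migrate consumers (and any history you need) to it, then drop the old instance. A hedged C# sketch driving the documented stored procedures; the table name dbo.Orders and the instance names are hypothetical:

        using Microsoft.Data.SqlClient;

        class CdcSchemaChange
        {
            static void Main()
            {
                // Second capture instance that sees the new column.
                const string enableV2 = @"
                    EXEC sys.sp_cdc_enable_table
                         @source_schema    = N'dbo',
                         @source_name      = N'Orders',
                         @role_name        = NULL,
                         @capture_instance = N'dbo_Orders_v2';";

                // Retire the original instance once consumers have switched over.
                const string disableV1 = @"
                    EXEC sys.sp_cdc_disable_table
                         @source_schema    = N'dbo',
                         @source_name      = N'Orders',
                         @capture_instance = N'dbo_Orders';";

                using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(enableV2, conn)) cmd.ExecuteNonQuery();
                    // ...archive rows from the old change table (cdc.dbo_Orders_CT) here if needed...
                    using (var cmd = new SqlCommand(disableV1, conn)) cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server allows up to two capture instances per table precisely to support this kind of side-by-side migration.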

    Read the article

  • Webcast: Oracle Loans Overview - Features, Demonstration & Data Model

    - by Annemarie Provisero
    Webcast: Oracle Loans Overview - Features, Demonstration & Data Model
    Date: November 13, 2013 at 10 am ET, 9 am CT, 8 am MT, 7 am PT
    Come learn about Oracle Loans features and data model. This one-hour session is recommended for technical and functional users who use, or are planning to use, Oracle Loans. Topics will include:
      - Definition and feature summary
      - Key business concepts for Oracle Loans
      - Direct Loans demonstration
      - Introduction to the Loans data model
    Bring your questions! For more details on how to register, see Doc ID 1590843.1. Remember that you can access a full listing of all future webcasts, as well as replays, from Doc ID 7409661.1.

    Read the article

  • ASP Response.Flush() flushes partial data

    - by Anshu
    I am developing a web app with an ASP.NET server side, and I use an iframe for data push. An ASP.NET handler flushes some JavaScript to the iframe every once in a while: context.Response.Write("<script language='javascript'>top.update('lala');</script>"); context.Response.Flush(); My problem is that sometimes, when I receive the data, I don't get the full text. For example, I will receive just: update('lala'); One workaround I have is a thread that flushes '..........' every 500 ms. (Then I will receive "script......", which completes my JavaScript.) However, I am sure there must be a way to have Response.Flush() send the whole chunk of data. Does someone have an idea of how to use Response.Flush() properly? Thank you!
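
    A hedged note (my own explanation, not from the original answers): Flush() hands the buffered bytes to the network, but chunk boundaries on the wire are decided by the server, proxies and the client, so the receiver cannot assume one Flush equals one complete <script> element. What helps is writing each message as a single complete script block in one Write call, with padding so small buffers get pushed out; the client should only react when top.update(...) actually executes. Sketch, assuming an ASP.NET HttpContext:

        using System.Text;
        using System.Web;

        public static class CometPush
        {
            // Emit one push message as a single, complete <script> element.
            public static void Push(HttpContext context, string message)
            {
                var sb = new StringBuilder();
                sb.Append("<script language='javascript'>top.update('")
                  .Append(HttpUtility.JavaScriptStringEncode(message))  // escape quotes etc.
                  .Append("');</script>");
                sb.Append(new string(' ', 256));  // padding to discourage buffer coalescing

                context.Response.Write(sb.ToString());
                context.Response.Flush();
            }
        }

    Since the browser only executes a <script> element once its closing tag has arrived, a partially delivered block is harmless as long as the rest follows; it just must never be interleaved with another Write.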

    Read the article

  • Dynamic Quad/Oct Trees

    - by KKlouzal
    I've recently discovered the power of quadtrees and octrees and their role in culling/LOD applications; however, I've been pondering implementations of a dynamic quad/oct tree. Such a tree would not require a complete rebuild when some of the underlying data (vertex data) changes. Would it be possible to create such a tree? What would it look like? Could someone point me in the right direction to get started? In my scenario, the application is a dynamically changing spherical landscape with over 10,000,000 vertices. The use of quad/oct trees is obvious for culling & LOD, as is the benefit of not having to completely recompute the tree when the underlying data changes.
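
    As one hedged starting point (my own sketch, not from the original answers): dynamic trees usually update incrementally rather than rebuild. When an item's bounds change, walk up from its current node until an ancestor still contains it, then reinsert downward from there; most frames this touches only a handful of nodes. A minimal C# sketch, with Bounds and Item as simplified placeholder types:

        using System.Collections.Generic;

        public struct Bounds
        {
            public float MinX, MinY, MinZ, MaxX, MaxY, MaxZ;
            public bool Contains(Bounds b) =>
                b.MinX >= MinX && b.MaxX <= MaxX &&
                b.MinY >= MinY && b.MaxY <= MaxY &&
                b.MinZ >= MinZ && b.MaxZ <= MaxZ;
        }

        public class Item { public Bounds Bounds; }

        public class OctreeNode
        {
            public Bounds Region;
            public OctreeNode Parent;
            public OctreeNode[] Children;                 // null until subdivided
            public List<Item> Items = new List<Item>();

            // Called when an item's vertex data moved: relocate just this item,
            // never rebuild the whole tree.
            public void Update(Item item)
            {
                if (Region.Contains(item.Bounds))
                    return;                               // still fits: nothing to do

                Items.Remove(item);

                // Walk up until an ancestor can hold the new bounds.
                OctreeNode node = Parent;
                while (node != null && !node.Region.Contains(item.Bounds))
                    node = node.Parent;

                (node ?? this).InsertDownward(item);      // reinsert from there
            }

            void InsertDownward(Item item)
            {
                // Standard subdivide-and-place insertion (omitted for brevity).
            }
        }

    Loose octrees (each node's region enlarged by a constant factor) make this even cheaper, because small movements rarely force a node change; for a spherical landscape the tree is often built over cube-to-sphere patches rather than raw vertices.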

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL and getting some basic structures in place. My first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it; however, I've read that it might be better to share VBOs across multiple objects. Also, I read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters) and instead choose a fixed size for a VBO and just use a range of the buffer. I don't think the size of the buffer data would change too often, but surely it would be better to allocate only the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.
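
    For context (my own sketch, not from the original answers): the usual compromise is to allocate one shared VBO once at a fixed capacity and give each object a byte range inside it, updating ranges with glBufferSubData instead of reallocating. A hedged C# sketch using OpenTK-style bindings; the 4 MB capacity and the offsets are assumptions, not recommendations:

        using System;
        using OpenTK.Graphics.OpenGL;

        public static class SharedVbo
        {
            public static int Create()
            {
                // One buffer, allocated once; objects sub-allocate ranges from it.
                int vbo = GL.GenBuffer();
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);

                const int capacity = 4 * 1024 * 1024;  // fixed size, chosen up front
                GL.BufferData(BufferTarget.ArrayBuffer, capacity, IntPtr.Zero,
                              BufferUsageHint.DynamicDraw);
                return vbo;
            }

            // Update just one object's slice; no reallocation, no glBufferData.
            public static void UploadRange(int vbo, int byteOffset, float[] vertices)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BufferSubData(BufferTarget.ArrayBuffer, (IntPtr)byteOffset,
                                 vertices.Length * sizeof(float), vertices);
            }
        }

    A simple free-list allocator over the buffer handles objects coming and going; if the buffer fills up, allocating a second fixed-size VBO is usually cheaper than growing the first one with glBufferData.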

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or for educating ourselves on broader topics so that we can solve problems in the future. With dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they (and sometimes I) haven’t put in the effort to learn what resources we have.
    That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback: what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well; that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact that at Microsoft we listen; there is a real live person who reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors, yours truly most especially, want to know what you think.
    In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them. I’ll try to bring up things like this from time to time that I find useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to decide whether to research further, bookmark, or forward on to a circle of friends where you think it might help them.
    Data Driven Design Concepts: http://msdn.microsoft.com/en-us/library/windowsazure/jj156154
    I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or data architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard: you can suggest changes, talk about how useful (or not) it is, and so on. Here’s that address: [email protected]
    End-to-End Example of a Complete Hybrid Application, with Live Demo: https://azurestocktrader.cloudapp.net/Default.aspx
    I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work. If you’ve ever wanted to learn how a complex, complete, hybrid application bridges on-premises systems with cloud-based databases, code, functions and more, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else.
    Using a Cloud-Based Service: https://azureconfigweb.cloudapp.net/Default.aspx
    Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment.
    So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”

    Read the article

  • Alternatives for saving data with jQuery

    - by Phil Vallone
    I am not sure if this question is considered too broad, but I would like to reach out to my fellow programmers to see what alternatives are out there for saving data using jQuery. I have a content management system that generates a set of HTML pages called an IETM (Interactive Electronic Technical Manual). The pages are written in HTML and use jQuery. The IETM is meant to be lightweight, portable, and run on most modern browsers. I am looking for a way to save data. I have considered cookies and SQLite. Are there any other alternatives for saving data using jQuery?

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic Resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.
    WebLogic Resource Permissions
    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource credentials to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab on the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open: anyone can do anything. As soon as you add one policy for a permission, all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and users can get to your database.
    There are three use cases:

    API                            | Use database credentials | User for permission checking
    getConnection()                | true or false            | current WLS user
    getConnection(user, password)  | false                    | user/password from the API
    getConnection(user, password)  | true                     | current WLS user

    If a simple getConnection() is used, or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.
    Example
    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.
    On the database side, the following objects are configured:
    - Database users scott, jdbcqa, jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;
    The following WebLogic Data Source objects are configured:
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"
    Results, by the settings of use-database-credentials and identity-based-connection-pooling-enabled:

    use-database-credentials=true, identity-based=true:
    - getConnection(scott,***): Identity scott, Client weblogic, Proxy null
    - getConnection(weblogic,***): fails - weblogic is not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
    - getConnection(): default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials=false, identity-based=true:
    - getConnection(scott,***): fails - scott is not a WLS user
    - getConnection(weblogic,***): User scott, Client scott, Proxy null
    - getConnection(jdbcqa3,***): fails - jdbcqa3 is not a WLS user
    - getConnection(): User scott, Client scott, Proxy null

    use-database-credentials=true, identity-based=false:
    - getConnection(scott,***): proxy for scott fails
    - getConnection(weblogic,***): fails - weblogic is not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
    - getConnection(): default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials=false, identity-based=false:
    - getConnection(scott,***): fails - scott is not a WLS user
    - getConnection(weblogic,***): default user jdbcqa, Client scott, Proxy null
    - getConnection(jdbcqa3,***): fails - jdbcqa3 is not a WLS user
    - getConnection(): default user jdbcqa, Client scott, Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this were not an Oracle thin driver, the one case with the non-null Proxy in the table above would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver. When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,...) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, ...) or getConnection().
    Summary
    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (a property on the connection or a real user), performance, coordination of the various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • Off-site Cardholder Data Storage

    - by LinuxGnut
    Is there a service or site out there that will store cardholder data for me? I don't need any kind of transaction processing or recurring billing... I just need somewhere I can store data until someone in my company is able to look at it. The specific need is allowing customers to input data that will be used for credit checks: name, address, credit card(s), and such. Google Checkout, PayPal, NetSuite, and Authorize.net seem to be what everyone suggests to me, but they don't offer what I need -- they're just payment gateways.

    Read the article

  • Retail Link data storage requirements

    - by Randy Walker
    I was asked today how much data an average Retail Link analyst (a Walmart vendor) would consume. I thought I would write this small post for future reference. Of course, this vastly depends on the number of skus, how long you want to archive data, and whether you want store-level sales. Most reports take up very little space. Most times when you download a report (total sales per sku for last week), you will overwrite the previous week's report. However, most users will take the data inside their downloaded report and add it to a database or a larger Excel spreadsheet. This way, the user has a history of the sales of each item/sku per week over the last 2+ years. I would estimate one user consumes around 1-2 GB of space, at most, over the course of 2 years. If you start archiving store-level sales, those numbers can quickly increase to 10 GB or more.
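
    A rough back-of-the-envelope check (my own assumed numbers, not from the original post) shows why the sku-week history stays small while store-level detail does not:

        using System;

        class RetailLinkSizing
        {
            static void Main()
            {
                // Hypothetical vendor: 5,000 skus, 104 weeks (2 years) of history,
                // ~200 bytes per sku-week row once spreadsheet overhead is included.
                const long skus = 5000;
                const long weeks = 104;
                const long bytesPerRow = 200;

                long skuWeek = skus * weeks * bytesPerRow;         // 104,000,000 bytes
                Console.WriteLine($"{skuWeek / 1048576.0:F0} MB"); // ~99 MB

                // Store-level sales multiply rows by the number of stores carrying
                // each sku; even 200 stores per sku turns ~99 MB into ~19 GB.
                long storeLevel = skuWeek * 200;
                Console.WriteLine($"{storeLevel / 1073741824.0:F1} GB");
            }
        }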

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF.
    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, so that one that has moved between parks is treated as a single coaster with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.
    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.
    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.
    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though.
    When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL, just in the interest of time. It's not that I couldn't do what I needed with EF; it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.
    There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
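
    As a hedged illustration of the "one line of code in your repository class" point (the entity and property names are my own guesses, not CoasterBuzz's actual schema), an EF code-first graph read looks roughly like this:

        using System.Collections.Generic;
        using System.Data.Entity;   // EF "code first", as described in the post
        using System.Linq;

        public class Park { public int ParkId { get; set; } public string Name { get; set; } }
        public class CoasterInstance { public int CoasterInstanceId { get; set; } public string Name { get; set; } }
        public class BannerImage { public int BannerImageId { get; set; } public byte[] Data { get; set; } }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public virtual Park Park { get; set; }                              // navigation property
            public virtual ICollection<CoasterInstance> Instances { get; set; } // relocations over time
            public virtual BannerImage Banner { get; set; }
        }

        public class CoasterContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }
        }

        public class CoasterRepository
        {
            // One statement hydrates the whole graph the view needs:
            // coaster + park + instances + banner image.
            public Coaster Get(int id)
            {
                using (var db = new CoasterContext())
                {
                    return db.Coasters
                             .Include(c => c.Park)
                             .Include(c => c.Instances)
                             .Include(c => c.Banner)
                             .Single(c => c.CoasterId == id);
                }
            }
        }

    Mapping to non-conventional table names, as the post mentions, is then a matter of [Table]/[Column] attributes or the fluent API in OnModelCreating.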

    Read the article

  • "Adoption des Big Data : ce n'est que le commencement", selon Talend, qui analyse cette nouvelle tendance

    « Big Data adoption: this is only the beginning », according to Talend, which says that companies are putting Big Data strategies in place. Data volumes are growing at an ever-increasing rate. More and more, companies are exploring their uses and finding ways to process, exploit, analyze and mine the data they collect, in order to extract the knowledge on which their future decisions will be based. Yves de Montcheuil, VP Marketing, Talend, shares his analysis following a new survey on Big Data adoption carried out by the vendor among professionals involved in delivering data solutions, which confirms this...

    Read the article

  • Unallocated space with important data

    - by Chethan S.
    I used GParted to convert a primary partition to an extended one after copying the data to another partition. After creating the extended partition, I moved the data back. To my utter shock, after a restart I found that the new extended partition had turned into "unallocated space". I tried installing TestDisk. TestDisk could identify the partition as a primary partition, not the newly created extended partition. So what should I do now? I badly want the data back.

    Read the article

  • My user can't upload files to folders owned by www-data

    - by Thomas Gautvedt
    I think I have screwed up my permissions in Ubuntu. I am using my server to run PHP. I recently ran across a problem where PHP could not create directories in the /var/www directory, so I searched around on the internet. Now PHP can write and access anything like it should, but as a user, I can't create new folders or files anymore. Right now, the permissions for folders are like this: drwxrwsr-x 2 www-data www-data [folders] And these are the permissions of files I upload using SFTP: -rw-rw-r-- 1 gautvedt www-data [files] What have I done wrong, and how can I change this?

    Read the article

  • Essbase Data precision unraveled

    - by THE
    (guest reference added by Nancy) Anyone who has been working with data import and export, as well as the Essbase Excel Add-in, has probably come across a phenomenon called data precision: lots of zeroes are added to a number that has been calculated by Essbase, and it gets displayed as "10.0000000000001" or "9.99999999999999" instead of a simple "10". This question is one of the recurring ones that Support gets asked over and over again, and we therefore feel the need to give an explanation for it: I would like to point you to the note The Limits of Data Precision in Essbase (Doc ID 1311188.1), which explains in detail why these numbers show up and what to do about it.
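
    For intuition (my own illustration, not from the note): calculated values are held as 64-bit IEEE 754 doubles, which cannot represent most decimal fractions exactly, so arithmetic leaves residue around the 15th significant digit:

        using System;

        class Precision
        {
            static void Main()
            {
                // Summing 0.1 one hundred times does not give exactly 10.
                double total = 0.0;
                for (int i = 0; i < 100; i++)
                    total += 0.1;

                Console.WriteLine(total.ToString("R"));  // 9.99999999999998, not 10
            }
        }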

    Read the article

  • ODAC 11.2 Release 3 (11.2.0.2.1) now available: 64-bit ODP.NET and TimesTen support

    - by Yusuke.Yamamoto
    ODAC 11.2 Release 3 (11.2.0.2.1) has been released. Oracle Data Provider for .NET (ODP.NET) and Oracle Providers for ASP.NET are now available in 64-bit versions, and the release adds support for the Oracle TimesTen In-Memory Database. See also: Oracle Data Access Components (ODAC) downloads | Oracle Data Access Components (ODAC) overview | .NET developer resources for .NET + Oracle Database 11g

    Read the article

  • Google BigQuery - Best Practices for Loading your Data and open Office Hours

    Michael Manoochehri and Ryan Boyd from the DevRel team for cloud data services will be streaming to you live! They'll be discussing how to load your data into BigQuery and the various options available -- from commercial ETL tools to App Engine's Pipeline API and MapReduce frameworks, to simple UNIX command-line tools. They'll then open it up for a general office hours on ingestion and other topics. Please use the moderator link to ask your questions. From: GoogleDevelopers

    Read the article

  • Web-based data generator

    - by John Paul Cook
    One of my coworkers told me about Mockaroo, a web-based data generator. I needed some test data for upcoming blog posts, so I decided to give it a try. It's pretty good. I had to use Firefox because of problems running Mockaroo on Internet Explorer 11. Using the defaults, except for changing the format to SQL, it generated output that looked something like the following. Mockaroo is so good that it generates fake data that could accidentally be real, such as email addresses. Consequently, I edited...(read more)

    Read the article

  • Best way of accessing data on different pages

    - by Gaz83
    I'm looking for a way to load data into properties/variables etc. and have this information accessible to all the pages of my app. I want the information to be loaded via a background thread to keep the UI thread free. Some of the pages will have various properties of their controls bound to these global properties. Here is what I tried. Created a static class: all pages could access the data, but couldn't bind. Changed the static class to a singleton and used DependencyProperty: all pages could access the data and binding worked fine, but there were cross-threading issues when accessing via background threads. I have read about this subject in various places but haven't really come up with the best method for my situation.
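
    One common pattern (my own sketch, not from the original answers) is a singleton that implements INotifyPropertyChanged instead of DependencyProperty, with writes marshaled to the UI thread before change notifications are raised; Application.Current.Dispatcher here assumes a WPF-style app:

        using System.ComponentModel;
        using System.Windows;

        // Bindable singleton: background threads call UpdateStatus(...),
        // bindings stay safe because notifications fire on the UI thread.
        public sealed class AppData : INotifyPropertyChanged
        {
            public static AppData Instance { get; } = new AppData();
            private AppData() { }

            private string _status;
            public string Status
            {
                get { return _status; }
                private set { _status = value; OnPropertyChanged(nameof(Status)); }
            }

            // Safe to call from a background loading thread.
            public void UpdateStatus(string value)
            {
                Application.Current.Dispatcher.Invoke(() => Status = value);
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
            }
        }

    Pages then bind with Source={x:Static local:AppData.Instance}, so only the singleton needs to know about threading.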

    Read the article

  • Google I/O 2012 - Crunching Big Data with BigQuery

    Jordan Tigani, Ryan Boyd. Google BigQuery is a data analysis tool born from Google internal technologies. It enables developers to analyze terabyte data sets in seconds using a RESTful API. This session will dive into best practices for getting fast answers to business questions. We'll provide insight into how we process queries under the hood and how to construct SQL queries for complex analysis. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers

    Read the article

  • Webcast: AutoInvoice Overview & Data Flow

    - by Annemarie Provisero-Oracle
    Webcast: AutoInvoice Overview & Data Flow
    Date: June 4, 2014 at 11:00 am ET, 9:00 am MT, 4:00 pm GMT, 8:30 pm IST
    This one-hour session is part one of a three-part series on AutoInvoice and is recommended for technical and functional users who would like a better understanding of what AutoInvoice does, the required setups, and how data flows through the process. We will also cover diagnostic scripts used with AutoInvoice. Topics will include:
      - Why use AutoInvoice?
      - AutoInvoice setups
      - Data flow
      - Diagnostic tools
    Details & Registration: Doc ID 1671931.1

    Read the article

  • BigQuery: Simple example of a data collection and analysis pipeline + Your questions

    Join Michael Manoochehri and Ryan Boyd live to talk about Google BigQuery. We'll give an overview of how we're using our cars, phones, App Engine and BigQuery to collect and analyze data. We'll be discussing our trusted tester feature which allows analyzing data from the App Engine datastore. We'll also review some of the more interesting questions from Stack Overflow and take questions via Google Moderator. From: GoogleDevelopers

    Read the article

  • Master Data Management - The Trend Towards Multi-Domain and Other Realities

    - by Mala Narasimharajan
    In my quest to keep my fingers on the pulse of MDM, I recently found a pretty interesting article. The article was published in InformationWeek and provides some interesting statistics from a recent survey conducted by the analyst firm The Information Difference. Let's take a look. Of the 130 organizations surveyed:
    - 53% have live operational MDM implementations
    - 81% of those with live operational MDM implementations report broad success - a huge improvement over 2011's 54%
    - 64% developed a business case prior to their MDM deployment, while a daring 32% went ahead without one
    The article goes on to talk about the shift among vendors from focusing on customer data and product information management toward multi-domain master data management, as well as other realities around MDM. Take a look at the article. For more information on Oracle's master data management suite, click here.

    Read the article
