Search Results

Search found 67506 results on 2701 pages for 'management data warehouse'.

Page 26/2701 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Reuse the data CRUD methods in data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People, including me, tend to just add new methods to these classes rather than reuse the existing ones, because we don't know whether an existing method does what we need. Even if we have the source code, do we really need to read someone else's code before making a decision? The layer is updated too quickly, and we don't have time to get familiar with the DAO API. Back to the question: how do you solve this in your project? If we say "reuse", it really needs to be reusable rather than just an excuse.
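
    One commonly suggested way out, sketched minimally below with a SQLAlchemy-style session (all names are illustrative assumptions, not the poster's code): collapse the per-manager CRUD variants into a single generic repository, so there is exactly one small API to learn and reuse becomes cheaper than re-adding.

        class Repository:
            # One CRUD surface for every entity type, so managers stop
            # accumulating near-duplicate methods of their own.
            def __init__(self, session, entity_cls):
                self.session = session
                self.entity_cls = entity_cls

            def get(self, entity_id):
                # Fetch by primary key.
                return self.session.get(self.entity_cls, entity_id)

            def find(self, **filters):
                # One query method covers most ad-hoc lookups.
                return self.session.query(self.entity_cls).filter_by(**filters).all()

            def add(self, entity):
                self.session.add(entity)

            def delete(self, entity):
                self.session.delete(entity)

        # Usage sketch: users = Repository(session, User)
        #               users.find(email="x@example.com")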

    Read the article

  • Get aggregated view of data for entire website with Google Analytics

    - by crmpicco
    I have a website (www.ayrshireminis.com) which has three main sections under different directories: /forum /galleries /contact I would like to have an aggregated view of the data for the whole website, but also for each section. What is the recommended approach for doing this? I believe I can create a web property that includes a profile for the entire website and duplicated filtered profiles, each section having an include filter. This is my gut instinct, but I'd like to know if there is another (better) way to do it. Maybe by having one account that includes a profile for the whole site and another profile with an include filter for the individual sections?
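
    For illustration, a hedged sketch of that filtered-profile setup using Google Analytics include filters on the Request URI (the exact patterns are assumptions):

        Profile "Entire site":  no filter
        Profile "Forum":        Include, Filter Field = Request URI, Pattern = ^/forum/
        Profile "Galleries":    Include, Filter Field = Request URI, Pattern = ^/galleries/
        Profile "Contact":      Include, Filter Field = Request URI, Pattern = ^/contact/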

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, an SOA-specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the integration of social media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"
    With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy login method with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you can, for example, offer a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and Hiring of "Digital Natives"
    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market for professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or using profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in a market where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process
    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful identity management solutions. With the highly efficient Oracle (social and mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
    The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in B2C markets and will gradually spread to B2B channels in the near future.

    Conclusion: Central and "Social" Identity Management is key to Customer Experience Management and Talent Management
    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate social sign-on capabilities with modern CX and talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern customer experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure social applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering social and mobile access for Oracle's CRM and HCM solutions.

    …..End Guest Post….

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
    - Oracle Identity Management 11gR2 release
    - Oracle Identity Management website
    - Datasheet: Mobile and Social Access (pdf)
    - IDM at OOW: Focus on Identity Management
    - Facebook: OracleIDM
    - Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
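
    A minimal sketch of the mediating idea: keep the requirements in one table-like structure owned by the trading layer, so neither Base nor Resource carries the other's concerns. This is a Python stand-in under assumed names, not a definitive design:

        # One authoritative place where offer requirements live; setting a
        # resource's required tech level happens here and nowhere else.
        MIN_TECH_LEVEL = {"water": 1, "steel": 3, "fuel_cell": 5}

        class TradingPost:
            def __init__(self, base):
                self.base = base

            def offered_resources(self):
                # A resource is offered iff the base meets its tech requirement.
                return [res for res, lvl in MIN_TECH_LEVEL.items()
                        if self.base.tech_level >= lvl]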

    Read the article

  • Still no detected structured data in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions about what's wrong with my structured data? Google still cannot read it. It looks like this:

    <div class="identity">
      <div itemscope itemtype="http://schema.org/LocalBusiness">
        <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
        <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
          <span itemprop="streetAddress">MY_ADDRESS</span>,
          <span itemprop="addressLocality">London</span>,
          <span itemprop="postalCode">SE5 MY_XYZ</span>,
          <span itemprop="addressCountry">UK</span>
        </div>
      </div>
    </div>

    Read the article

  • Consolidating hotels data from various booking sites with different IDs or reference

    - by Victor
    In one of my projects I have data for hotels, and several booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147). The three booking engines each use their own ID to reference Hotel A, so I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide APIs, so I could cross-check the name, address or latitude/longitude, but if these differ a little bit I might link the reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which do hotel price checking on many booking sites, but how do they make sure they are checking the right hotel on these booking sites? Does anyone have any experience with this?
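
    A minimal sketch of the usual approach: fuzzy-match on name plus geographic distance, auto-link high scores and queue borderline ones for manual review. Thresholds and field names below are assumptions:

        from difflib import SequenceMatcher
        from math import radians, sin, cos, asin, sqrt

        def km(lat1, lon1, lat2, lon2):
            # Haversine distance between two coordinates, in kilometres.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
            return 6371 * 2 * asin(sqrt(a))

        def match_score(a, b):
            # a, b: dicts with "name", "lat", "lon" (field names assumed).
            name_sim = SequenceMatcher(None, a["name"].lower(),
                                       b["name"].lower()).ratio()
            # Hotels more than ~500 m apart are treated as different places.
            return name_sim if km(a["lat"], a["lon"], b["lat"], b["lon"]) < 0.5 else 0.0

        # e.g. score > 0.95: link automatically; 0.70-0.95: manual review queue.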

    Read the article

  • Mount external HD ubuntu 12.10

    - by Luigi Tiburzi
    Although this is an abundantly treated matter, I'm unable to find an answer valid for my needs. I had a 12.04 installation of Ubuntu and decided to install 12.10. I copied (using GParted) the partition where my system was to an external HD where there is a Windows partition. Then I installed the newest Ubuntu version, and now I want to take back some files (for example my .emacs) from that partition. But when I try to mount it, it is not found as sdb, and if I mount it from /dev/usb/hddev0 I don't get any output, only a blinking cursor: no errors, no output. I even tried to mount it as an NTFS disk, but the result was the same. It's like the HD cannot be detected. So how can I access the data on that disk? Could I get it from the GParted terminal instead of the Ubuntu one? Thanks

    Read the article

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well, even after the formatting. But when I restarted my system again, it is showing that the space is allotted to my files but they are inaccessible. I have even tried to unhide the files and folders, in case they got hidden somehow, but nothing works. The hard drive is shown as empty, but the properties say that it still contains the data: http://imgur.com/ObjTE In the image, it shows that the directory has only 1 file of size 4.8 KB, but the space being used by the drive is 11.6 GB. Please suggest a solution.

    Read the article

  • Using Ubuntu to recover data from a crashed Windows install

    - by user289391
    I was using Windows on my laptop when suddenly the blue screen of death appeared; the laptop then restarted and displayed this: Intel UNDI, PXE-2.1 (built 083) Copyright (C) 1997-200 Intel Corporation This Product is covered by one or more of the following patents: US5,307459, US5,434,872, US5732,094, US6579,884, US6115,776 and US6,327,625 Realtek PCIe FE Family Controller Series v120 (01/26/10) PXE-M0F: ExitingPXEROM. reboot failed I have Ubuntu on an external disk, so I have now booted to that. Two questions: Any theories on what happened? How can I use Ubuntu to recover my data from the Windows install?

    Read the article

  • I can't see how a mature agile team requires any *management*?

    - by ashy_32bit
    After a recent heated debate over Scrum, I realized my problem is that I think of management as a quite unnecessary and redundant activity in a fully agile team. I believe a mature agile team does not require management or any non-technical decision-making process whatsoever. To my (apparently erring) eyes it is more than obvious that the only one suitable and capable of managing a mature development team is their coach (that being the most technically competent colleague with proper communication skills). I can't imagine how a Scrum master can contribute to such a team. I am having great difficulty understanding the value of things like Scrum and a manager who is not a veteran developer but is well skilled in planning production cycles, when a coach exists on the team. What does that even mean? How on earth can someone without cutting-edge development skills manage a highly technical team? Perhaps management here means something else? I see management as a total waste of time and a by-product of immaturity. In my understanding, a mature team is fully self-managing. Apparently I'm mistaken, since many great people say the contrary, but I can't convince myself.

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any [...]

    Read the article

  • best way to statistically detect anomalies in data

    - by reinier
    Hi, our webapp collects a huge amount of data about user actions, network business, database load, etc. All data is stored in warehouses, and we have quite a lot of interesting views on this data. If something odd happens, chances are it shows up somewhere in the data. However, to manually detect if something out of the ordinary is going on, one has to continually look through this data and look for oddities. My question: what is the best way to detect changes in dynamic data which can be seen as 'out of the ordinary'? Are Bayesian filters (I've seen these mentioned when reading about spam detection) the way to go? Any pointers would be great! EDIT: To clarify: the data, for example, shows a daily curve of database load. This curve typically looks similar to the curve from yesterday. Over time this curve might change slowly. It would be nice if, when the curve changes from day to day beyond some bounds, a warning could go off. R
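
    A minimal sketch of a simple first step before reaching for Bayesian methods: compare today's curve to a rolling baseline of past days and alert when a point's z-score leaves a band (window and threshold are assumptions):

        import statistics

        def anomalies(history, today, threshold=3.0):
            # history: list of past daily curves (lists of samples); today:
            # today's samples. Assumes several days of history and that all
            # curves have the same length (one sample per time slot).
            alerts = []
            for i, value in enumerate(today):
                past = [day[i] for day in history]       # same slot across days
                mu = statistics.mean(past)
                sigma = statistics.stdev(past) or 1e-9   # guard against zero spread
                if abs((value - mu) / sigma) > threshold:
                    alerts.append((i, value))            # "out of the ordinary"
            return alerts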

    Read the article

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it. It takes the ETL process four minutes to UPDATE or INSERT 100 records to dimUser. If I assume a linear relationship between the count of records and the amount of time to UPDATE or INSERT, then there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. I did discover, though, that if I did only INSERTs, the process went much faster; I am pretty sure the bottleneck is caused by the UPDATE statements. The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:
    - the design of the data warehouse
    - the ETL process
    - how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    - whether my friend's suggestion will significantly help
    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

    mysql> describe dimUser;
    Field        Type                 Null  Key  Default              Extra
    user_key     int(10) unsigned     NO    PRI  NULL                 auto_increment
    id_A         int(10) unsigned     NO         NULL
    id_B         int(10) unsigned     NO         NULL
    field_4      tinyint(4) unsigned  NO         0
    field_5      varchar(50)          YES        NULL
    city         varchar(50)          YES        NULL
    state        varchar(2)           YES        NULL
    country      varchar(50)          YES        NULL
    zip_code     varchar(10)          NO         99999
    field_10     tinyint(1)           NO         0
    field_11     tinyint(1)           NO         0
    field_12     tinyint(1)           NO         0
    field_13     tinyint(1)           NO         1
    field_14     tinyint(1)           NO         0
    field_15     tinyint(1)           NO         0
    field_16     tinyint(1)           NO         0
    field_17     tinyint(1)           NO         1
    field_18     tinyint(1)           NO         0
    field_19     tinyint(1)           NO         0
    field_20     tinyint(1)           NO         0
    create_date  datetime             NO         2012-01-01 00:00:00
    last_update  datetime             NO         2012-01-01 00:00:00
    run_id       int(10) unsigned     NO         999

    I used a surrogate key because I had read that it was good practice. From a business perspective, I want to stay aware of potential fraudulent activity (say, for 200 days a user is associated with state X and then the next day they are associated with state Y - they could have moved or their account could have been compromised), which is why geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL processes (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).
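
    A minimal sketch of one way to attack the UPDATE bottleneck: batch the rows and let MySQL make the update-or-insert decision with INSERT ... ON DUPLICATE KEY UPDATE. This assumes the mysql-connector-python driver and a UNIQUE index on the natural key, taken here to be (id_A, id_B); credentials are placeholders:

        import mysql.connector  # assumes mysql-connector-python is installed

        conn = mysql.connector.connect(host="localhost", user="etl",
                                       password="secret", database="dw")
        cur = conn.cursor()
        # Requires a UNIQUE index on (id_A, id_B) so MySQL itself decides
        # between INSERT and UPDATE; one batched round trip replaces
        # thousands of per-row UPDATE statements.
        sql = ("INSERT INTO dimUser (id_A, id_B, field_10, last_update, run_id) "
               "VALUES (%s, %s, %s, NOW(), %s) "
               "ON DUPLICATE KEY UPDATE field_10 = VALUES(field_10), "
               "last_update = NOW(), run_id = VALUES(run_id)")
        rows = [(1, 7, 0, 1001), (2, 9, 1, 1001)]  # illustrative extract batch
        cur.executemany(sql, rows)
        conn.commit()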

    Read the article

  • Webbased data modelling and management tool

    - by pixeldude
    Is there a web-based tool available where I am able to:
    - define data models (like in a database admin tool)
    - fill in data (in custom web forms, not too generic) with basic features like completion
    - import data from CSV or Excel sheets
    - export data to CSV or SQL
    - create snapshots of my data models (versions, diffs, etc.)
    - share my data models
    - discuss/collaborate with other people about my data models
    Well, I could develop something like this in PHP or Ruby or whatever, but this is such a common task that the application support could be a lot better, and it would be language- and database-independent. This would help to maintain data models in different versions, and you could share your data models with others, extend them with your team members, etc. There is a website called FreeBase which allows you to define a data entity model and fill in data, and which also has export features, but I need to define my own data model with my own granularity and structure, and it should not be shared in public if I don't want it to be. How do you solve problems like this yourself?

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database to get it into an Excel spreadsheet for analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and then import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, then specify what worksheet you want to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then in Excel open the file using the Text Import Wizard to attempt to correctly split the data into columns. These methods are fine, but involve some degree of technical knowledge and require repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

    1. Opening MySQL for Excel
    Being an Excel Add-In, MySQL for Excel is opened from within Excel. To use it, open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL Connection (may be optional)
    If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running (if different than localhost), the Port to connect to and the Username for the login. If you wish to test if your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server
    To open a pre-configured connection to a MySQL Server you just need to double-click it, so the Connection Password dialog is displayed where you enter the password for the login.

    4. Selecting a MySQL Schema
    After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, you just need to either double-click the desired Schema or select it and click Next >.

    5. Importing data…
    All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the Database Objects (grouped by type: Tables, Views and Procedures, in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them.

    a. From a MySQL Table
    To import from a Table you just need to select it from the list of Database Objects' Tables group; after selecting it you will note actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10-row preview of the data, meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows, you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 - 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning will be displayed in the dialog, meaning the imported number of rows will be limited by that maximum number (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View
    Similar to importing from a Table, to import from a View you just need to select it from the list of Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is actually as if we are extracting data from a Table, so the Import Data dialog is identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. Also, the Compatibility Mode warning will be displayed if data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure
    To import from a Procedure you just need to select it from the list of Database Objects' Procedures group (note you can see Procedures here but not Functions, since the latter return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time you can see it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they require that values are supplied for their Parameters first, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call. After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can see a preview of the returned data. You can specify if you want to import the Selected Result Set (default), All Result Sets - Arranged Horizontally or All Result Sets - Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note in this example all Result Sets were imported and arranged vertically.

    As you can see, using MySQL for Excel makes importing data from a MySQL database an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!
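
    As an aside, for users who prefer scripting, a rough programmatic equivalent of the Table import above (not part of the original guide; the connection string and table name are assumptions):

        import pandas as pd
        from sqlalchemy import create_engine  # pandas uses SQLAlchemy for MySQL

        engine = create_engine("mysql+mysqlconnector://user:secret@localhost:3306/world")
        df = pd.read_sql_table("city", engine)   # fetch the whole MySQL table
        df.to_excel("city.xlsx", index=False)    # write it out; needs openpyxl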

    Read the article

  • Oracle Data Warehouse Reference Architecture, from best practice

    - by Fekete Zoltán
    How should we build a data warehouse, and how should we connect it to our systems? What architecture gets us to the goal most reliably and with the least risk? The Oracle Data Warehouse Reference Architecture description answers these questions. The following document can be downloaded: Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture

    Read the article

  • Excel Conditional Formatting Multiple Data Bars and Data Icons in one cell

    - by wbeard52
    I am using Excel 2007 on a Windows machine. I am attempting to place one data bar and one data icon into a cell using conditional formatting. The issue is that I don't want data icons or data bars for cells that have dates in the future, and I only want data icons for dates at least one month in the past. This is what I have: This is what I want: I am using the EOMONTH function to determine the last day of the month for the conditional formatting calculations. For the data bar the formulas are =EOMONTH(NOW(), 4) and =EOMONTH(NOW(), -1). The data icon formulas are =EOMONTH(NOW(), -1) and =EOMONTH(NOW(), -2). Is there a way in Excel 2007 to get rid of the data icons for all the dates in the future and lose the data bars when the date has passed? Thanks

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time for a "one-time" extraction of the data meeting all the various criteria entered, in what I call a "criteria string". An example may be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records not equal to one value in one field and also extract records matching criteria in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.
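
    A minimal sketch of this kind of "criteria string" extraction in pandas, combining an in-list match, a "not equal to" condition and an equals condition (column names and values other than the zip codes are assumptions):

        import pandas as pd

        members = pd.read_csv("members.csv")  # hypothetical export of the database
        zips = {"67893", "54235", "54323", "54201", "54302",
                "54303", "54301", "67894", "67895"}
        extract = members[
            members["zip"].astype(str).isin(zips)      # zip matches the list
            & (members["member_type"] != "lapsed")     # "not equal to" in one field
            & (members["state"] == "KS")               # equals in another field
        ]
        extract.to_csv("extract.csv", index=False)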

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept/not kept, there's the issue of accounts: no single chart of accounts translates to no accountability. So - I've decided. I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, what information is being shared or not being shared, and most importantly, making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines:
    #1: Consolidate your Product Data
    #2: Consolidate your Customer, Supplier (Party) Data
    #3: Consolidate your Financial Data
    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.

    Oracle's Solution for ERP Consolidation: Master Data Management
    Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive Data Governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product and financial data.
    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue I'm having with running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without uniquely generating the test data fields for each test. For example: Testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. Testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try and do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
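
    A minimal sketch of one common fix: have every test generate unique field values so parallel runs cannot collide. This assumes pytest and a uuid suffix; the UI and database helpers are hypothetical stand-ins for the poster's existing helpers:

        import uuid
        import pytest

        @pytest.fixture
        def unique_row():
            # Every test gets its own (column A, column B) values, so parallel
            # runs can never create or delete each other's records.
            suffix = uuid.uuid4().hex[:8]
            return {"column_a": f"foo-{suffix}", "column_b": f"bar-{suffix}"}

        def test_create_row(unique_row):
            create_via_ui(unique_row)        # hypothetical UI-driver helper
            assert row_exists(unique_row)    # hypothetical DB-assertion helper

        def test_duplicate_row_rejected(unique_row):
            insert_row(unique_row)           # hypothetical direct-insert helper
            error = create_via_ui(unique_row)
            assert "duplicate" in error.lower()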

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broad phase:
    Unsorted arrays: check every object against every object - O(n^2) time, O(log n) space. It's so slow it's useless if n isn't really small. for (i=1;i<objects;i++){ for(j=0;j<i;j++) narrowPhase(i,j); };
    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D and O(n^1.67) for 3D) and O(n) space, assuming the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.
    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.
    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?
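
    For reference, a minimal sketch of a fourth common option, a uniform grid / spatial hash, which keeps O(n) space for sparse worlds because only occupied cells are stored (the cell size and 2D AABB representation are assumptions):

        from collections import defaultdict

        CELL = 64.0  # cell size; roughly one typical object diameter

        def broad_phase(objects):
            # objects: iterable of (oid, min_x, min_y, max_x, max_y) AABBs.
            grid = defaultdict(list)  # only occupied cells consume memory
            pairs = set()
            for oid, x0, y0, x1, y1 in objects:
                for cx in range(int(x0 // CELL), int(x1 // CELL) + 1):
                    for cy in range(int(y0 // CELL), int(y1 // CELL) + 1):
                        for other in grid[(cx, cy)]:
                            # Record each candidate pair once, in canonical order.
                            pairs.add((min(oid, other), max(oid, other)))
                        grid[(cx, cy)].append(oid)
            return pairs  # candidate pairs for the narrow phase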

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation with a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times - sequential, using OpenMP, using MPI - so if I can use the same structure, that will be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It will be overkill for both the sequential and parallel versions to check each cell and look for possible agents in them. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
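
    A minimal sketch of the sparse alternative for few agents in a big world: store only the agents, keyed by cell, so each step costs O(agents) rather than O(cells). A Python dict stands in for the shared structure here (an assumption; the OpenMP/MPI versions would partition the key space), and the decision rule is hypothetical:

        # Only agents are stored; empty cells cost nothing. Keys are (x, y).
        agents = {}

        def neighbors(x, y):
            # Moore-neighborhood lookup: O(1) per probe, no full-grid scan.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx or dy) and (x + dx, y + dy) in agents:
                        yield (x + dx, y + dy), agents[(x + dx, y + dy)]

        def step(decide):
            # decide is the (hypothetical) per-agent rule of the simulation.
            for (x, y), state in list(agents.items()):  # 5 agents, not 10^6 cells
                decide(x, y, state, list(neighbors(x, y)))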

    Read the article

  • Ubuntu Tools for recovering data from damaged USB Flash Drive ~ 10 Gb

    - by PREDA LUCIAN
    I have technical issues with my USB flash drive - JetFlash®V15 (TS16GJFV15). It's a very critical situation because I cannot see the data on it, and I need a way to recover it ASAP. In general, I kept that USB flash disk connected to my laptop non-stop. Power surges occurred, and when I came back I saw this problem with it. Details regarding the JetFlash®V15 (at present): - when I connect it to a USB slot, the LED blinks intermittently and later remains constantly lit. - if I inspect the computer drivers, I find "Generic USB Flash Disk" (when the stick is connected). - if I inspect "Properties", I can see the following details: --- Type: unknown (application/octet-stream) --- Size: unknown --- Volume: unknown --- Accessed: unknown --- Modified: unknown I inspected that stick on 2 different computers (as well as in different USB ports) and the problem was the same: I cannot see the content. I checked with Windows 7 and Ubuntu 10.04, but without success; it was working with both OSes before this issue. I'll appreciate an answer which will solve the problem, not an answer which will certify the problem. What do I have to do to recover the information on it (nearly 10 GB)? I'm looking forward to being guided by a technical expert.

    Read the article

  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close the question if it does not meet the guidelines. A name: first, possibly a middle, and surname. I'm curious about how much information you can mine out of a name, using publicly available datasets. I know that you can get the following with anywhere between a low and high probability (depending on the input) using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name is the name of someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!

    Read the article

  • Data structure for grid with negative indeces

    - by The Secret Imbecile
    Sorry if this is an insultingly obvious concept, but it's something I haven't done before and I've been unable to find any material discussing the best way to approach it. I'm wondering what the best data structure is for holding a 2D grid of unknown size. The grid has integer coordinates (x,y) and will have negative indices in both directions. So, what is the best way to hold this grid? I'm programming in C# currently, so I can't have negative array indices. My initial thought was to have a class with 4 separate arrays for (+x,+y), (+x,-y), (-x,+y), and (-x,-y). This seems to be a valid way to implement the grid, but it does seem like I'm over-engineering the solution, and array resizing will be a headache. Another idea was to keep track of the center point of the array and set that as the topological (0,0); however, I would then have to shift every element of the grid when repeatedly adding to the top-left of the grid, which would be similar to grid resizing, though in all likelihood more frequent. Thoughts?
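
    A minimal sketch of a simpler alternative: a hash map keyed by (x, y) tuples, which makes negative indices and unbounded growth free. Shown in Python for illustration; in C# the equivalent would be Dictionary<(int, int), T>:

        # Sparse grid keyed by coordinate pairs; negative coordinates just work.
        grid = {}

        grid[(-3, 7)] = "tree"
        grid[(120, -45)] = "rock"

        def get(x, y, default=None):
            # Absent cells need no storage, and growth needs no resizing or
            # shifting - adding to the "top-left" is the same as anywhere else.
            return grid.get((x, y), default)

    If a dense array is ever required (say, for cache-friendly iteration), the usual trick is an offset mapping, index = (x - min_x, y - min_y), applied at the boundary rather than four separate quadrant arrays.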

    Read the article
