Search Results

Search found 67448 results on 2698 pages for 'data management'.


  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database into an Excel spreadsheet for analysis, reporting, transformation, sharing, etc. is a very common task. There are several ways to extract data from a MySQL database and import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and finally specify which worksheet to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel with the Text Import Wizard to split the data into columns. These methods work, but they involve some degree of technical knowledge and mean repeating several steps every time data needs to be imported from a MySQL table into an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel it can: its Import MySQL Data action lets you import data from a MySQL Table, View or Stored Procedure with literally a few clicks within Excel.

    Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

    1. Opening MySQL for Excel. Being an Excel add-in, MySQL for Excel is opened from within Excel: open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL connection (may be optional). If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), click New Connection in the Welcome Panel, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name and specify the Hostname (or IP address) where the MySQL Server instance is running (if different from localhost), the Port to connect to, and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server. To open a pre-configured connection to a MySQL Server you just need to double-click it; the Connection Password dialog is then displayed, where you enter the password for the login.

    4. Selecting a MySQL schema. After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, either double-click the desired Schema or select it and click Next >.

    5. Importing data. All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the Database Objects (grouped by type: Tables, Views and Procedures, in that order) you want to perform actions against; in the case of this guide, the action of importing data from them.
    a. From a MySQL Table. To import from a Table, select it from the Tables group of the Database Objects list; after selecting it you will notice the actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here, like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count, and a 10-row preview of the data, meant to show the user which columns the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed, the data appears in the Excel worksheet, formatted automatically. If you need to override the defaults in the Import Data dialog, to change the columns selected for import or the number of imported rows, you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10-60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning is displayed in the dialog, meaning the imported number of rows will be limited by that maximum (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names when the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View. Similar to importing from a Table, to import from a View you just need to select it from the Views group of the Database Objects list, then click Import MySQL Data. The Import Data dialog is displayed; identically to the Table case, the dialog shows the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, importing from one is effectively extracting data from a Table, so the Import Data dialog is identical for those two Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for Tables; the Compatibility Mode warning will also be displayed if the data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure. To import from a Procedure, select it from the Procedures group of the Database Objects list (note that you can see Procedures here but not Functions, since Functions return a single value and are filtered out by design). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, values must first be supplied for their Parameters, and Procedures can return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are entered. After you supply the Parameter Values, click Call.
    After calling the Procedure, the Result Sets it returns are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can preview the returned data. Using the Import drop-down list you can specify whether to import the Selected Result Set (the default), All Result Sets – Arranged Horizontally, or All Result Sets – Arranged Vertically; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot; note that in this example all Result Sets were imported and arranged vertically. As you can see, with MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important to us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ | Forum - http://forums.mysql.com/list.php?172 | Facebook - http://www.facebook.com/mysql Cheers!
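    As an aside to the comma-delimited alternative mentioned at the start of this guide, the dump can also be scripted. Below is a minimal sketch in Python using the mysql-connector-python package; the host, credentials, database and table names are placeholders to adapt:

        import csv
        import mysql.connector  # pip install mysql-connector-python

        # Placeholder connection settings -- adjust for your own server.
        conn = mysql.connector.connect(
            host="localhost", user="me", password="secret", database="mydb"
        )
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM mytable")  # placeholder table name

        with open("mytable.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(col[0] for col in cursor.description)  # header row
            writer.writerows(cursor)  # remaining rows stream straight to disk

        cursor.close()
        conn.close()

    The resulting CSV file can then be opened in Excel with the Text Import Wizard, exactly as described above.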


  • Excel Conditional Formatting Multiple Data Bars and Data Icons in one cell

    - by wbeard52
    I am using Excel 2007 on a Windows machine. I am attempting to place one data bar and one data icon into a cell under conditional formatting. The issue is that I don't really want data icons or data bars for cells that have dates in the future, and I only want data icons for dates at least one month in the past. This is what I have: This is what I want: I am using the EOMONTH function to determine the last day of the month for the conditional formatting calculations. For the data bar the formulas are =EOMONTH(Now(), 4) and =EOMONTH(Now(), -1). The data icon formulas are =EOMONTH(Now(), -1) and =EOMONTH(Now(), -2). Is there a way in Excel 2007 to get rid of the data icons for all dates in the future and lose the data bars when the date has passed? Thanks


  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time for a "one time" extraction of the data meeting all the various criteria entered, in what I call a "criteria string". An example might be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records not equal to one criterion in one field while also extracting records matching a criterion in another field: we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.
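    In modern relational terms, such a "criteria string" maps directly onto a compound WHERE clause. A minimal sketch in Python with SQLite (the table and column names are invented for illustration only):

        import sqlite3

        conn = sqlite3.connect("membership.db")  # hypothetical database file

        zips = ("67893", "54235", "54323", "54201", "54302",
                "54303", "54301", "67894", "67895")
        placeholders = ",".join("?" * len(zips))

        # Match a list of zip codes, omit one member type ("not equal to"),
        # and require a matching value in another field at the same time.
        sql = ("SELECT * FROM members "
               f"WHERE zip IN ({placeholders}) "
               "AND member_type <> ? AND status = ?")
        rows = conn.execute(sql, zips + ("lapsed", "active")).fetchall()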


  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems often put inordinate risk on their business. It can translate into too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept (or not kept), there's the issue of accounts: no single chart of accounts translates to no accountability. So - I've decided. I need to consolidate! Well, you can't consolidate ERP applications (or, for that matter, any of your applications) without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, what information is being shared or not shared and, most importantly, making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines:

    #1: Consolidate your Product Data
    #2: Consolidate your Customer and Supplier (Party) Data
    #3: Consolidate your Financial Data

    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data; ultimately this gave them 60% cost savings for the year on IT spend.

    Oracle's Solution for ERP Consolidation: Master Data Management. Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It is optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e., GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it is important to leverage the AIA and SOA capabilities of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across Middleware, BI/DW and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product and financial data.

    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm


  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue I'm having with running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run the tests in parallel against the same database without uniquely generating the test data fields for each test. For example: when testing the creation of a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. When testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing; this displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
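    One common answer is to stop sharing literal values at all: give every test its own unique data, so no test's setup or cleanup can collide with another's. A minimal sketch with pytest, where ui and db are hypothetical stand-ins for the question's UI driver and insert/delete helpers:

        import uuid
        import pytest
        from acceptance_helpers import ui, db  # hypothetical test helpers

        @pytest.fixture
        def unique_row():
            """Per-test values, so parallel runs never touch the same rows."""
            suffix = uuid.uuid4().hex[:8]
            return {"column_a": f"foo-{suffix}", "column_b": f"bar-{suffix}"}

        def test_create_row(unique_row):
            ui.create_record(**unique_row)
            assert db.row_exists(**unique_row)

        def test_duplicate_row_rejected(unique_row):
            db.insert_row(**unique_row)
            ui.create_record(**unique_row)   # attempt the exact same insert
            assert ui.error_shown("duplicate")

    With unique data per test, the delete-before-test helpers become unnecessary and the tests can run in parallel against the same database.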


  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broad phase:

    Unsorted arrays: check every object against every object - O(n^2) time, O(log n) space. It's so slow that it's useless unless n is really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++) narrowPhase(i, j);
        }

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions - O(n^1.5) for 2D and O(n^1.67) for 3D - with O(n) space. Assuming the space is 2D, sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check a bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?
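    For reference, the sorted-array option is usually implemented as sweep and prune: sort the objects by their interval start on one axis and only hand overlapping pairs to the narrow phase. A minimal single-axis sketch in Python (objects are assumed to expose min_x and max_x attributes):

        def sweep_and_prune(objects):
            """Broad phase: yield candidate pairs whose x-intervals overlap."""
            active = []  # objects whose x-interval is still open
            for obj in sorted(objects, key=lambda o: o.min_x):
                # Drop intervals that ended before this one starts.
                active = [a for a in active if a.max_x >= obj.min_x]
                for other in active:   # every survivor overlaps obj on x
                    yield other, obj   # candidate pair for narrowPhase
                active.append(obj)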


  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation on a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times - sequential, using OpenMP, and using MPI - so if I can use the same structure for all three, that will be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill for both the sequential and parallel versions to check each cell looking for possible agents. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
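    With so few agents, one option is to iterate over the agents instead of the cells, keeping a dictionary keyed by (x, y) so neighbor cells remain an O(1) lookup. A minimal sketch in Python (the agent's decide method is a placeholder); the same dict-of-positions idea ports to the OpenMP/MPI versions by partitioning the agent list rather than the grid:

        class World:
            """Sparse 2D world: only occupied cells are stored."""
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.agents = {}  # (x, y) -> agent

            def neighbors(self, x, y):
                """Agents in the 8 surrounding cells (Moore neighborhood)."""
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        if (dx or dy) and (x + dx, y + dy) in self.agents:
                            yield self.agents[(x + dx, y + dy)]

            def step(self):
                # Cost scales with the 5 agents, not the 1,000,000 cells.
                for (x, y), agent in list(self.agents.items()):
                    agent.decide(list(self.neighbors(x, y)))  # placeholder API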


  • Ubuntu Tools for recovering data from damaged USB Flash Drive ~ 10 Gb

    - by PREDA LUCIAN
    I have a technical problem with my USB flash drive, a JetFlash®V15 (TS16GJFV15). It's a very critical situation because I cannot see the data on it, and I need a way to recover it ASAP. In general, I kept that USB flash drive connected to my laptop non-stop. There were power surges, and when I came back I found it in this state. Details regarding the JetFlash®V15 (at present): when I connect it to a USB slot, the LED blinks intermittently and later stays constantly lit; if I inspect the computer's devices, I find a "Generic USB Flash Disk" (when the stick is connected); if I inspect "Properties", I see the following details: Type: unknown (application/octet-stream); Size: unknown; Volume: unknown; Accessed: unknown; Modified: unknown. I inspected the stick on 2 different computers (and in different USB ports) and the problem was the same: I cannot see the content. I checked with both Windows 7 and Ubuntu 10.04, without success; the drive worked with both OSes before this issue. I would appreciate an answer that solves the problem, not an answer that merely confirms it. What do I have to do to recover the information on it (nearly 10 GB)? I'm looking forward to being guided by a technical expert.


  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close the question if it does not meet the guidelines. A name: a first name, possibly a middle name, and a surname. I'm curious about how much information you can mine out of a name using publicly available datasets. I know that you can get the following, with anywhere from low to high probability depending on the input, using US census data: 1) Gender. 2) Race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name belongs to someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!
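    For a picture of how such lookups usually work, here is a minimal sketch in Python; the CSV file names and column layout are invented stand-ins for whatever census-style dataset is actually downloaded:

        import csv

        def load_distribution(path, key_col, value_cols):
            """Map an upper-cased name to a probability per category."""
            table = {}
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    table[row[key_col].upper()] = {c: float(row[c]) for c in value_cols}
            return table

        # Hypothetical files holding census-style name frequencies.
        gender_by_first = load_distribution("first_names.csv", "name", ["p_male", "p_female"])
        race_by_surname = load_distribution("surnames.csv", "name", ["p_white", "p_black", "p_asian", "p_hispanic"])

        def profile(first, last):
            """Best-effort guess; returns None for names absent from the data."""
            return gender_by_first.get(first.upper()), race_by_surname.get(last.upper())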


  • Data structure for grid with negative indices

    - by The Secret Imbecile
    Sorry if this is an insultingly obvious concept, but it's something I haven't done before and I've been unable to find any material discussing the best way to approach it. I'm wondering what the best data structure is for holding a 2D grid of unknown size. The grid has integer coordinates (x, y) and will have negative indices in both directions. So, what is the best way to hold this grid? I'm programming in C# currently, so I can't have negative array indices. My initial thought was a class with 4 separate arrays for (+x,+y), (+x,-y), (-x,+y), and (-x,-y). This seems to be a valid way to implement the grid, but it does seem like I'm over-engineering the solution, and array resizing will be a headache. Another idea was to keep track of the center point of the array and set that as the topological (0,0); however, I would then have to shift every element of the grid when repeatedly adding to the top-left of the grid, which would be similar to grid resizing, though in all likelihood more frequent. Thoughts?
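    A minimal sketch of the center-point idea in Python (in C#, a Dictionary keyed on (int, int) tuples would sidestep the shifting entirely); shown here is a dense array with an origin offset, which turns negative coordinates into valid indices through one translation step:

        class OffsetGrid:
            """Dense 2D grid accepting negative coordinates via an origin offset."""
            def __init__(self, min_x, min_y, width, height, fill=None):
                self.min_x, self.min_y = min_x, min_y
                self.cells = [[fill] * width for _ in range(height)]

            def _index(self, x, y):
                return y - self.min_y, x - self.min_x

            def get(self, x, y):
                r, c = self._index(x, y)
                return self.cells[r][c]

            def set(self, x, y, value):
                r, c = self._index(x, y)
                self.cells[r][c] = value

        grid = OffsetGrid(min_x=-10, min_y=-10, width=21, height=21)
        grid.set(-10, -10, "corner")  # topological (-10, -10) -> cells[0][0]

    Growing past the bounds still means allocating a larger array and copying once, but there is no per-add shifting; a dictionary keyed on coordinate pairs trades that occasional copy for hashing overhead.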


  • Data model for unlockable card collection

    - by Karlos Zafra
    My game has a collection of cards (about 64) that are unlocked one by one, provided that 4 items are collected/gained in the game. Right now the deck of cards is modeled in a plist file, like this:

        <plist>
          <dict>
            <key>card1</key>
            <dict>
              <key>available</key>
              <true/>
              <key>series</key>
              <integer>1</integer>
              <key>model</key>
              <integer>1</integer>
              <key>color</key>
              <integer>1</integer>
            </dict>
            <key>card2</key>
            <dict>
              <key>available</key>
              <false/>
              <key>series</key>
              <integer>1</integer>
              <key>model</key>
              <integer>1</integer>
              <key>color</key>
              <integer>2</integer>
            </dict>

    The value of the key named "available" marks whether the item is locked or not. Right now I have to include the data for the items collected by the user, but including that inside the plist looks like too much hassle. What kind of structure would be more suitable for this? How do you manage a collection of unlockable items like this?
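    One structural option is to keep the static card definitions (series, model, color) in the plist as a pure catalog and move the mutable state - items collected, which cards are unlocked - into a separate save file. A minimal sketch of that split in Python (the 4-items-per-unlock rule is taken from the question above; names are illustrative):

        CARDS = {  # static catalog, shipped with the game (from the plist)
            "card1": {"series": 1, "model": 1, "color": 1},
            "card2": {"series": 1, "model": 1, "color": 2},
        }

        class Progress:
            """Mutable per-player state, saved separately from the catalog."""
            ITEMS_PER_UNLOCK = 4

            def __init__(self):
                self.items_collected = 0
                self.unlocked = set()  # card ids

            def collect_item(self, unlock_order):
                self.items_collected += 1
                if self.items_collected % self.ITEMS_PER_UNLOCK == 0:
                    # Unlock the next still-locked card, in catalog order.
                    for card_id in unlock_order:
                        if card_id not in self.unlocked:
                            self.unlocked.add(card_id)
                            break

    This way the "available" flag disappears: a card is available exactly when its id is in the unlocked set, and the card catalog never has to be rewritten.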


  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Hive in the Big Data story. In this article we will understand what Pig and Pig Latin are in the Big Data story. Yahoo started working on Pig for deploying their applications on Hadoop; Yahoo's goal was to manage their unstructured data. What is Pig and what is Pig Latin? Pig is a high-level platform for creating MapReduce programs used with Hadoop, and the language we use on this platform is called Pig Latin. Pig was designed to make Hadoop more user-friendly and approachable for power users and non-developers. Pig is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing input data with a series of transformations to produce the desired results. Pig has two different execution environments: 1) Local Mode - all the scripts run on a single machine; 2) Hadoop - all the scripts run on a Hadoop cluster. Pig Latin vs. SQL: Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build solutions for Big Data themselves. Pig is very similar to SQL in many ways: the Pig Latin language provides an abstraction layer over the data, focusing on the data and not the structure under the hood. Pig Latin is a very powerful language that can do various operations like loading and storing data, streaming data, and filtering data, as well as various data operations on strings. The major difference between SQL and Pig Latin is that Pig Latin is procedural and SQL is declarative. In simpler words, a Pig Latin program is very similar to a SQL execution plan, which makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG); DAGs are used to model several different kinds of structures in mathematics and computer science. Tomorrow: In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem - Zookeeper. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Why is there only one configuration management tool in the main repository?

    - by David
    How is it that Cfengine does not exist in the Ubuntu (10.04 LTS) Main Repository? I can't find a discussion of this anywhere (using Google). The only configuration management tool in Ubuntu Main seems to be Puppet. I looked for a wide variety of others as well - all from Wikipedia's list of configuration management tools - and none of them are present in Ubuntu main. I looked for bcfg2, opensymbolic, radmind, smartfrog, spacewalk, staf, synctool, chef - none are present. From my vantage point as a system administrator, I would have expected to find at least bcfg2, puppet, cfengine, and chef (as the most widely used tools). Why is cfengine (or chef and others) not included in Ubuntu main? Why is there only one configuration management tool in Ubuntu main? By the way, the reason this is important in the context of server administration is that Ubuntu main is fully supported by the Ubuntu team with updates and security updates; the other repositories are not.


  • Raid 5 mdadm Problem - Help Please

    - by user66260
    My RAID 5 array (4 x 1TB WD10EARS disks) was showing as degraded. I looked and one of the disks wasn't installed, so I re-added it with the mdadm add command. The array is now showing as a (null) array, but can't be mounted. If I run:

        root@warren-P5K-E:/home/warren# sudo mdadm --misc --detail /dev/md0

    I get:

        mdadm: cannot open /dev/md0: No such file or directory

    and running:

        root@warren-P5K-E:/home/warren# cat /proc/mdstat

    gives me:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        unused devices: <none>

    The data is very important.

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sda
        /dev/sda:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b792 - correct
                 Events : 1

            Number   Major   Minor   RaidDevice   State
        this     1       8       0        1       spare   /dev/sda
           0     0       8      16        0       spare   /dev/sdb
           1     1       8       0        1       spare   /dev/sda
           2     2       8      32        2       spare   /dev/sdc
           3     3       8      48        3       spare   /dev/sdd

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdb
        /dev/sdb:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b7a0 - correct
                 Events : 1

            Number   Major   Minor   RaidDevice   State
        this     0       8      16        0       spare   /dev/sdb
           0     0       8      16        0       spare   /dev/sdb
           1     1       8       0        1       spare   /dev/sda
           2     2       8      32        2       spare   /dev/sdc
           3     3       8      48        3       spare   /dev/sdd

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdc
        /dev/sdc:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b7b4 - correct
                 Events : 1

            Number   Major   Minor   RaidDevice   State
        this     2       8      32        2       spare   /dev/sdc
           0     0       8      16        0       spare   /dev/sdb
           1     1       8       0        1       spare   /dev/sda
           2     2       8      32        2       spare   /dev/sdc
           3     3       8      48        3       spare   /dev/sdd

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdd
        /dev/sdd:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b7c6 - correct
                 Events : 1

            Number   Major   Minor   RaidDevice   State
        this     3       8      48        3       spare   /dev/sdd
           0     0       8      16        0       spare   /dev/sdb
           1     1       8       0        1       spare   /dev/sda
           2     2       8      32        2       spare   /dev/sdc
           3     3       8      48        3       spare   /dev/sdd

    That is on all 4 drives.


  • Benefits and features of different requirements-management systems and tools available?

    - by DevDevDave
    I am looking for a good comparison of the different professional requirements management tools available. I am especially interested in the features available within the different software solutions. In addition to the "obvious" features, I am looking for a professional requirements management system that supports: multi-lingual use; customizable generation of documentation and history (graphs); search features (e.g. full text for comments), ordering, and priorities; version history; bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc. Any kind of integration with the V-Model XT would be a really-nice-to-have feature. Besides that, I'd like to hear any personally motivated recommendations and/or experiences with different requirements management systems. Any input is highly appreciated. Content consulted: similar question; reqm tool with v-model; ToolsJournal.com; nice, but too old paper (pdf)


  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry-standard reference data tables, and many different customers who need to select records from these tables to fill in foreign key information for their customer-specific records. Because these industry-standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry-standard reference files that are shared among all customers.

    I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user-entered text explanation of the change request, the request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin-entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing column definitions.

    I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps a request priority field). I would also like to give the global admins a tool that lists all outstanding data change requests for the users they oversee, sorted by date requested or by user and date requested. Upon selecting a data change request from the list, the admin would be directed to another CRUD screen populated with the fields the customer requested for the new/modified reference table record, along with the customer's text explanation, the request status and the resolution explanation field. At this point the admin could accept, edit or reject the requested change; if accepted, the affected industry-standard reference file would automatically be updated with the appropriate fields, and the data change request record's status, resolution explanation and resolution datetime would all also be appropriately updated.

    However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change-request fields. I'd also like the data change request file to aggregate all change requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification and deletion requests, or the specific reference table and associated customer-entered field values for record creation requests. Does anybody have any ideas on how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
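    One way to keep the production tables clean while still pointing a single request table at any reference table is to record the target table name, the target primary key, and the proposed field values as a serialized payload. A minimal sketch in Python with SQLite (all names are illustrative):

        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE data_change_request (
                id           INTEGER PRIMARY KEY,
                user_id      TEXT NOT NULL,
                requested_at TEXT NOT NULL,
                request_type TEXT CHECK (request_type IN ('insert','modify','delete')),
                target_table TEXT NOT NULL,   -- which reference table
                target_pk    TEXT,            -- NULL for insert requests
                payload      TEXT,            -- proposed column values as JSON
                explanation  TEXT,
                status       TEXT DEFAULT 'pending',
                resolved_at  TEXT,
                admin_id     TEXT,
                resolution   TEXT
            )
        """)

        # A customer proposes a new row for one of the shared reference tables.
        conn.execute(
            "INSERT INTO data_change_request "
            "(user_id, requested_at, request_type, target_table, payload, explanation) "
            "VALUES (?, datetime('now'), 'insert', 'industry_codes', ?, ?)",
            ("cust42", json.dumps({"code": "X99", "label": "New sector"}),
             "Needed for Q3 reporting"),
        )

    On approval, the admin tool deserializes the payload and applies it to the named table, so the reference tables themselves never carry the request columns.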


  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools; it's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise.

    #1: Identify a tactical, high-value business problem where MDM can materially help. Support a customer acquisition and retention program with a 'customer' master data solution. Accelerate new products and services to market with a 'product' master data solution. Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution. Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution. Fix long-standing chart of accounts and cost center problems with a 'financial' master data solution. Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.

    #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.

    #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.

    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion Section of Oracle's Profit Online magazine.


  • How can I ensure our project gets developed successfully, without having any project management experience? [migrated]

    - by Raven13
    I'm a web developer who is part of a three-man team that has been tasked with a rather large and complex development project. Other than some direction and impetus from management, we're pretty much on our own to develop the new website. None of us have any project management experience nor do my two coworkers seem like they would be interested in taking on that role, so I feel like it's up to me to implement some kind of structure to the development process in order to avoid issues down the road. What can I do as a developer without project management experience to ensure that our project gets developed successfully and avoid the pitfalls of developing a project without a plan?


  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use the BAM Data Control using the integrated WLS server (ADRS). If we have to deploy the same application to a standalone WLS, we have to make sure we have the BAM server connection created in WLS; unless we do that, we may face runtime errors.

    In Development mode of WLS (Reference): for development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file: -Djps.app.credential.overwrite.allowed=true

    In Production mode of WLS: Enable MDS. Create and/or register your MDS repository (for more details refer to the MDS documentation). Edit adf-config.xml in your application and add the following tag:

        <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
          <mds-config version="11.1.1.000">
            <persistence-config>
              <metadata-store-usages>
                <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
                </metadata-store-usage>
              </metadata-store-usages>
            </persistence-config>
          </mds-config>
        </adf-mds-config>

    Deploy the application to the WLS server, picking the appropriate repository during deployment from the MDS Repository dialog that pops up.

    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper): Go to EM (http://<host>:<port>/EM). In the left pane, under Deployments, select Application1 (your application). In the right pane, in the top dropdown select "System MBean Browser -> oracle.adf.share.connections -> Server: AdminServer -> Application: <Appname> -> ADFConnections". In the right pane click "Operations -> CreateConnection". Enter the Connection Type as "BAMConnection". Enter the connection name, the same as the one defined in JDev. Click "Invoke", then "Return", then "Operation -> Save". Now, in the ADFConnections in the navigator, select the connection just created and enter all the configuration details. Save and run the page.

    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer): Go to EM (http://<host>:<port>/EM). In the left pane, under Deployments, select Application1 (your application). In the right pane, click on "Application Deployment" to invoke the dropdown and select "ADF -> Configure ADF Connections". Select Connection Type "BAM" from the drop-down. Enter the connection name, the same as the one defined in JDev. Click on "Create Connection"; this should add a new row below under "BAM Connections". Select the new connection and click on the "Edit" icon, which brings up a dialog. Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click OK to dismiss the dialog. Click on "Apply" and run the page.


  • BPM11g Launch - Spotlight on Innovation: A Unified Business Process Management Solution June 17th 2010

    - by Jürgen Kress
    Spotlight on Innovation: A Unified Business Process Management Solution. Thursday, June 17th, 2010, 10 a.m. PT / 18:00 UK / 19:00 CET. Presented by: Hasan Rizvi, Senior Vice President, Oracle Product Management. Business Process Management (BPM) is essential for managing change and increasing business visibility, agility, and efficiency. To make the most of BPM, businesses today need to benefit from a new generation of process management solutions. Join Hasan Rizvi, Senior Vice President, Oracle Product Development, as he discusses Oracle's innovations in the new BPM Suite 11g, which will define the next generation of process management. Discover how you can leverage this complete, open, and integrated BPM solution that delivers: management of all types of processes, including system, human, document, and decision-centric; a simplified path to achieving greater business visibility, agility, and efficiency; a unified process foundation that simplifies process management with a unified process engine and pre-integration of process subsystems; user-centric design that simplifies process modeling and interaction; and social BPM interaction that provides social computing in the context of BPM to simplify and add richness to collaboration. Register today for this live Webcast, another edition in a series introducing the next wave of products in Oracle Fusion Middleware 11g. Did you miss our BPM 11g webcast with Clemens Utschig-Utschig? The recorded version is now available! Here is your feedback: First experience with BPMN 2.0 in Oracle BPM Studio 11g by Hajo Normann; Warum Oracle BPM Studio 11g? by Torsten Winterberg; Oracle BPM 11g, less is more by Léon Smiers; Oracle BPM 11g Integration with ADF and WebCenter Suite by Andrejus Baranovskis; Oracle BPM11g available! by Guido Schmutz. Listen to more feedback here. If you are working on BPM 11g projects and you would like to attend a hands-on training session, please contact Jürgen Kress. Technorati Tags: BPM, BPMN2.0, SOA, Hasan Rizvi, SOA Partner Community


  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that the thought of a single long list of files in Solution Explorer without any folders makes me shudder so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover new files get created with a filename of the object name + “.sql” and often people like to have an extra identifier in the filename to indicate the object type: The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and while it pains me to say it (as I am no great fan of TFS’s version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: If you are using SSDT consider using an Visual-Studio-integrated VCS that can easily handle file renames and file moves I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might rise when using SSDT. More to come in the coming few weeks! @jamiet


  • Five Ideas: Project Management

    - by Sylvie MacKenzie, PMP
    Excerpt from Profit Magazine: “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies.” —Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, on helping AmeriCares deliver aid to Haiti “Primavera P6 Analytics generates information that can help organizations improve their utilization and trim down overall operating costs. But more importantly, it gives organizations improved visibility.” —Yasser Mahmud, vice president of product strategy and industry marketing in Oracle’s Primavera Global Business Unit “Organizations are constantly looking for ways to improve the speed and precision of their decisions and work without creating environments and systems that limit their personnel through rigid structures and inflexible processes. The latest release of Primavera Portfolio Management meets this demand by further streamlining processes and supporting enhanced decision-making, helping drive better value from portfolios. In addition, the new UI clearly demonstrates Oracle's commitment to providing a seamlessly integrated enterprise project portfolio management product suite.” —Mike Sicilia, senior vice president, Oracle's Primavera “Make it a business project, not an IT project. All levels of functional management must have ownership, responsibility, and accountability for the success of the implementation.” —from Eaton Operations Services Manager Marcos Baccetto's 9 Project Management Tips “AEC firms must strategically pursue standardization opportunities in the project management area while preserving the spirit of entrepreneurism and flexibility at an individual project manager level. An enterprise technology platform doesn't only help with standardization of key project management processes across the enterprise; it also improves performance management, team collaboration and client specific reporting at an individual project level.” —Maneesh Chhabra, director of Industry Strategy and Insight at Oracle


  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop: Sqoop and Zookeeper. What is Sqoop? Most businesses store their data in RDBMSes as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various kinds of processing, and to return it back to the RDBMS from the Hadoop system. The data movement can happen in real time or in bulk at various intervals. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and then loads it into HDFS. Essentially it is an ETL tool: it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it into non-SQL (or RDBMS) data stores. Essentially, Sqoop is a command-line tool which does SQL to Hadoop and Hadoop to SQL. It is a command-line interpreter that creates MapReduce jobs behind the scenes to import data from an external database into HDFS. It is a very effective and easy-to-learn tool for non-programmers. What is Zookeeper? ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words: in a Hadoop cluster there are many different nodes, and one node is the master. Let us assume that the master node fails for some reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario, Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks Zookeeper is responsible for: Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster. When any process in the Hadoop cluster needs a certain configuration to complete its task, Zookeeper makes sure that the node gets the necessary configuration consistently. In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected. There are many other tasks Zookeeper performs when it comes to the Hadoop cluster and communication. Basically, without the help of Zookeeper it is not practical to design a new fault-tolerant distributed application. Tomorrow: In tomorrow's blog post we will discuss another very important component of the Big Data ecosystem - Big Data Analytics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I write it keeping in mind that if a developer is not familiar with the concept, he will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information about the organization and the individual. "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble - serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the windows open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem: I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now, I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you." As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission on the production server. This has to be carefully avoided. Here was my response to the individual: "I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe his reaction was correct: you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you ask your manager to remove access from users who do not need it. If he is a good manager, he may already have done so after this recent event. I also see your screenshot. Here is the issue: while you were playing with the project, you probably closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right-click on it and click on Unlock the project. This will enable the DELETE option and you can then delete the project. Next time, be safe out there. It can be dangerous to have admin access to a production server when it is not needed." I have not yet heard from him, but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS


  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way is to implement data synching. The criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access, and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store the Last Modified Date on the iPhone. 2) Upon launching, send a request like getNewData.php?lastModifiedDate=... 3) The server will process it and send back only the data modified since last time. 4) This data is formatted as follows: <+><data id="..."></data></+> (add this to SQLite/Core Data), <-><data id="..."></data></-> (remove this), <%><data id="..."><attribute>newValue</attribute></data></%> (new modified value). I don't want to make <+>, <->, <%>... markers for each attribute as well, because it would be too complicated; so probably when I receive a <%> record, I will just remove the data with the specified id and then add it again (assuming the id here is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I will update the Last Modified Date field. The main problem with this strategy: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again - not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work; but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
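    A minimal sketch of the client-side apply step (in Python rather than Objective-C, with a dict standing in for the SQLite/Core Data store; the +/-/% operations follow the format proposed above, and fetch is a hypothetical network call):

        local = {}  # id -> record, stand-in for the SQLite/Core Data store

        def apply_delta(changes):
            """changes: iterable of (op, record) parsed from the response."""
            for op, record in changes:
                rid = record["id"]
                if op == "+":
                    local[rid] = record
                elif op == "-":
                    local.pop(rid, None)
                elif op == "%":
                    local[rid] = record  # remove-then-add, as proposed

        def sync(last_modified, fetch):
            changes, server_time = fetch(last_modified)
            apply_delta(changes)  # ideally wrapped in one transaction
            return server_time    # persist only after a fully successful apply

    Returning the server's own timestamp, and persisting it only after the whole batch applies, addresses the network-failure case: an interrupted sync simply replays the same delta on the next launch.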

