Search Results

Search found 20270 results on 811 pages for 'package management'.


  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM, in which a recent customer survey reveals the deleterious effects of data fragmentation (by Trevor Naidoo, December 2010).

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions or to decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only added to the complexity. Data fragmentation has become a key inhibitor to delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey over the past two years assessing customers' master data management (MDM) capabilities to get a sense of where they stand. The responses, from 27 respondents in six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of master data domains are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you store your customer information, or whether a customer's address is up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross-reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department tracks its interactions with the same customers independently, and the finance department has yet another perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create the key linkages between customer, product, site, supplier and financial data that make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.

    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches or entered as "''". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common result: customers end up receiving duplicate communications, which not only hurts customer satisfaction but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM
    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:
    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built, business-relevant MDM applications with data stores and workflows
    These master data management capabilities will help move you closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

  • SLK opens SCORM package as a ZIP file

    - by Cherie Riesberg
    Symptom: After installing SharePoint Learning Kit (http://www.codeplex.com/SLK) successfully, everything works except that the SCORM package (a ZIP extension) opens as a ZIP file instead of as a course. You get the normal ZIP prompt: "Do you want to open or save this file?" Problem: the package was zipped at the upper folder level, so the manifest is not where SharePoint expects it and the file is treated as a plain ZIP rather than a SCORM package. Solution: add the contents of the course to the ZIP, not the outer (uppermost) folder. This creates a ZIP file that SharePoint recognizes as a SCORM package.
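
    The fix, concretely: zip the contents of the course folder rather than the folder itself, so that the manifest (imsmanifest.xml) ends up at the root of the archive where SharePoint looks for it. A minimal command-line sketch (folder and file names are just examples):

        # Wrong: zips the outer folder, so the manifest ends up one level deep
        zip -r course.zip MyCourse/

        # Right: zip the *contents* of the course folder so the manifest sits at the archive root
        cd MyCourse
        zip -r ../course.zip .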

    Read the article

  • HTG Explains: How Software Installation & Package Managers Work On Linux

    - by Chris Hoffman
    Installing software on Linux involves package managers and software repositories, not downloading and running .exe files from websites like on Windows. If you’re new to Linux, this can seem like a dramatic culture shift. While you can compile and install everything yourself on Linux, package managers are designed to do all the work for you. Using a package manager makes installing and updating software easier than on Windows.
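
    As a rough illustration of the workflow the article describes, this is what everyday package management looks like with apt on a Debian/Ubuntu system (the package name is just an example):

        # Refresh the package index from the configured repositories
        sudo apt-get update

        # Search the repositories for software
        apt-cache search vlc

        # Install a package; dependencies are resolved and downloaded automatically
        sudo apt-get install vlc

        # Upgrade every installed package to the newest version the repositories offer
        sudo apt-get upgrade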

    Read the article

  • How do I install Nautilus-Elementary?

    - by Srinivas G
    I've added the am-monkeyd PPA and upgraded my system. Yet, there's no sign of elementary in my fresh Maverick RC install. Have I done anything wrong? The PPA upgrades the default nautilus package and there is no separate "nautilus-elementary" package as of now. Now, there are three versions listed in the package properties: 1:2.32.0-0ubuntu1-ppa1 (maverick); 1:2.32.0-0ubuntu5~ppa5 (maverick); 1:2.32.0-0ubuntu1 (maverick); Anything you can make out from this?
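
    One way to see which of those three versions is actually installed, and which repository each candidate comes from, is apt-cache policy - a diagnostic sketch rather than a fix:

        # Show the installed version, the candidate version, and the origin of each available version
        apt-cache policy nautilus

        # If the PPA build is newer than what is installed, a normal upgrade should pull it in
        sudo apt-get update && sudo apt-get upgrade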

    Read the article

  • Create a package for official Realtek ALC665 drivers (Dell XPS 15 L502X)

    - by Nic
    Is it possible for someone to create an ALSA driver package from the official Realtek "LinuxPkg_5.17Beta.tar.bz2" drivers (found via Google)? These drivers provide excellent support for the ALC665 chipset, found e.g. in the Dell XPS 15 notebook series (L502x). All the features that were not working before, such as output selection (HDMI, headphones), are supported. I am asking for a package because the driver is unusable as-is: it comes with an outdated version of ALSA that does not compile on a 3.5 kernel. Apart from that, it also removes all the default snd-* drivers that come with the kernel package. Any help in bringing better support for this device to the official Ubuntu packages is much appreciated. N.
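
    There is no ready-made package for this driver, but one common way to make an out-of-tree kernel module manageable is DKMS, which rebuilds it for every kernel you install. The sketch below is hypothetical - the module name, version and dkms.conf contents would have to match what the Realtek tarball actually builds, and its bundled ALSA code would still need to compile against your kernel:

        # Unpack the vendor sources into the DKMS source tree (names/paths are made up)
        sudo mkdir -p /usr/src/alc665-5.17
        sudo cp -r LinuxPkg_5.17Beta/* /usr/src/alc665-5.17/

        # Minimal dkms.conf describing how to build and install the module
        sudo tee /usr/src/alc665-5.17/dkms.conf > /dev/null <<'EOF'
        PACKAGE_NAME="alc665"
        PACKAGE_VERSION="5.17"
        BUILT_MODULE_NAME[0]="snd-hda-intel"
        DEST_MODULE_LOCATION[0]="/updates"
        MAKE[0]="make KERNELDIR=/lib/modules/${kernelver}/build"
        CLEAN="make clean"
        AUTOINSTALL="yes"
        EOF

        # Register, build and install the module through DKMS
        sudo dkms add -m alc665 -v 5.17
        sudo dkms build -m alc665 -v 5.17
        sudo dkms install -m alc665 -v 5.17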

    Read the article

  • Packages are not available under 13.04?

    - by Sven
    I have a small problem with installing packages under Ubuntu 13.04. Yesterday I wanted to install "pdfshuffler" (https://apps.ubuntu.com/cat/applications/raring/pdfshuffler/). If I go to the Ubuntu Software Center and search for "pdfshuffler", there is one item in the list (the package I want to install). When I click once on this item, the install button does not appear. If I then click on "further information", an error message appears telling me that there is no package called pdfshuffler in my package sources. I tried to install other packages like Eclipse or Supertux but nothing works. Why can't Ubuntu find these packages? Best regards, Sven...
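
    pdfshuffler normally comes from the universe component, so the usual cause of "not in your package sources" is that universe is disabled or the package index is stale. A hedged command-line check (running the install in a terminal also gives a much more detailed error than the Software Center does):

        # Make sure the universe component is enabled, then refresh the index
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu raring universe"
        sudo apt-get update

        # Check whether apt can now see the package and where it would come from
        apt-cache policy pdfshuffler

        # Install from the terminal to get the full error message if it still fails
        sudo apt-get install pdfshuffler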

    Read the article

  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today to version 6.24-1build0.10.10.1 after being prompted by the update manager. However this now causes some compilation failures so I'd like to revert back to the previous version that I had installed. I've tried using Synaptic but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10 But I'm not sure that I have the correct version: Reading package lists... Done Building dependency tree Reading state information... Done E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 change log?
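
    The general recipe is to ask apt which versions it can actually see, install one of them by its exact version string, and then hold the package so the update manager does not immediately upgrade it again. If the old version has been removed from every configured repository, apt cannot install it no matter which string you pass - the changelog only tells you which versions once existed, not which are still downloadable. A hedged sketch:

        # List every version apt can currently see, and which repository provides it
        apt-cache madison sun-java6-jdk
        apt-cache policy sun-java6-jdk

        # Install a specific version, using an exact string from the output above
        sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10

        # Hold the package so it is not upgraded again on the next update run
        echo "sun-java6-jdk hold" | sudo dpkg --set-selections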

    Read the article

  • Building lirc package from source with patches

    - by joystick
    I'd like to build the latest lirc package for 12.04 with two patches from http://bit.ly/17779VW to make the USB Infrared Toy v2 work. Running sudo apt-build source lirc left the following in /var/cache/apt-build/build:
    drwxr-xr-x 10 root root   4096 Nov  5 07:07 lirc-0.9.0
    -rw-r--r--  1 root root 113909 May  5  2011 lirc_0.9.0-0ubuntu1.debian.tar.gz
    -rw-r--r--  1 root root   1553 May  5  2011 lirc_0.9.0-0ubuntu1.dsc
    -rw-r--r--  1 root root 857286 May  5  2011 lirc_0.9.0.orig.tar.bz2
    Running sudo apt-build build-source lirc then gave me "Some error occured building package", which is not really informative. I have successfully built the patched lirc from source, but now I would like to get a .deb package. Where can I look at this "some error" in detail? Thank you, Alexei
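
    A more transparent route than apt-build is to fetch the source package directly, apply the patches, bump the version and build with the ordinary Debian tools: that produces .deb files plus a complete build log to dig through. A hedged sketch (the patch file names are placeholders, and dch/debuild come from the devscripts package):

        # Install build dependencies and fetch the source (needs deb-src lines enabled)
        sudo apt-get build-dep lirc
        apt-get source lirc
        cd lirc-0.9.0

        # Apply the two patches
        patch -p1 < ../usb-irtoy-1.patch
        patch -p1 < ../usb-irtoy-2.patch

        # Record a new local package revision so the result is distinguishable from the archive version
        dch --local +irtoy "Apply USB Infrared Toy v2 patches"

        # Build unsigned binary packages and keep the full log - this is the detail apt-build hides
        debuild -us -uc 2>&1 | tee ../lirc-build.log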

    Read the article

  • Ubuntu gone out of order - Won't install any packages. What do I do?

    - by Aborted
    Lately, I've been getting some strange behaviour from Ubuntu. First and most important, it won't install updates: it gives a package installation error and simply won't work. Earlier I tried to install TeamViewer via the Software Center, but got the same package error. I also feel like the connection speed is slower than it should be - don't know if that one is relevant to this case. What's wrong with my installation? How do I fix these package installation errors?
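
    Without the exact error text it is hard to say what broke, but the usual first-aid sequence for Software Center "package installation error" messages is to let dpkg and apt finish whatever was interrupted and then retry from a terminal, which prints the real error. A generic, hedged sketch:

        # Refresh the package lists (also surfaces repository/network problems)
        sudo apt-get update

        # Finish any package configuration that was interrupted half-way
        sudo dpkg --configure -a

        # Ask apt to repair broken or unmet dependencies
        sudo apt-get -f install

        # Retry the failing install from the terminal to capture the full error message
        sudo apt-get install some-package    # placeholder; substitute whatever package fails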

    Read the article

  • Whenever I try to remove a Debian package I receive an Error

    - by Brenton Horne
    Whenever I type into the terminal the command: sudo dpkg -r '/home/brentonhorne/Downloads/virtualbox.deb' I receive the error: dpkg: error: --remove needs a valid package name but '/home/brentonhorne/Downloads/virtualbox.deb' is not: illegal package name in specifier '/home/brentonhorne/Downloads/virtualbox.deb': must start with an alphanumeric character Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' ! How do I get around this problem?
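
    dpkg -r expects the name of an installed package, not the path of the .deb it came from. A hedged sketch of looking the name up and then removing it (the VirtualBox package name below is only a guess - the .deb itself tells you the real one):

        # Read the package name out of the downloaded .deb
        dpkg-deb --field /home/brentonhorne/Downloads/virtualbox.deb Package

        # Or check what is actually installed
        dpkg -l | grep -i virtualbox

        # Remove it by name, keeping configuration files ...
        sudo dpkg -r virtualbox-4.2    # example name; use the one printed above

        # ... or purge it, deleting configuration files too
        sudo dpkg -P virtualbox-4.2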

    Read the article

  • How to package a file into .deb?

    - by Fluffy
    I'm trying to make a simple .deb package which would basically edit a config of another package I listed as a dependency. I added the required manipulations to the postinstall file. The problem is I can't find a way to package an example config which should be copied and edited from the postinstall script. At the moment I just have a folder with the sample config, of which I create a tar.gz and orig.tar.gz, then run dh_make in that folder, edit the generated files and run debuild. However, if I open the resulting .deb file with an archive manager, I can see that the sample file was not included at all.
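
    dh_make only creates packaging templates; nothing gets shipped in the .deb unless the build actually installs it. With a plain debhelper package, one hedged way to ship the sample config is a debian/install file that copies it into the package, after which the postinst can copy and edit it on the target machine (all file names below are made up):

        # Tell dh_install to ship the sample file inside the package
        # (format: <path in the source tree>  <destination directory inside the package>)
        echo "myapp.conf.sample usr/share/myapp/" > debian/install

        # In debian/postinst you can then do something like:
        #   cp /usr/share/myapp/myapp.conf.sample /etc/otherpackage/myapp.conf
        #   sed -i 's/^port=.*/port=8080/' /etc/otherpackage/myapp.conf

        # Rebuild and confirm the file really made it into the .deb
        debuild -us -uc
        dpkg-deb -c ../*.deb | grep myapp.conf.sample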

    Read the article

  • can't install software--can I fix missing dpkg?

    - by user125272
    I get the message "New software can't be installed, because there is a problem with the software currently installed. Do you want to repair now?" I hit Repair and get:
    Package operation failed
    The installation or removal of a software package failed.
    Details => installArchives() failed: Could not exec dpkg!
    Error in function (synaptic:12725): GLib-CRITICAL **: g_child_watch_add_full: assertion 'pid > 0' failed
    Could not exec dpkg!
    E: Sub-process /usr/bin/dpkg returned an error code (100)
    A package failed to install. Trying to recover:
    sh: 1: dpkg: not found
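
    "dpkg: not found" suggests the dpkg binary itself is missing, and you cannot reinstall it the normal way because apt needs dpkg to install anything. One hedged recovery path is to fetch the dpkg .deb for your release and unpack it by hand (a .deb is just an ar archive), then let apt repair the rest - exact file names depend on your release and architecture:

        # Confirm the binary really is gone rather than just missing from PATH
        ls -l /usr/bin/dpkg

        # Download the dpkg package for your release; if this fails, grab the .deb
        # manually from packages.ubuntu.com for your exact release/architecture
        cd /tmp
        apt-get download dpkg

        # Unpack the archive and extract its data tarball straight onto the filesystem
        ar x dpkg_*.deb
        sudo tar -xf data.tar.* -C /    # member may be data.tar.gz or data.tar.xz

        # With dpkg back in place, let apt finish and repair everything else
        sudo dpkg --configure -a
        sudo apt-get -f install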

    Read the article

  • Error installing Package Control for Sublime Text 3 on Ubuntu 14.04

    - by user1837378
    This is the error. It comes up when I paste and enter the installation code (which I get from the Package Control website) and each time I open up Sublime Text. Package Control Your system's locale is set to a value that can not handle non-ASCII characters. Package Control can not properly work unless this is fixed. On Linux, please reference your distribution's docs for information on properly setting the LANG environmental variable. As a temporary work-around, you can launch Sublime Text from the terminal with: LANG=en_US.UTF-8 sublime_text I had the same problem with Ubuntu 13.04 so it's probably not version-dependent.
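
    The warning means the environment Sublime Text starts in has no UTF-8 locale. Beyond the temporary LANG workaround it suggests, generating a UTF-8 locale and making it the system default usually clears the message permanently (a hedged sketch - pick the locale matching your language):

        # Generate a UTF-8 locale and make it the system-wide default
        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8

        # Log out and back in (or reboot), then verify what Sublime will inherit
        locale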

    Read the article

  • Launchpad: Missing build dependencies even though dependency should be contained in uploaded package

    - by Chris
    I want to backport gcc-4.7 from raring to precise, so I ran backportpackage and uploaded gcc-4.7 to my PPA. However, when Launchpad tries to build it, it complains about a missing dependency: Dependency wait on rhenium (virtual64) Missing build dependencies: libx32gcc1 Started on 2013-10-24 Finished on 2013-10-24 (took 2 minutes, 46.6 seconds) From looking at the package info for gcc-4.7, it seems that this dependency should itself be produced by the backported gcc-4.7 package. What do I need to do to make Launchpad find it and build my package?
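
    A hedged reading of the failure: libx32gcc1 appears to be one of the multilib binaries that the gcc-4.7 source itself builds in raring, so on precise nothing in the archive (or in a freshly backported PPA that has not built yet) can satisfy that build-dependency - a bootstrap problem rather than a missing upload. Two quick checks before deciding whether to trim the x32 multilib from the backport's debian/control and debian/rules:

        # Inspect the build-dependencies declared by the source package you uploaded
        grep -i '^Build-Depends' gcc-4.7_*.dsc

        # Check whether anything in precise (or your PPA, once added to sources) provides libx32gcc1
        apt-cache policy libx32gcc1
        apt-cache search libx32gcc1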

    Read the article

  • dpkg returns error 10 when processing package php5-common

    - by Jesse
    I'm having trouble installing PHP in 13.04. It seems like the package manager can't (re)configure the php package. I already tried purging every php* package, removing the cache files in /var/cache/apt, and other solutions I've found, but nothing seems to work. Here's the error output:
    $ sudo dpkg --configure -a
    Setting up php5-common (5.4.9-4ubuntu2.3) ...
    dpkg: error processing php5-common (--configure):
    subprocess installed post-installation script returned error exit status 10
    Errors were encountered while processing:
    php5-common
    sudo apt-get install -f and sudo apt-get install php5-common php5 all return the same error. How can I fix this?
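
    Exit status 10 is coming from php5-common's own post-installation script, so the quickest way to see what actually fails is to run that maintainer script by hand with shell tracing - a debugging sketch rather than a fix (it assumes the postinst is a shell script, which it normally is):

        # Trace the failing maintainer script to find the exact command that exits with 10
        sudo sh -x /var/lib/dpkg/info/php5-common.postinst configure

        # Once the underlying cause is fixed, let dpkg finish configuring everything
        sudo dpkg --configure -a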

    Read the article

  • Should package structure closely resemble class hierarchy?

    - by Panzercrisis
    Pretty simple question. Should package structure closely resemble class hierarchy? If so, how closely? Why or why not? For instance, let's say you've got class A and class B, plus class AFactory and class BFactory. You put class A and class B in the package com.something.elements, and you put AFactory and BFactory in com.something.elements.factories. AFactory and BFactory would be further down the hierarchy package-wise, but they'd be further up class-wise. Is this sort of thing a good idea or a bad idea?

    Read the article

  • How do I download a corrupted package again?

    - by user64720
    Ubuntu 12.04 can't install the Firefox 13 update because the package is corrupted. The attempted install returns this error (translated from my language to English): /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes). I can tell that the package at /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb is corrupted, but even as admin I can't delete it so that it can be downloaded again. How should I proceed? EDIT: There was a single package causing this conflict; please see this question to understand the whole situation: Why can't I install from software center?
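
    The cached archive belongs to root, which is why it cannot be deleted from a normal file-manager session; removing it with sudo (or clearing the whole cache) forces apt to download a fresh copy. A hedged sketch:

        # Delete just the corrupted archive ...
        sudo rm /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb

        # ... or clear every cached .deb
        sudo apt-get clean

        # Re-download and install the update
        sudo apt-get update
        sudo apt-get install --reinstall firefox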

    Read the article

  • Can't install fastcgi on Ubuntu Server: Package libapache2-mod-fastcgi is not available

    - by BlueDragon
    When I try to install fastcgi on Ubuntu Server 12.04, I get the following error: sudo apt-get install libapache2-mod-fastcgi Reading package lists... Done Building dependency tree Reading state information... Done Package libapache2-mod-fastcgi is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libapache2-mod-fastcgi' has no installation candidate Any solution?
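
    On 12.04 libapache2-mod-fastcgi normally lives in the multiverse component (it is non-free), so "no installation candidate" usually just means multiverse is not enabled. A hedged sketch:

        # Enable the multiverse component (or tick it in Software Sources), then refresh
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise multiverse"
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise-updates multiverse"
        sudo apt-get update

        # Confirm apt can now see the package, then install it
        apt-cache policy libapache2-mod-fastcgi
        sudo apt-get install libapache2-mod-fastcgi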

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:
    1. Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.
    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except that there are a couple of use cases I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet - i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size, however it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that will delete ulogs older than X, once I find out how far back Tokyo Tyrant needs in the update log in order to failover. Does anyone have any experience with this Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size vs how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
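
    Until the safe look-back window is known, the cron-driven cleanup described above is simple enough; the sketch below keeps a fixed number of days of ulog files and is only safe once you have confirmed that replication never needs to replay further back than that (the path, file pattern and retention are assumptions):

        #!/bin/sh
        # prune-ulog.sh - delete Tokyo Tyrant update logs older than RETENTION_DAYS.
        # Assumes -ulim is in use, so old logs are closed files no longer being written.
        ULOG_DIR=/var/lib/tokyotyrant/ulog    # adjust to your -ulog directory
        RETENTION_DAYS=7

        find "$ULOG_DIR" -type f -name '*.ulog' -mtime +"$RETENTION_DAYS" -print -delete

        # Example crontab entry to run it nightly:
        #   0 3 * * * /usr/local/sbin/prune-ulog.sh >> /var/log/prune-ulog.log 2>&1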

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of (SNMP-managed) switches? Is it reasonable to ask the organization to supply a list such as this: switch name, IP address of switch, location, SNMP community strings? Or are there standard ways to run discovery scans - UDP broadcasts? After having generated a repository such as the above, given a single switch, how do I query it for the list of all devices attached to it? Finally, how do I selectively power down/power up ports (remotely, using SNMP)? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.
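
    Whatever the C#/SharpSNMP side ends up looking like, the underlying operations are easy to prototype first with the Net-SNMP command-line tools: walk the interface table for discovery, and set ifAdminStatus to shut a port down or bring it back up. Community strings, addresses and ifIndex values below are placeholders, and note that disabling a port this way is not the same as cutting PoE power (that uses the POWER-ETHERNET-MIB where the switch supports it):

        # List the interfaces (ports) a switch exposes, with their descriptions and ifIndex values
        snmpwalk -v2c -c public 10.0.0.1 IF-MIB::ifDescr

        # Read the admin status of the port with ifIndex 10 (1 = up, 2 = down)
        snmpget -v2c -c public 10.0.0.1 IF-MIB::ifAdminStatus.10

        # Administratively disable that port (requires the read-write community string)
        snmpset -v2c -c private 10.0.0.1 IF-MIB::ifAdminStatus.10 i 2

        # Re-enable it
        snmpset -v2c -c private 10.0.0.1 IF-MIB::ifAdminStatus.10 i 1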

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients (10): all Apple machines running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people. At present we are looking to consolidate all our image files into one location; right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed: centralised on one server, but allows us to search via Spotlight (not essential, but would be nice); includes searchable metadata information such as date, location, title; open-source or as low cost as possible; allows simultaneous users to import files. So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to the library... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables? Yes, I know there are a lot of how-tos and already-answered questions about this topic, like this one: How do you organise the cables in your racks? Currently I am searching the web for different techniques (alternatives) for labeling the cables at the server racks and/or data centers. Unfortunately I do not have any experience with labeling/documentation of network cables on a large scale. As far as I could find out, the current labeling techniques are coloring and self-defined printed labels (numbering, text), possibly following a standard, and these are what is usually used. I want to know if QR codes, RFID (OK, RFID in a data center would probably be a bad idea because of the radio frequencies, wouldn't it?), barcodes or similar have already been used by some administrators, or why they did not consider such techniques at all? Too complicated (with a QR scanner etc.) if you are in front of the cables and want quick feedback about what a cable is? What alternatives are out there? Advantages/disadvantages? Best practices? I would appreciate any help on this topic, thank you! Regards, Alex

    Read the article

  • Change Management Software

    - by Andrew
    I manage an 80,000-user CIS application written in Uniface. Every form in the application, and many of its processes, are represented by .frm files. We have hundreds of these files and 5 instances of the application. Instances include multiple production installations which must be kept in sync. We do not get MD5 checksums from our vendor for files that are released to us as patches. We have been using a spreadsheet to track changes, but this is far from ideal. Is there a commercial application that can be purchased that will allow us to track changes to the instances? Thank you all! EDIT: Patches are released as zip files containing either FRM files or SQL files or a mix of both. SQL files contain statements that need to be run in Oracle. Patches are also assigned unique patch numbers.
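
    Since the vendor supplies no checksums, one low-tech baseline - whatever commercial tool ends up on top - is to generate an MD5 manifest of the .frm files on each instance and diff the manifests between instances and against each applied patch. A hedged sketch (paths and host names are placeholders):

        # On each instance, fingerprint every .frm file into a manifest named after the host
        cd /opt/cis/forms    # example path to the application's form files
        find . -type f -iname '*.frm' -exec md5sum {} + | sort -k2 > /tmp/frm-manifest-$(hostname).txt

        # Collect the manifests centrally and compare any two instances
        diff /tmp/frm-manifest-prod1.txt /tmp/frm-manifest-prod2.txt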

    Read the article
