Search Results

Search found 30724 results on 1229 pages for 'backup solution'.


  • Looking for a suitable backup solution from Mac OS X to an offsite CentOS 6 server for 1TB of working data

    - by Brady
    I'll start with what we have in place currently:

    - An on-site file server (Mac OS X Server) used by GFX designers, holding a working set of 1TB of data.
    - An offsite server with 2TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day rotation (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.

    The problem: the on-site tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain getting people to remember to change the tape; it often gets forgotten. The quick solution would be to buy an LTO-5 drive and tapes, but that has been turned down because of the cost.

    What I would like:

    - Something that works the way rsync does: only changed data is sent over the wire, and it can be scheduled to run multiple times a day.
    - Data that is sent is compressed and goes over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. As an example, with 1TB of working data and 60GB of changes a day, I'd expect around 1.84TB to be stored on the offsite server.
    - Works with the Mac OS X server and the CentOS 6 server.
    - Doesn't cost an arm and a leg; it must be cheaper than an LTO-5 drive with tapes (around £1500).
    - Can be set up to run autonomously.
    - Has some sort of control panel that allows an admin to easily restore a file or folder.

    Any recommendations?
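    One tool that matches most of these points is rsnapshot, which drives rsync over SSH and hard-links unchanged files between generations, so 14 daily snapshots of mostly-static data stay close to the 1.84TB estimate above. A minimal sketch of a pull setup on the CentOS server follows; host names, paths and the schedule are assumptions, and the web control panel would still need something else:

        # /etc/rsnapshot.conf on the CentOS box -- fields must be TAB-separated
        snapshot_root   /backups/gfx/
        cmd_ssh         /usr/bin/ssh
        retain          daily   14
        # -z compresses on the wire; --delete keeps each snapshot true to the source
        rsync_long_args -z --delete --numeric-ids --delete-excluded
        backup          gfxadmin@macserver:/Volumes/Work/    macserver/

        # cron entry on the CentOS box, e.g. once per day at 02:30:
        # 30 2 * * *  /usr/bin/rsnapshot daily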

    Read the article

  • Does the OSS backup solution Amanda (amanda.org) support sparse files?

    - by user97961
    I want to (or rather have to) back up my KVM virtual machine images. I have searched for days for a good backup solution, and I know Amanda is a very good one. It would be kind if someone could tell me whether the following is supported:

    - Trigger the creation of an LVM snapshot (by invoking a shell script that I will write for that purpose).
    - Do a differential/delta backup of my qcow2 sparse files sitting on LVM, i.e. only copy the actually changed bits/bytes. It also has to cope with the fact that the file being backed up is sparse. (rsync seems to have problems with this: if the file does not yet exist on the other side, it creates a full file, not a sparse file.)
    - Release the LVM snapshot (by invoking a script that I will write for that purpose).

    It's strange that I have found no documentation anywhere about this when searching the internet. Zmanda (the commercial edition) has support for Xen VM backup, but not for KVM as far as I can tell.
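    Whatever tool ends up doing the copy, the snapshot hooks described here are usually a few lines of shell. A minimal sketch, assuming a volume group vg0 with the images on an LV named vmimages (all names are illustrative):

        #!/bin/sh
        # pre-backup hook: snapshot the LV so the qcow2 images are quiescent
        lvcreate --size 5G --snapshot --name vmimages-snap /dev/vg0/vmimages
        mount -o ro /dev/vg0/vmimages-snap /mnt/vm-snap

        # initial seed: --sparse recreates holes when the destination file
        # does not exist yet (the full-file problem noted above)
        rsync -a --sparse /mnt/vm-snap/ backup@host:/backups/vm/
        # later runs would use --inplace for delta updates; note that many
        # rsync versions refuse to combine --inplace with --sparse

        # post-backup hook: release the snapshot
        umount /mnt/vm-snap
        lvremove -f /dev/vg0/vmimages-snap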

    Read the article

  • NTBackup (on WS2k3) fails to back up remote server (WS2k8R2) with "Error: \\G5-01\c is not a valid drive, or you do not have access."

    - by Mark A
    We run an NTBackup job on a Windows Server 2003 R2 SP2 server with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2GB worth of backup and then dies with one of the error messages below; it should be more like 20GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths and mapped drives. The result is the same each time; the only thing that varies is the amount of data backed up before it chokes. We've also run NTBackup from the UI, from the command line and as a scheduled task. We are backing up to 400/800GB tapes and they have plenty of space available (blank media).

        Error: \\G5-01\c is not a valid drive, or you do not have access.
        Error: \\G5-01\c$ is not a valid drive, or you do not have access.
        Error: Y: is not a valid drive, or you do not have access.
        Error: Could not access or create backup catalog files. Verify that you have full access to the working folder and there is disk space available.

    The job is run as Administrator and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it shows a success audit for each logon. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've tried a bunch of different options and are wondering where to look next. We've been super happy with our reliable scheduled task for years, but this one is stumping us!

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500GB of data that's currently spread across my various machines. I don't care about data retention rates, as this is only a backup of my data, not primary storage for it; if the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of these requirements: Mozy and Carbonite. However, both impose low bandwidth caps, on the order of 50kB/s, which prevent me from backing up a data set of this size effectively (it would take somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that by default it ignores pretty much every file in my backup set, because the files are mostly .iso and .vmdk files, which aren't backed up by default. There are other services, such as EC2, which don't have such bandwidth caps, but they typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after 2-3 months. To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.) Does anything exist meeting those requirements, or am I basically hosed?

    Read the article

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for a backup tool (ideally free as in speech or beer) for Unix-like OSes which can store deduplicated backups, i.e. only non-redundant content takes up additional space. I have already looked at dirvish (my first candidate) and rsnapshot, which use hard links to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes of 3GB, VMware images of 10GB), such files are stored again in their entirety even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that does this is obnam (http://liw.fi/obnam), and it even supports zlib compression and gpg encryption -- nice! But it is very slow, AFAICT. Does anyone know of other, solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
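    To make the block-based approach concrete, here is a toy sketch of content-addressed block storage; the store path and the fixed 1MiB block size are arbitrary choices, not taken from any particular tool:

        #!/bin/sh
        # usage: dedup-store.sh FILE MANIFEST
        # Splits FILE into fixed-size blocks, names each block by its SHA-256
        # hash, and copies a block into the store only if that hash is new.
        # MANIFEST records the block order, so FILE can be reassembled with
        # something like: cat $(sed "s|^|$STORE/|" manifest) > restored-file
        set -e
        STORE=${STORE:-/backup/blocks}
        FILE=$1
        MANIFEST=$2
        mkdir -p "$STORE"
        TMP=$(mktemp -d)
        split -a 4 -b 1M "$FILE" "$TMP/blk."
        : > "$MANIFEST"
        for blk in "$TMP"/blk.*; do
            h=$(sha256sum "$blk" | cut -d' ' -f1)
            [ -f "$STORE/$h" ] || cp "$blk" "$STORE/$h"   # skip known blocks
            echo "$h" >> "$MANIFEST"
        done
        rm -r "$TMP"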

    Read the article

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP SimpleSave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it.

    The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition.

    My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size -- the first 18GB compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem -- but it was very slow on my admittedly somewhat derelict Pentium M processor: that first 18GB took about 30 minutes.

    So I've resorted to dumping the disk state and partition data separately. I've dumped the partition layout with

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
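    For completeness, the corresponding restore path, assuming only the two dump files above and that the drive still enumerates as /dev/sdb, would look something like:

        # recreate the partition table from the sfdisk dump
        sfdisk /dev/sdb < sfdisk.-d.out
        # stream the compressed ntfsclone image back onto the partition
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -

    One further hedged thought: dd-ing the first few sectors (e.g. dd if=/dev/sdb of=track0.bin bs=512 count=2048) would also capture the MBR and any boot-track data that sits outside the partition, which sfdisk -d does not save.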

    Read the article

  • How do I back up servers to an SSH host with low traffic, access to old versions, and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for more years than I can remember, until waking up lately and realising, contrary to my prior belief: actually, I care! :) Now I have a central data server at home where I want to attach external media, to which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years, and also tried tar over ssh, but I don't yet know the simplest and most maintainable way to do it. Here's my workload:

    - A typical LAMP server (<5GB of data, so lots of small files) which I'd like to back up fully, connected via 10Mbit
    - My personal stuff (<750GB of data) from a Mac, connected via GE
    - My passwords in an encrypted container (100MB) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25GB) as a Maildir, which I need to keep in a readable format
    - Some archives (tar.*) which I need to back up only once and keep in a readable format

    (Deleted my own ideas, as I'm here for suggestions.) What I need:

    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions, able to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)

    How, and with what software, would you back up this stuff? Hints to tools that solve only part of my problem (like encryption) are also greatly appreciated. Greets
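    One tool that covers several of these points (ssh transport, compression, gpg encryption, incremental revisions) is duplicity. A minimal sketch, with host, path and key names as placeholder assumptions:

        # full-or-incremental backup over SSH, gpg-encrypted with KEYID
        duplicity --encrypt-key KEYID /home/me sftp://backup@dataserver//srv/backups/me
        # list what the current backup contains
        duplicity list-current-files sftp://backup@dataserver//srv/backups/me
        # restore a single file from the newest backup
        duplicity restore --file-to-restore scripts/deploy.sh \
            sftp://backup@dataserver//srv/backups/me /tmp/deploy.sh
        # drop revisions older than ~3 months
        duplicity remove-older-than 3M --force sftp://backup@dataserver//srv/backups/me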

    Read the article

  • BonitaSoft releases Bonita Open Solution 5.6 and offers a free trial of its open-source business process management solution

    BonitaSoft releases Bonita Open Solution 5.6, and offers a free trial of its open-source business process management solution. BonitaSoft, one of the leaders in open-source business process management (BPM), has announced the release of Bonita Open Solution 5.6. This new version incorporates major upgrades, offered within a new range of BPM solutions, to "maximize productivity, accelerate putting process-based applications into production, and secure critical deployments". The Bonita Open Solution suite is designed to meet the evolving needs of BPM projects that require greater collaboration between users...

    Read the article

  • Problem Solving vs. Solution Finding

    - by ryanabr
    By and large, most developers fall into one of two camps; I will try to explain what I mean by way of example. A manager gives a developer a task that is communicated like this: "Figure out why control A is not loading on this form." Right there it could be argued that the manager should have given better direction and said something more like: "Control A is not loading on the form; fix it." They might sound like the same thing to most people, but the first statement will have the developer problem solving the reason why it is failing, while the second should have the developer looking for the solution to make it work, not focusing on why it is broken. In the end they might amount to the same thing, but I usually see the first approach take far longer than the second.

    The Problem Solver: The problem solver's approach to fixing something that is broken is likely to take the error or behavior being observed and start researching it using a tool like Google or any other search engine. Seven times out of ten this will yield results for the most common issues. The challenge is in the other 30% of issues, which will take the problem solver down the rabbit hole and cause them not to surface for days on end while every avenue is explored for the cause of the problem. In the end, they will probably find the cause of the issue and resolve it, but the cost can be days, or weeks, of work.

    The Solution Finder: The solution finder's approach to a problem begins the same way the problem solver's does. The difference comes in the more difficult cases. Rather than stick to the pure "this has to work, so I am going to work with it until it does" approach, the solution finder will look for other ways to satisfy the requirements, which may or may not use the original approach. For example, say there are two areas of an application with externally equivalent features, meaning that from a user's perspective the behavior is the same, and for whatever reason area A is now not working while area B still works. The problem solver will dig in to see why area A is broken, whereas the solution finder will investigate the difference between the two areas and solve the problem by potentially working around it. The other notable difference between the two types of developers is the point they reach before they re-emerge from their task: the problem solver will likely emerge with a triumphant "I have found the problem", whereas the solution finder will emerge with the more useful "I have the solution".

    Conclusion: At the end of the day, users are what drive features in software development; without users there is no need for software. In today's world of software development, with so many tools to use and generally tight schedules, I believe that a workaround that takes 8 hours is a more fruitful approach than the purer solution to the problem that takes 40.

    Read the article

  • Windows Server Backup 2012 error - "The version does not support this version of the file format."

    - by Cylindric
    I'm trying out Windows Server 2012, specifically Windows Server Backup, as it would be a very useful way to see us through until we can upgrade our 'real' backup system. I'm backing up to a network share, and Windows Server Backup on a Server 2008 machine works fine. On the 2012 server, however, I get an error:

        Backup of volume \\?\Volume{298d1a7d-f80f-11e1-93e8-806e6f6e6963}\ has failed. The version does not support this version of the file format.

    It's a VHDX written by WSB itself, so I'm not sure what version it's complaining about. I can see a whole bunch of files in the destination, so I don't think it's a simple authentication issue, but only about 8MB of the VHDX ever gets written.

    Read the article

  • How can ShadowProtect SBS backup to alternating external drives?

    - by detly
    I am trying to configure ShadowProtect SBS (v4.1.5.10129) on Windows Server 2003 SBS to back up my server hard drives to two alternating external drives. What I want is to be able to swap one drive for the other every Friday and have ShadowProtect continue on the same schedule. Ideally, this would require absolutely no user interaction whatsoever, apart from physically unplugging one drive and connecting the other. The trouble is, Windows Server 2003 does not allow you to assign the same drive letter to two different devices: if I plug in drive #1 and assign it drive letter X:, then the next week when I unplug it and plug in drive #2, that drive gets some other letter. Since ShadowProtect is set to back up to X:\, it can't find the destination and the backup fails. The drives are Samsung STORY Station 3.0 2TB drives. How can I configure things so I can just swap the drives over every week and not worry about having to reconfigure drive letters every time?

    Read the article

  • Concurrent modification during backup: rsync vs dump vs tar vs ?

    - by pehrs
    I have a Linux log server where multiple applications write data. Data is written in bursts, and to a lot of different files. I need to make a backup of this mess, preferably preserving as much coherence between the file versions as possible and avoiding truncated files. The total amount of data on the server is about 100GB. What I would really want (but can't have) is to shut down, back up the system cold, and then start it up again. What kind of guarantees against concurrent modification do the various backup tools give? When do they "freeze" the file versions? I am looking at rsync, dump and tar at the moment, but I am open to other (open source) alternatives. Changing the applications or blocking writes during backups is sadly not an option. The system is not running LVM (yet), but I have considered it for when I rebuild the system, and then using snapshots.
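    For reference, the LVM route being considered gives a crash-consistent, point-in-time view, so every file in the backup is frozen at the same instant. A minimal sketch, with volume and mount names as assumptions:

        # freeze a point-in-time view of the log volume
        lvcreate --size 10G --snapshot --name logs-snap /dev/vg0/logs
        mount -o ro /dev/vg0/logs-snap /mnt/logs-snap
        # back up from the snapshot instead of the live filesystem
        tar -czf /backup/logs-$(date +%F).tar.gz -C /mnt/logs-snap .
        # release the snapshot (its copy-on-write space is finite)
        umount /mnt/logs-snap
        lvremove -f /dev/vg0/logs-snap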

    Read the article

  • How to reuse backup on Time Machine on Snow Leopard after a logic board change, after choosing wrong

    - by kmiffy
    After my logic board was replaced, I connected my laptop back to my network, and Time Machine gave me a popup, as shown in this thread: http://superuser.com/questions/78068/recycle-time-machine-for-new-machine/78264#78264 I misread the question and clicked on "Create New Backup" when I should have clicked on "Reuse Backup" to connect to my old backup file. How can I trigger that popup again? Turning Time Machine on and off does not work, and the instructions on forums to fix it via the terminal don't work because Snow Leopard is missing the fsaclctl command (and I'm also not familiar with terminal commands). Thanks.

    Read the article

  • Why can't I create a Windows backup on my secondary disk?

    - by Brian Sullivan
    I've installed Windows 7 Ultimate on an SSD that I've added to the XPS desktop I bought from Dell. I would like to use the built-in backup functionality to create incremental backups and store them on the large drive that came with the machine. I formatted the large drive and converted it to a basic disk. However, when I try to set the backup location to the large internal disk (E:\) in the "Set up backup" wizard, I get a message saying, "A system image cannot be saved on a drive that your computer boots from or that Windows is installed on." Windows is not installed on that disk. I even deleted the OEM partition that was on it and removed the disk completely from the boot order in the BIOS. Any clue why Windows is griping at me about this?

    Read the article

  • How do I back up (from Vista Home Premium) to a FAT32 HDD connected to a wireless router with no user account or password set?

    - by Scubadooper
    I have a Seagate Expansion 2TB HDD which I've managed to get working on my wireless router (thanks to fat32format); however, I haven't managed to get my backup working with it. Vista Home Premium requires that I input a username and password in the backup utility, but as far as I'm aware no such access control is set on the HDD:

    - Can I set up access control on the HDD? If so, how?
    - Or can I set the backup to work without it? If so, how?

    Thanks for your help; I haven't been able to find the answer anywhere on the net so far.

    Read the article

  • Saving backup files automatically in (g)Vim after saving a file.

    - by Somebody still uses you MS-DOS
    I had a problem with my gVim: I lost some important modifications after my machine came back from a failed hibernation. To avoid this kind of problem, I would like to know if it's possible to add something to my .vimrc (or a plugin) that automatically backs up every save made to my files. Disk space is not an issue; I can delete the files afterwards. I'm already using:

        set backup
        set backupdir=~/.backup/vim
        set directory=~/.swap/vim

    This creates a myfile.extension~ in my ~/.backup/vim, but I would like this configuration to add ~ to the first save, ~0 to the second, ~1 to the third, ~2 to the fourth, and so on -- something that keeps copies of all modifications I made to a file. Is this possible? Do you know if there's a plugin for this?
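    A commonly suggested sketch for this, kept in Vim script since it extends the settings above, is to vary 'backupext' before every write so each save lands in its own backup file (the timestamp format is a choice, not a requirement, and gives timestamped rather than numbered suffixes):

        " give every write its own backup file, e.g. myfile.txt~20120301154512~
        autocmd BufWritePre * let &backupext = '~' . strftime('%Y%m%d%H%M%S') . '~'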

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and with the NetApp root), and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials, and have also tried setting different accounts for them. The NDMP debug level is at 5 and this is what shows up in /etc/messages; the session is closed immediately after it has been granted:

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running wireshark on the backup server doesn't produce much: it shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The resource credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All", it fails; if I use my regular domain backup account, it succeeds. There are no failed or successful logons in the NetApp ndmp log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think this is more likely flaky BE behaviour than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off

    Read the article

  • r1soft agent is failing with the error: "write error while sending code: Broken pipe"

    - by curiousguy
    I have an Ubuntu 10.04.4 LTS server with the r1soft agent installed. Recently, backups have been failing with the following error:

        write error while sending code: Broken pipe

    I have reinstalled the buagent, but to no avail. Checking the server logs, I see the following:

        # tail -f /var/log/messages | grep -i buagent
        Nov 17 03:35:06 microscope buagent: Need to back up 126 sectors
        Nov 17 03:35:06 microscope buagent: (Righteous Backup Linux Agent) 1.79.0 build 12433
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.195) with valid RSA key
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.201) with valid RSA key
        Nov 17 03:35:06 microscope buagent: sending auth challenge for allowed host at (10.128.136.201) port (47890)
        Nov 17 03:35:06 microscope buagent: host (10.128.136.201) port (47890) authentication successful
        Nov 17 03:35:06 microscope buagent: Backup request accepted. Starting backup.
        Nov 17 03:35:06 microscope buagent: Snapshot completed in 0.010 seconds.
        Nov 17 03:45:03 microscope buagent: Error reading blocks from snapshot.
        Nov 17 03:45:03 microscope buagent: Reading blocks failed
        Nov 17 03:45:03 microscope buagent: error backup aborted
        Nov 17 03:45:03 microscope buagent: backup failed on agent closing connection
        Nov 17 03:45:03 microscope buagent: Backup failed.
        Nov 17 03:45:03 microscope buagent: write error while sending code: Broken pipe (32)
        Nov 17 03:45:03 microscope buagent: tell child write failed

    I tried changing the 'Timeout' and 'DiskAsPartition' values in the /etc/buagent/agent_config file, but no luck. I have also verified that a proper route to the backup server is in place, and the agent itself is running fine. Am I missing anything? Any help would be much appreciated. Note: CDP 2.0 is installed on the backup server.

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which is then backed up to tape nightly:

    - 8pm Mon-Fri: full backup, followed by a log backup
    - 7am-7pm Mon-Fri, at half-hour intervals: log backup

    Our backups have worked in this manner since we migrated from SQL Server 2000 Standard to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week we have no problems. I guess that issue may have something to do with the size of the log backup attempted after a weekend of no backups. Now onto the issue I need a fix for: all this week, every full backup of our two biggest databases has failed (both backups < 1GB compressed). There is plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the time it takes to complete the backups of these databases and/or the size of the backup files required. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the job history, I can find a few hints as to what the problem could be. The code is always 0xC002F210, with a mix of the following descriptions:

        The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.
        The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.

    Please help save my hair and sanity!!

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and that a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem, but the approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic, enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market; with its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle uses these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
For example, a basic MDM solution will need:

- a set of data access methods that support master data as a service, as well as direct real-time access, batch loads and extracts;
- a data migration service for initial loads and periodic updates;
- a metadata management capability for items such as business entity matrixed relationships and hierarchies;
- a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
- a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
- a set of data quality functions that can manage structured and unstructured data;
- a data quality interface to assist with preventing new errors from entering the system even when data entry happens outside the MDM application itself;
- a continuing data cleansing function to keep the data up to date;
- an internal triggering mechanism to create and deploy change information to all connected systems;
- a comprehensive role-based data security system to control and monitor data access and update rights, and to maintain change history;
- a flexible business rules engine for managing master data processes such as privacy and data movement;
- a user interface to support casual users and data stewards;
- a business intelligence structure to support profiling, compliance, and business performance indicators;
- and an analytical foundation for directly analyzing master data.

Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years; Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture it enables.

    Read the article

  • DNASTREAM’s RapidLaunch Oracle Accelerate solution for RightNow

    - by Richard Lefebvre
    The Oracle RightNow Accelerate solution from DNASTREAM allows each customer to enjoy quicker deployment and earlier time to benefit from this SaaS customer experience solution. At the start of the project, a full suite of e-learning simulations and materials is provided by DNASTREAM to match the customer's processes. This RapidLaunch content library for RightNow can be leveraged by customers early in their project implementations, bringing significant cost efficiencies, time reduction and improved user adoption to their roll-outs. Solution profile: this Oracle Accelerate solution is based on Oracle RightNow CX and includes content management, contact management, incident management, a customer portal, a closed-incident survey, and standard reports. The Oracle RightNow CX Chat implementation is available as an additional option. For more information about RightNow and the DNASTREAM Accelerate solution, visit the Oracle Accelerate microsite or contact www.dnastream.com.

    Read the article

  • Emailing Interviewer after interview regarding technical solution

    - by Raghav Shankar
    I had an interview yesterday where I was given a programming problem and asked to find the optimal solution. I gave a solution that ran in linear time but used two (non-nested) loops. At the end of the interview, the interviewer saw that I was interested in solving the problem, so he told me the optimal solution uses only one loop with linear complexity, and when I asked for his card he gave me one. I think I might now have figured out that solution, and I was wondering: is it alright to email the interviewer, thanking him for his time and also mentioning the solution I came up with?

    Read the article

  • Looking for Non Hosted Audio & Video Podcasting Solution for Church Websites

    - by motboys
    I am looking for a solution that will do the following:

    - User uploads audio and/or video files with a title, description, image, etc.
    - Solution embeds the info into ID3 tags
    - Solution generates an RSS feed
    - Solution embeds the new content in our website
    - Content on the website is searchable

    This is for a couple of church websites I manage. I am looking for the ability to do the above with a sermon MP3 and also with a video. At the moment we are doing it in multiple steps with multiple people involved, and I want to automate the process. I can't seem to find a solution that does all of the above. Thank you!
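    No single tool is implied by the question, but the ID3 step on its own is scriptable. A minimal sketch using the id3v2 command-line tagger, with title, artist and file names as placeholders:

        # write title/artist/album tags into the uploaded sermon file
        id3v2 --song "Sunday Sermon" --artist "Grace Church" --album "Sermons 2012" sermon.mp3
        # inspect the tags that were written
        id3v2 --list sermon.mp3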

    Read the article

  • How do I image my hard drive for backup? Or do I just need to back up the files?

    - by NoCatharsis
    I have never imaged a hard drive, so I don't know what software to use or how to prepare my system for imaging. Is this the best way to back up? In the past, I've always just kept a copy of my important files on an external drive, and in Gmail or Dropbox for smaller stuff, but it would be nice to just take one image and restore from it if something ever goes wrong. Thanks for the help. EDIT: I'm sorry, I forgot to mention the OS. I would like to do this for my home and work computers, which run Vista and XP respectively. Actually, I'm about to upgrade to Windows 7 at home, so details on that would be appreciated too.

    Read the article

  • C# Solution - How many projects?

    - by Oskar Kjellin
    Hey, I googled this a little but couldn't find a good answer. Right now I'm building a web site, and I'm trying to make it as correct as possible from a design point of view from the beginning. The problem I'm now facing is that when I decided to add logging, I needed a project to place the code in. As I could not find a suitable place in my current projects, I thought: hey, why not a logging class library? Is there a general guideline on how many projects a solution should have? I know this would be a rather small project, but it would be nice to get it entirely out of my way! Any hints are appreciated :)

    Read the article
