Search Results

Search found 9847 results on 394 pages for 'cloud backup'.


  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): a friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but that's water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoided this whole process?)

    The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe-data-and-restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old", was right, and therefore the SMSes, calendar entries, etc. were discarded?

    As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • Which cloud service is best? Can you rank them?

    - by mathew
    Hi, there are many cloud services out there, but the well-known ones are listed below. Can anyone tell me which one ranks first, second, third, and so on in terms of cost, performance, and reliability? 1) Amazon 2) Rackspace 3) SoftLayer 4) VPS.net I don't know the others, actually; I am only looking for Linux services. Mathew

    Read the article

  • Creating a Linux cluster/cloud to act as a server

    - by Kavon Farvardin
    I have about four or five machines in the Pentium 3-4 era and I'm interested in creating a Linux server comprised of these machines. The server's main purposes would be to host several low-medium traffic websites/services (voice and game), and share terabytes of data on a local network. I could probably throw together one modern computer as a server and call it a day, but I'm interested in using these machines to do it instead. Where would I get started in this cluster/cloud setup?

    Read the article

  • How to run your own cloud network at home?

    - by Xeoncross
    I have 7 PCs around the house that total more than 10 GHz in CPU power. I was wondering if I could put them to use building my own "cloud computing" network to help with rendering videos, Blender animations, Photoshop effects, or anything else. Does something like this exist? If so, what is it called and where do I find it?
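    For the rendering part specifically, even without dedicated cluster software you can fan work out over SSH. A minimal sketch (hostnames, the .blend file path, and the frame ranges are hypothetical; it assumes Blender is installed on each host and the project file is reachable at the same path, e.g. via an NFS share):

      #!/bin/sh
      # Render frames 1-300 of scene.blend, 100 frames per machine.
      hosts="pc1 pc2 pc3"
      start=1
      chunk=100
      for h in $hosts; do
          end=$((start + chunk - 1))
          ssh "$h" "blender -b /shared/scene.blend -s $start -e $end -a" &
          start=$((end + 1))
      done
      wait   # block until every remote render finishes

    Dedicated schedulers (render-farm managers, HTCondor-style batch systems) do the same thing with queueing and failure handling on top.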

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    Ok, we are at a loss here trying to back up a Linux box to a Backup Exec server... we have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers Linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like: BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found. Looking at the selection list, the symlink shows as a 1k file on BUE. Tools-Options-Backup has "Backup files and directories by following symbolic links/junction points" selected. These same checkboxes are selected on Job Setup-Job Properties-Edit Template-Advanced. Additionally, all the checkboxes are checked on Tools-Options-Linux, Unix, and Macintosh and on Job Setup-Job Properties-Edit Template-Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.
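    While waiting for a proper fix on the Backup Exec side, it can help to see exactly which links the agent is tripping over and whether their targets still resolve. A minimal sketch (the selection root is a placeholder):

      #!/bin/sh
      # List every symbolic link under the backed-up tree and flag the
      # ones whose target no longer exists (dangling links are a common
      # reason a backup tool reports a selected file as "not found").
      SELECTION=/data            # hypothetical root of the selection list
      find "$SELECTION" -type l | while read -r link; do
          if [ -e "$link" ]; then
              printf 'OK      %s -> %s\n' "$link" "$(readlink "$link")"
          else
              printf 'BROKEN  %s -> %s\n' "$link" "$(readlink "$link")"
          fi
      done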

    Read the article

  • Oracle’s New Approach to Cloud-based Applications User Experiences

    - by Oracle OpenWorld Blog Team
    By Misha Vaughan

    It was an exciting Oracle OpenWorld this year for customers and partners, as they got to see what their input into the Oracle user experience research and development process has produced for cloud-delivered applications. The result of all this engagement and listening is a focus on simplicity, mobility, and extensibility. These were the core themes across Oracle OpenWorld sessions, executive roundtables, and analyst briefings given by Jeremy Ashley, Oracle's vice president of user experience. The highlight of every meeting with a customer featured the new simplified UI for Oracle's cloud applications. Attendees at some sessions and events also saw a vision of what is coming next in the Oracle user experience, and they gave direct feedback on whether this would help solve their business problems.

    What did attendees think of what they saw this year? Rebecca Wettemann of Nucleus Research was part of an analyst briefing on next-generation user experiences from Oracle. Here's what she told CRM Buyer in an interview just after the event: "Many of the improvements are incremental, which is not surprising, as Oracle regularly updates its application," Rebecca Wettemann, vice president of Nucleus Research, told CRM Buyer. "Still, there are distinct themes to this latest set of changes. One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually—the intuitive nature of the application and the design work they have done with this goal in mind. The software uses as few buttons and fields as possible," she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is."

    What else did we hear? Oracle OpenWorld is a time when we can take a broader pulse of our customers' and partners' concerns. This year we heard some common user experience themes on the following:

    · A desire to continue to simplify widely used self-service tasks
    · A need to understand how customers or partners could take some of the UX lessons learned on simplicity and mobility into their own custom areas and projects
    · The continuing challenge of needing to support bring-your-own-device and corporate-provided mobile devices to end users
    · A desire to harmonize user experiences across platforms for specific business-use cases

    What does this mean for next year? Well, there were a lot of things we could only show to smaller groups of customers in our Oracle OpenWorld usability labs and HQ lab tours, to partners at our Expo, and to analysts under non-disclosure agreements. But we used these events as a way to get some early feedback about where we are focusing for the year ahead. Attendees gave us a positive response:

    @bkhan Saw some excellent UX innovations at the expo "@usableapps: Great job @mishavaughan and @vinoskey on #oow13 UX partner expo!"
    @WarnerTim @usableapps @mishavaughan @vinoskey @ultan Thanks for an interesting afternoon definitely liked the UX tool kits for partners.

    You can expect Oracle to continue pushing themes of simplicity, mobility, and extensibility even more aggressively in the next year. If you are interested to find out what really goes on in the UX labs, such as what we are doing with smartphones, tablets, heads-up displays, and the AppsLab robots, feel free to reach out to me for more information: Misha Vaughan, or on Twitter: @mishavaughan.

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tapes via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle Secure Backup (OSB). Other storage managers like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB) are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If we are using tapes for backup, ensure that the tape library and tape drives are compatible.

    Test Setup
    1. Install NB 7.5 master and media servers in Linux OS. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also in Linux OS.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.

    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console. That is, configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed. Sometimes this will be a different host from the one where MEB is run, and sometimes the same media server where NB and the tapes are attached.

    Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.

    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: Path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64
    --sbt-environment: Environment variables must be defined specific to NetBackup. In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/

    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    Once the backup is completed successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

    Now apply logs as usual, shut down the server and perform the restore, then restart the server and check the data contents.

    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log

    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

    The NB console should show the "Restore" job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of MEB and NB integration in the whitepaper here
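    The backup-to-image invocation above is long enough to be easy to mistype, so a small wrapper can keep the NetBackup-specific pieces in one place. A minimal sketch built only from the values used in this post (adjust the paths, port, and policy name for your own site):

      #!/bin/sh
      # Stream a MySQL Enterprise Backup image to NetBackup over SBT.
      SBT_LIB=/usr/openv/netbackup/bin/libobk.so64
      SBT_ENV="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/"
      BKP_DIR=/export/home2/tmp/hema/MEB_bkdir/

      ./mysqlbackup --port=13000 --protocol=tcp --user=root \
          --backup-image=sbt:bkpsbtNB \
          --sbt-lib-path="$SBT_LIB" \
          --sbt-environment="$SBT_ENV" \
          --backup-dir="$BKP_DIR" \
          backup-to-image || { echo "mysqlbackup failed" >&2; exit 1; }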

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups, their size, and the window in which you have to back up the data.

    What we are working with:
    VMware vSphere 4.1 cluster
    PS4000XV EqualLogic storage array (1.6TB volume dedicated to backup-to-disk)
    Physical backup server with a single LTO4 drive
    Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are at currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM.

    I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
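    LVM snapshots are taken per logical volume, so there is no single atomic snapshot covering both /usr and /var. One common mitigation is to briefly block writes on all the filesystems involved, take the snapshots while nothing can change, and then thaw them. A minimal sketch (the volume group name, LV names, and snapshot sizes are placeholders; it assumes / itself is not among the frozen volumes):

      #!/bin/sh
      # Briefly quiesce /usr and /var so both snapshots see the same
      # moment in time, then resume writes.
      fsfreeze -f /usr
      fsfreeze -f /var
      trap 'fsfreeze -u /var; fsfreeze -u /usr' EXIT
      lvcreate -s -L 2G -n usr_snap /dev/vg0/usr
      lvcreate -s -L 2G -n var_snap /dev/vg0/var
      # The trap thaws both filesystems when the script exits; back up
      # from the snapshots afterwards, e.g.:
      #   mount -o ro /dev/vg0/usr_snap /mnt/usr_snap
      #   ... copy data, then lvremove -f vg0/usr_snap vg0/var_snap

    The freeze window is only as long as the two lvcreate calls, which narrows (but does not fully eliminate) the race described above.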

    Read the article

  • Force CloudFront distribution/file update

    - by Martin
    I'm using Amazon's CloudFront to serve static files for my web apps. Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed? Amazon recommends that you version your files like logo_1.gif, logo_2.gif and so on as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
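    For what it's worth, CloudFront does support object invalidation these days, so a single cached file can be refreshed without renaming it. A minimal sketch with the AWS CLI (the distribution ID and path are placeholders, and invalidation requests beyond the free monthly allowance are billed per path):

      #!/bin/sh
      # Ask CloudFront to expire one cached object across all edge locations.
      aws cloudfront create-invalidation \
          --distribution-id E1ABCDEF234567 \
          --paths "/images/logo.gif"
      # The wildcard path "/*" invalidates everything in the distribution.

    Versioned file names remain the cheaper option for assets that change very often, since they avoid invalidation requests entirely.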

    Read the article

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx

    Continuous integration (CI) is important. If you don't have it set up... you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you're a business, that generally isn't a problem. However, if you have some open source projects hosted, for example on GitHub, there haven't really been any options. That has changed with the latest release of AppVeyor, which bills itself as "Continuous integration for busy developers."

    What's different about AppVeyor is that it's a hosted solution. Why is that important? By being a hosted solution, it means that I don't have to maintain my own infrastructure for a build server. How does that help if you're hosting an open source project? AppVeyor has a really competitive pricing plan: for an unlimited number of public repositories, it's free. That gives you a cloud-hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn't hard to do at all.

    I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my password for GitHub into AppVeyor. Once it was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add "build badges" to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build. Out of the box, you can simply select a repository, add the build project, click New Build and wait for the build to complete. You now have a complete CI server running for your project.

    The best part of this, besides the fact that it "just worked" with almost zero configuration, is that you can configure it through a web-based interface (which is very streamlined, clean and easy to use) or through an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (the YAML format) and store it in your repository. The benefits to that are huge. The file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project. By the way, AppVeyor isn't limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

    I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I've provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it's easily been one of the best customer support experiences I've seen in a long time.

    AppVeyor is still young, so it doesn't yet have full feature parity with some of the older (more established) CI systems available, but it's getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you're looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder whether the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime).

    In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race of GFX giants ATI and NVIDIA to beat each other for better frame and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle power in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring over-clocking, optimized hardware cooling... etc. to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones with a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file-sharing networks.

    Then came the era of the Xbox and PS3s. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is... PC gamers deserve an equally democratized solution.

    This is where I think cloud computing can come to the rescue. It can minimize hardware requirements, virtually end software piracy, and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on...

    Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back on the faces of budget gamers with the help of cloud computing.

    Read the article

  • Set up and configure an MVC4 project for Cloud Service (web role) and SQL Azure

    - by MagnusKarlsson
    I aim at keeping this blog post updated and will add related posts to it. Since there are a lot of these out there, I link to others that have done kind of the same before me - kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stack trace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people who find this blog can see multiple solutions for indexed stack traces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.

    The steps:
    1. Set up the project in VS2012. (msdn blog)
    2. Set up the Azure services. (half of the mpspartners.com blog)
    3. Set up connection strings and configuration files. (msdn blog + notes)
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.

    1. Set up the Visual Studio project:
    http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package:
    http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files:
    http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
    Trying to package our application at this step will generate the following warning:
    3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.
    Right-click the web role under Roles in the solution manager, choose Properties, and select "Service configuration: Cloud". At "Specify storage account credentials" we will copy/paste our account name and key from the Azure management portal. (Figure 3.1)

    4. Right-click Remote Desktop Configuration, select the certificate and export it to a file. We need to allow it in the management portal. (Figure 4.1)

    5. Now right-click the cloud project and select Package. (Figure 5.1: dialogue box; Figure 5.2: package success.) Copy the path to the packaged file and go to the management portal again. Click your web role, choose staging (or production), and upload. (Figure 5.3) Tick the box about the single instance if that's what you want or you don't know what it means; otherwise the failure shown below will happen. (Figure 5.4: dialogue box.) When you have clicked the accept button you will see a screen with some green indicators down at the right corner; click them if you want to see status. (Figure 5.5: information screen.)
    Figure 5.6: "Failed to deploy application. The uploaded application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go back to the step at Figure 5.4.
    If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. (Figure 5.7) A side note: the following thread suggests "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package", but in my case it was the not-so-present certificates. I found the solution here:
    http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd
    (Figure 5.8: Success! Figure 5.9: Nice URL and all. More on that in another blog post.)

    6. If you try to log in and get an error, many web sites suggest this is because you need:
    http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB
    or:
    http://nuget.org/packages/Microsoft.AspNet.Providers
    But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the ordinary project in your solution, then go to the management portal and click update.

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an off-site backup which is filled up with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to which option is best to proceed with from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server which is an exact duplicate of the first server? Then, if the first server is down, users are re-routed to the second server without noticing the first server is even down. This would create an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is the above a true alternative to pricey backup, or is offsite backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so just links to some reading material or the terminology of the procedure would be great)? Appreciate the help and advice.
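    For the data-synchronization half of the duplicate-server idea (keeping the second box an up-to-date copy of the first), a periodic rsync over SSH is the usual starting point. A minimal sketch (hostnames, paths, and the excluded directory are placeholders; the re-routing half is a separate problem, typically handled with DNS failover, a floating IP, or a load balancer):

      #!/bin/sh
      # Mirror the primary server's data to the standby.
      # Run from cron on the primary, e.g. every 15 minutes.
      rsync -az --delete \
          --exclude='cache/' \
          /srv/www/ backup-user@standby.example.com:/srv/www/ \
          || logger -t mirror "rsync to standby failed"

    The terms to search for when reading up on this setup are "hot standby", "failover", and "high availability". Note that a live mirror is not a substitute for real backups: a deletion or corruption on the primary is faithfully replicated to the standby on the next sync.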

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator to monitor the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we'll highlight some of the new information that's on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout that includes 3 tabs containing more drill-down information.

    The Repository Tab
    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later yourself on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability:
    · Is the database in Archive Log mode?
    · Is the database using Flashback?
    · When was the last database backup taken?
    In this test environment the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups taken.

    The next region of interest to note on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but there's some important new functionality that's been added that customers have requested. First up: restarting Repository jobs. As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of this directly from within the UI. Next, you'll see that a feature has been added to allow the EM Administrator to customise the run time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this.

    The Metrics Tab
    Moving onto the next tab, let's select the Metrics tab. There are some big changes here; this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and the impact this has on the Repository. This page helps the EM Administrator get to grips with this. Let's take a quick look at two regions on this page.

    First up there's a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates a relative volume. A quick glance at this example shows that Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind this, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections are around 0.7Mb of data, while the total information collection for WebLogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want this breakdown of the volume of data collected, simply hover over the bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of number of rows, number of collections and storage cost of data for each metric type.

    Looking at another panel on this page we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) and sorts them by volume of data. In the case above we can see the largest metric collection (by volume) over the last 30 days is the information about OS Registered Software on a Host target.

    Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing most load, if appropriate.

    The Schema Tab
    The last tab we'll look at on this page is the Schema tab. Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users - not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here showing that storage in the EM Repository has been consumed at an increasing rate over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it's also been a long-requested feature to have the ability to modify these default retention periods, and you can also do this using this screen. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages.

    As a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages!
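    The Repository Details checks called out above (archive log mode, Flashback, last recorded backup) can also be confirmed directly against the repository database. A minimal sketch run on the repository host as the database owner (this queries the standard V$ views, not any EM-specific API):

      #!/bin/sh
      # Quick health check of the EM repository database.
      {
          echo "set linesize 120"
          echo "-- Archive log mode and Flashback status"
          echo "select log_mode, flashback_on from v\$database;"
          echo "-- Most recent backup recorded in the control file"
          echo "select max(completion_time) as last_backup from v\$backup_set;"
          echo "exit"
      } | sqlplus -s / as sysdba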

    Read the article

  • Progress 4GL and DB to Oracle and cloud

    - by llaszews
    Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy. The least risky and least expensive option (in the short term) is to use the Progress OpenEdge DataServer for Oracle: Progress OpenEdge DataServer. This eliminates the need to migrate the Progress 4GL to Java/J2EE. The database can be migrated using Ispirer SQLWays: Ispirer SQLWays Progress DB migration tool. The Progress 4GL can remain as is. In order to get the application onto the cloud there are a few approaches:

    1. VDI - Virtual Desktop Infrastructure is a way to move all of the users' desktops off the physical desktop into a centralized environment. This is great in cases where it is not just one client/server application that the user needs access to. In many cases, users will utilize MS Access, MS Excel, Crystal Reports and other tools to get at the Progress DB and other centralized databases. VMware's acquisition of Wanova shows how VDI is growing in usage. Citrix is the 800-pound gorilla in the VDI space with Citrix WinFrame (now called XenDesktop). Oracle offers a VDI solution that Oracle picked up when it acquired Sun.

    2. Hypervisor server virtualization - Of course you can host applications written in client/server languages like Progress 4GL by using server virtualization from Oracle, VMware, Microsoft, Citrix and others.

    3. Microsoft Remote Desktop Services (aka Terminal Services Client).

    The entire idea is to eliminate all the client/server desktop devices and connections which require desktop software and database drivers. A solution for removing database drivers from the desktop is to use DataDirect SQLLink

    Read the article
