Search Results

Search found 11313 results on 453 pages for 'calavera info'.


  • Oracle Enterprise Manager Ops Center 12c is now available for download at Oracle Technology Network

    - by Anand Akela
    Oracle Enterprise Manager Ops Center 12c is now available for download at the Oracle Technology Network (OTN). Visit the Oracle Enterprise Manager Ops Center web page at the Oracle Technology Network. Join the Oracle launch webcast "Total Cloud Control for Systems" on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson, and a panel of Oracle executives. Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • CIC 2010 - Ghost Stories and Model Based Design

    - by warren.baird
    I was lucky enough to attend the Collaboration and Interoperability Congress recently. The location was beautiful and interesting: the congress was held in the mountains about two hours outside Denver, at the Stanley Hotel, famous both for inspiring Stephen King's novel "The Shining" and for attracting a lot of attention from the "Ghost Hunters" TV show. My visit was prosaic - I didn't get to experience the ghosts the locals promised - but interesting, with some very informative sessions. I noticed one main theme: a lot of people were talking about Model Based Design (MBD), which is moving design and manufacturing away from 2D drawings and towards 3D models. 2D has some pretty deep roots in industrial manufacturing, and there have been a lot of challenges encountered in making the leap to 3D. One of the challenges discussed in several sessions was how to get model information out to the non-engineers in the company, which is a topic near and dear to my heart. In the 2D space, people without access to CAD software (for example, people assembling a product on the shop floor) can be given printouts of the design - it's not particularly efficient, and it definitely isn't very green, but it tends to work. There's no direct equivalent in the 3D space. One of the ways that AutoVue is used in industrial manufacturing is to provide non-CAD users with an easy-to-use, interactive 3D view of their products. In some cases it's directly used by people on the shop floor, but in cases where paper is really ingrained in the process, AutoVue can be used by a technical publications person to create illustrative 2D views that can be printed and that show all of the details necessary to complete the work. Are you making the move to model based design? Is AutoVue helping you with your challenges? Let us know in the comments below.

    Read the article

  • Cutting Paper through Visualization and Collaboration

    - by [email protected]
    It's hard not to hear about "Going Green" these days. Many of us are working to be more environmentally conscious in our personal lives, and this has extended to the corporate world as well; I know I'm always looking for new ways to contribute. Environmental responsibility is important at Oracle too, and we have an entire section of our website dedicated to our solutions around the Eco-Enterprise. You can check it out here: http://www.oracle.com/green/index.html
    Perhaps the biggest and most obvious challenge in the world of business is the fact that we use so much paper. There are many good reasons why we print today, too. For example:
    - Printing off a document, spreadsheet, or CAD design that will be reviewed and marked up while on a plane
    - Having a printout of a facility when a field engineer performs on-site maintenance
    - During a multi-party design review, where a number of people review a drawing in a meeting room, scribbling their collaborative comments onto a large-scale print
    These are just a few potential use cases, and they're valid ones. However, there's a way to turn these paper processes into digital ones. AutoVue allows you to view, mark up, and collaborate on all the data you would otherwise print; indeed, this is the core of what AutoVue does. So if we take the examples above, we could address each as follows:
    - AutoVue lets you view the document, spreadsheet, or CAD drawing on your laptop. Even if the data is vaulted in some type of system of record (like an ECM solution) and you normally view it from there, AutoVue's "Work Offline" mode lets you take the documents you need to review with you. From there, the many annotation tools in AutoVue give you what you need to comment on the documents you are reviewing.
    - The challenge with the mobile workforce is always access to information. People who perform maintenance and repair operations are often in locations with little to no Internet connectivity. However, technology is coming to these people in the form of laptops, tablet PCs, and other portable devices. AutoVue addresses situations with limited bandwidth through its streaming technology for viewing, meaning the most up-to-date information can be pulled from the central server without the need for large data transfers. When there is no connectivity at all, the "Work Offline" option handles it.
    - For a design review session, the Real-Time Collaboration capabilities of AutoVue let all the participants view the same document in a synchronized view, with each person able to mark up the document at the same time. Since this is done in a web-based manner, not only is printing the document unnecessary, but you also reduce the travel needed for these sessions. Not only are trees spared, but jet fuel as well.
    There are many steps involved in "Going Green", but each step is a necessary one. What we do today will directly influence future generations, and we're looking to help.

    Read the article

  • Catch Up on Your Reading

    - by [email protected]
    AutoVue 20.0 was a major release which included many new features and enhancements. We eagerly shared the news with members of the media, who in turn wrote about AutoVue enterprise visualization in various online articles. Here is a summary of the articles featuring AutoVue 20.0. Happy reading!
    - "Oracle Unveils AutoVue 20.0" - Desktop Engineering; April 5, 2010
    - "Oracle Upgrades Document Visualization Tool" - Managing Automation; April 5, 2010
    - "Oracle's AutoVue 20.0 Enhances Visual Document Collaboration" - CMS Wire; April 6, 2010
    - "Oracle Turns Attention to Project and Document Management" - Channel Insider; April 7, 2010
    - "Oracle Unveils AutoVue 20.0" - Database Trends and Applications; April 7, 2010

    Read the article

  • OEPE with ADF binding support available: Total Eclipse

    - by Frank Nimphius
    The current release of Oracle Enterprise Pack for Eclipse, though a technology preview, brings Oracle ADF binding to the Eclipse IDE. You can download the software from the link below: Oracle Enterprise Pack for Eclipse (12.1.1.1.0) Technical Preview - New June 2012. Certified on Windows 7/XP/Vista, MacOS, and Linux. Supported on JDK 6. For many Eclipse users, ADF is new, and therefore I expect them to need guidance and help in case they run into issues they don't know how to recover from. Similarly, ADF users familiar with Oracle JDeveloper who want to give OEPE a try will find things different in Eclipse and thus may have questions. For both audiences I suggest posting issues to the OEPE forum on the Oracle Technology Network. I'll extend my OTN monitoring to include the OEPE forum on a daily basis to learn about developer needs and requirements and - of course - to catch bugs that need to be filed. From my side this is a part-time involvement, which means that the more ADF questions show up on the forum, the more help I could need in answering them. The OTN forum for JDeveloper, in my opinion, wouldn't be the right place to go unless the question is a generic ADF question that does not depend on the integration with Eclipse. Here's the OEPE forum link for a start: https://forums.oracle.com/forums/forum.jspa?forumID=578 Frank

    Read the article

  • POP Culture

    - by [email protected]
    When we hear the word POP, we normally think of a soft drink or a soda, while for others it might be their favourite kind of music. In my case, it's the sound my knee makes when I bend down. Within Oracle though, when we talk about POP, we are referring to the Partner Ordering Portal. The Partner Ordering Portal, or POP as we like to call it, provides AutoVue Partners with a method to submit their orders online. POP offers Partners up-to-date pricing and licensing information; efficient order processing, as most data is validated on screen, thereby reducing errors and enabling faster processing; and online order status and tracking. POP is not yet available in every country, but it is available in most. Click here to check out the POP home page (OPN login information required) to see if your country of business is eligible to use POP and for access to creating an account, watching instructional training viewlets, and more.

    Read the article

  • Oracle's Thirteen Engineered Systems

    - by Luis Moreno Campos
    You already need a catalogue to keep up with all the new systems coming out of the Oracle engineered-systems factory. In the Exadata portfolio you have 4 systems:
    - Quarter-Rack X2-2 Database Machine
    - Half-Rack X2-2 Database Machine
    - Full-Rack X2-2 Database Machine
    - X2-8 Database Machine
    But if Exadata presents a stunning portfolio, Exalogic doesn't fall behind, putting out 6 versions: 3 sizes (Quarter, Half and Full) with x86 processors, and the same 3 sizes with SPARC-based processors. Finally, we have 3 new systems called SPARC SuperClusters, where Solaris 11 was re-engineered to take more out of the power of InfiniBand: "Available in the next calendar year, the Oracle SPARC Supercluster will be available in T3-2, T3-4 and M5000-based configurations". I see Oracle delivering on its promise to tightly integrate hardware and software to work closer together.

    Read the article

  • Grab your popcorn and watch the latest AutoVue Movie. Now available on YouTube!

    - by [email protected]
    Just released: the Oracle AutoVue Visualization Solutions and Primavera P6 integration movie. Watch it now (9:24). This is sure to be a box office sensation. And if you have time for a double, triple or even quadruple feature, don't forget these other AutoVue movies available on YouTube:
    - AutoVue Work Online and Offline movie - Watch it now (5:01)
    - AutoVue 3D Walkthrough movie - Watch it now (6:01)
    - AutoVue 2D Compare movie - Watch it now (4:14)
    Enjoy the movies.

    Read the article

  • Watch the AutoVue release 20.0 Webcast - April 27 at 12pm EST

    - by [email protected]
    Join our live Webcast on Tuesday, April 27th, 2010 to discover how AutoVue release 20.0 can help you to:
    - Improve technical and business decision-making with visual access to accurate, in-context information
    - Increase operational efficiency by integrating and visually enabling existing enterprise systems
    - Drive innovation by enhancing enterprise-wide document collaboration capabilities
    - Mitigate project risk with a reliable audit trail of changes and approvals
    Click here to register for the Webcast.

    Read the article

  • Improve Engineering & Construction Project Productivity

    - by [email protected]
    Driving successful project delivery and providing greater value and return for all stakeholders are key goals for firms in the engineering and construction industry. However, increasingly complex construction projects, compressed schedules, ineffective collaboration among project stakeholders, and limited interoperability can get in the way of these goals and lead to reduced productivity. What E&C firms need are solutions that will improve global team collaboration, optimize processes, and better communicate electronic project data. Check out the AutoVue for Engineering and Construction Solution Brief and learn how AutoVue enterprise visualization solutions can:
    - Improve global project collaboration and communication
    - Improve data interoperability
    - Support virtual design and construction projects
    - Improve change management and maintain accountability

    Read the article

  • Why partners should visit the new AutoVue Knowledge Zone

    - by [email protected]
    Learn more about AutoVue and connect with your peers to distinguish your offerings, seize opportunities, and increase your sales! Explore the latest releases, integration solution offerings, marketing assets, partner enablement tools, events, and the latest partner initiatives by clicking through the tabs: Why Partner, Develop, Sell, and Connect. Knowledge Zones are designed to accelerate partners' knowledge of Oracle solutions, as well as provide new opportunities to collaborate with the entire Oracle partner ecosystem. The AutoVue Knowledge Zone, launched in March 2010, is continuously updated with the latest information to better equip and enable our partners to sell AutoVue solutions. Get all the information you need to convert your sales opportunities into wins. Check out and bookmark the AutoVue Knowledge Zone now!

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database CloneDB feature)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements
    1. Comprehensive Database Service Catalog
    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium for providing standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:
    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]
    EM12c has come with an out-of-the-box service catalog and self-service portal since release 1. For customers, it provides the following benefits:
    - Presents a collection of standardized database service definitions
    - Defines standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup of chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used: Maximum Availability, Maximum Performance, and Maximum Protection
    - Log apply can be set to sync or async, along with the required apply lag
    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported with EM 12c R4):
    Primary | Standby [1 or more]
    SI      | -
    SI      | SI
    RAC     | -
    RAC     | SI
    RAC     | RAC
    RON     | -
    RON     | RON
    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.
    In release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), and over the years it has become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:
    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB
    The advantages of the new CloneDB integration with EM12c Snap Clone are:
    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle Database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self-service portal
    - Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
    - Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.
    To make the copy-on-write idea concrete, a rough sketch of the manual CloneDB mechanics that Snap Clone automates is shown below.
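    As a minimal sketch only: the manual CloneDB workflow (described in Tim Hall's article above, with the clonedb.pl script supplied via My Oracle Support) looks roughly like the following. All paths, names, and the init.ora file here are hypothetical, and EM12c Snap Clone performs the equivalent orchestration for you, so treat this as illustration rather than the exact steps EM12c runs.

        # Assumption: an RMAN image copy of the source database exists under
        # /backup/prod, and /clones/dev1 is an NFS location accessed via dNFS.
        export MASTER_COPY_DIR=/backup/prod          # read-only backup image copy
        export CLONE_FILE_CREATE_DEST=/clones/dev1   # sparse clone files land here
        export CLONEDB_NAME=DEV1                     # name of the thin clone

        # clonedb.pl generates two scripts:
        #   crtdb.sql - starts the clone instance with the clonedb=true parameter
        #   dbren.sql - calls dbms_dnfs.clonedb_renamefile() to map each sparse
        #               clone file onto the read-only backup copy (copy-on-write)
        perl clonedb.pl initPROD.ora crtdb.sql dbren.sql

    From there the clone opens read-write while physically sharing unchanged blocks with the backup image, which is where the storage savings come from.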
    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps, typically performed by different users across different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts, such as Roles, Administrators, Credentials, Database Profiles, the PaaS Infrastructure Zone, Database Pools, and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self-service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. The kit works both for Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further; the script for this scenario is dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle Home locations, or container database names that will be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is already known via the Exadata system target available in EM.
    - Input XML: this file captures the inputs for users, roles, profiles, service templates, and so on; essentially, all inputs required to define the DB services and other settings of the self-service portal.
    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:
    emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self-service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.
    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.
    A slew of emcli verbs has also been added that allows administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.
    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:
    - Custom naming of DB Services: self-service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of Service Templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile (datafiles, control files, snapshot IDs, export/import files, etc.) prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete-user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of the Database Plug-in (12.1.0.5.0), EM12c Release 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:
    - Upgrade of the complete cluster across the nodes (example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB)
    - Best practices in tune with your operations, where you can automate the upgrade in steps. Step 1: upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move on to the DBs). Step 2: upgrade the RAC DBs either separately or as a group (mass upgrade of the RAC DBs in the cluster).
    - Standard prerequisite checks like the Cluster Verification Utility (CVU) and RAC checks (an illustrative manual CVU invocation is sketched after this post)
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading the Clusterware to GI and upgrading RAC), thereby lowering the downtime required
    - Ability to configure backup and restore options as part of this upgrade process. You can choose to: (a) take a backup via this process (either a Guaranteed Restore Point (GRP) or RMAN); (b) set the procedure to pause just before the upgrade step to allow you to take a custom backup; (c) ignore backup completely, if external mechanisms are already in place.
    High-level steps:
    - Select the procedure "Upgrade Database" from the Database Provisioning home page
    - Choose the target type for the upgrade and the destination version
    - Pick and choose the cluster; it picks up the complete topology, since the Clusterware/GI isn't upgraded already
    - Select the Gold Image of the destination version for deploying both the GI and RAC OHs
    - Specify the new OH patches and credentials; choose the restore and backup options; if required, provide additional pre and post scripts
    - Set the break points in the procedure execution to isolate downtime activities
    - Submit and track the procedure's execution status
    The animation below captures the steps in the wizard. For the step-by-step process and the support matrix, check this documentation link. Explore the functionality!! In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.
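    As a point of reference, here is a minimal sketch of what a manual CVU pre-upgrade check looks like outside the wizard. The node names, home paths, and destination version are hypothetical; the EM12c procedure runs an equivalent check for you.

        # Run runcluvfy.sh from the staged destination grid software, as the grid owner
        ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
            -src_crshome /u01/app/11.1.0/crs \
            -dest_crshome /u01/app/11.2.0/grid \
            -dest_version 11.2.0.4.0 \
            -n node1,node2 -verbose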

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available

    - by Frank Nimphius
    "The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best-practices, which will join Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too.These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when [...]  designing exciting, new, highly usable applications -- in the cloud or on-premise.  Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business.  What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let us know in the comments section or use the contact form provided."

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions, and a whole lot of buzz on Oracle Cloud. It was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.
    I met a number of platform administrators at Oracle - DBAs, Middleware admins, SOA admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with about as much certainty as Heisenberg would allow. What happened to the age-old IT discipline around administration, compliance, and configuration management?
    VMs are great for what they are. I personally think they have opened the doors to new approaches by which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision a full-fledged application as VM Assemblies. At this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel went down from weeks to hours was gratifying indeed). However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:
    - Project owners requiring an EC2-like VM instance for their projects
    - Admins needing the same for SPARC/Solaris
    - DBAs requiring dedicated databases for new projects
    - APEX developers needing just a ready-to-consume schema as a service
    - Java developers looking for a runtime platform
    - QA engineers needing a fast clone of their production environment
    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production: you shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed.
    When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one could think that we are trying to be the "renaissance man"! Well, I would have possibly digested that had it not been for the various personas that I described above.
    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quotas, and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base!
    In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get started with EM 12c. http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html
    Sudip Datta
    The views expressed here are my own and do not necessarily reflect the views of Oracle.
    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter --

    Read the article

  • Problem getting GOBI 2000 HS to work

    - by Zypher
    I've been trying to get my integrated GOBI WWAN card to work under 10.10 for a while now. I was able to get the network manager to see the card after installing the gobi-loader package, and I was able to set up the connection, but I cannot establish a connection to Verizon. Below is the output from /var/log/daemon.log as I try to connect.
    Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Quiting due to inactivity
    Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Shutdown was requested
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) starting connection 'Verizon connection'
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 3 -> 4 (reason 0)
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 6 (reason 0)
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 6 -> 4 (reason 0)
    Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> CDMA connection failed: (32) No service
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 9 (reason 0)
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Marking connection 'Verizon connection' invalid.
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> Activation (ttyUSB0) failed.
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 9 -> 3 (reason 0)
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): deactivating device (reason: 0).
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.
    Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.

    Read the article

  • Free Advanced ADF eCourse, part II

    - by Frank Nimphius
    A second part of the advanced Oracle ADF online training is available for free. Lynn Munsinger and her team worked hard to deliver this second part of the training as close to the release of the first installment as possible. This two-part advanced ADF training provides you with a wealth of advanced ADF know-how and insight for you to adopt in a self-paced manner.
    Part 1 (though I am sure you have this bookmarked): http://tinyurl.com/advadf-part1
    Part 2 (the new material): http://tinyurl.com/advadf-part2
    Quoting Lynn: "This second installment provides you with all you need to know about regions, including best practices for implementing contextual events and other region communication patterns. It also covers the nitty-gritty details of building great looking user interfaces, such as how to work with (not against!) the ADF Faces layout components, how to build page templates and declarative components, and how to skin the application to your organization's needs. It wraps up with an in-depth look at layout components, and a second helping of additional region considerations if you just can't get enough. Like the first installment, the content for this course comes from Product Management. This 2nd eCourse compilation is a bit of a "Swan Song" for Patrice Daux, a long-time JDeveloper and ADF curriculum developer, who is retiring the end of this month. Thanks for your efforts, Patrice, and bon voyage!"
    Indeed: great job, Patrice! (and all others involved)

    Read the article

  • ADF Partner Community News Session - Open Invitation: "ADF as a basis of Fusion Apps - the biggest ADF project ever (in English)"

    - by Frank Nimphius
    After a successful guest performance by Ted Farrell in 2011, this year's international ADF speaker in an ADF News session is Chris Muir from Oracle.
    ADF News Session - Friday, September 14, 8:30 AM - 9:00 AM (CET) - Topic: ADF as a basis of Fusion Apps - the biggest ADF project ever (in English)
    +++ this webcast will be conducted in English +++
    Dial-in numbers concerning the ADF News Session, Sep. 14 2012. You are invited to join the next ADF News Session, which is going to take place on September 14, 2012:
    - speaker: Chris Muir / Oracle
    - time: 8:30 AM (CET)
    - duration: 30 minutes
    - topic: ADF as a basis of Fusion Apps - the biggest ADF project ever (in English)
    - dial-in webconf: https://oraclemeetings.webex.com
    - conf ID: 595 484 157
    - confkey: 123456
    Please enter your name and an abbreviation of your company name when dialing in (please don't use blanks or special characters). Please note that this information will be visible to all participants of the webcast. Thank you.
    - dial-in telco: +49 (0)69 2222 16 106 or +49 (0)800 66 485 15
    - ConfCode: 208 503 9
    - SecurityPasscode: 112233
    Other toll-free dial-in numbers for EMEA countries are listed below (information is supplied without liability):
    Austria 0800005967
    Belgium 080048331
    Croatia 0800222323
    Czech Republic 800701080
    Denmark 80889099
    Estonia 8000111325
    Egypt 08000000213
    Finland 0800112073
    France 0805632866
    Greece 00800127897
    Hungary 0680011201
    Iceland 8008779
    Ireland 1800932479
    Israel 1809452571
    Italy 800897629
    Latvia 80002397
    Luxembourg 80026598
    Netherlands 08000235028
    Norway 80010796
    Poland 8001213557
    Portugal 800814990
    Romania 0800895563
    Russia 81080029351012
    Saudi Arabia 8008444320
    Slovak Republic 0800001586
    Slovenia 080080466
    South Africa 0800980961
    Spain 800098600
    Sweden 856619465
    Switzerland 0800650026
    Turkey 00800 44632129
    Ukraine 0800500166
    United Arab Emirates 8000440344
    United Kingdom 08006948154

    Read the article

  • Using JRE 1.5, still Maven says annotations are not supported in -source 1.3

    - by Abhijeet
    Hi, I am using JRE 1.5. Still, when I try to compile my code, it fails and tells me to use -source 1.5 instead of 1.3:
    C:\temp\SpringExample>mvn -e clean install
    + Error stacktraces are turned on.
    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Building SpringExample
    [INFO]    task-segment: [clean, install]
    [INFO] ------------------------------------------------------------------------
    [INFO] [clean:clean {execution: default-clean}]
    [INFO] Deleting directory C:\temp\SpringExample\target
    [INFO] [resources:resources {execution: default-resources}]
    [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
    [INFO] Copying 6 resources
    [INFO] [compiler:compile {execution: default-compile}]
    [INFO] Compiling 6 source files to C:\temp\SpringExample\target\classes
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Compilation failure
    C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations)
    @Override
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.BuildFailureException: Compilation failure
    C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations)
    @Override
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
    at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
    at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
    at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure
    C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations)
    @Override
    at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:516)
    at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114)
    at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
    ... 17 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 2 seconds
    [INFO] Finished at: Wed Dec 22 10:04:53 IST 2010
    [INFO] Final Memory: 9M/16M
    [INFO] ------------------------------------------------------------------------
    C:\temp\SpringExample>javac -version
    javac 1.5.0_08
    javac: no source files
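    A note on the usual cause, for anyone hitting the same error: the old maven-compiler-plugin defaults to -source 1.3 regardless of the installed JDK/JRE, so annotations like @Override fail to compile. A common fix, shown here as a sketch (plugin version left to your build), is to pin the source and target levels inside <build><plugins> in the project's pom.xml:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <configuration>
            <source>1.5</source> <!-- enables annotations such as @Override -->
            <target>1.5</target> <!-- emit 1.5-compatible bytecode -->
          </configuration>
        </plugin>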

    Read the article

  • Google Maps: remember id of marker with open info window

    - by AP257
    I have a Google map that is showing a number of markers. When the user moves the map, the markers are redrawn for the new boundaries, using the code below:
    GEvent.addListener(map, "moveend", function() {
        var newBounds = map.getBounds();
        for (var i = 0; i < places_json.places.length; i++) {
            // if marker is within the new bounds then do...
            var latlng = new GLatLng(places_json.places[i].lat, places_json.places[i].lon);
            var html = "blah";
            var marker = createMarker(latlng, html);
            map.addOverlay(marker);
        }
    });
    My question is simple. If the user has clicked on a marker so that it is showing an open info window, currently when the boundaries are redrawn the info window is closed, because the marker is added again from scratch. How can I prevent this? It is not ideal, because often the boundaries are redrawn when the user clicks on a marker and the map moves to display the info window - so the info window appears and then disappears again :) I guess there are a couple of possible ways:
    - remember which marker has an open info window, and open it again when the markers are redrawn
    - don't actually re-add the marker with an open info window, just leave it there
    However, both require the marker with an open window to have some kind of ID number, and I don't know that this is actually the case in the Google Maps API. Anyone?
    ----------UPDATE------------------
    I've tried doing it by loading the markers into an initial array, as suggested. This loads OK, but the page crashes after the map is dragged.
    <script type="text/javascript" src="{{ MEDIA_URL }}js/markerclusterer.js"></script>
    <script type='text/javascript'>
    function createMarker(point, html, hideMarker) {
        //alert('createMarker');
        var icon = new GIcon(G_DEFAULT_ICON);
        icon.image = "http://chart.apis.google.com/chart?cht=mm&chs=24x32&chco=FFFFFF,008CFF,000000&ext=.png";
        var tmpMarker = new GMarker(point, {icon: icon, hide: hideMarker});
        GEvent.addListener(tmpMarker, "click", function() {
            tmpMarker.openInfoWindowHtml(html);
        });
        return tmpMarker;
    }
    var map = new GMap2(document.getElementById("map_canvas"));
    map.addControl(new GSmallMapControl());
    var mapLatLng = new GLatLng({{ place.lat }}, {{ place.lon }});
    map.setCenter(mapLatLng, 12);
    map.addOverlay(new GMarker(mapLatLng));
    // load initial markers from json array
    var markers = [];
    var initialBounds = map.getBounds();
    for (var i = 0; i < places_json.places.length; i++) {
        var latlng = new GLatLng(places_json.places[i].lat, places_json.places[i].lon);
        var html = "<strong><a href='/place/" + places_json.places[i].placesidx + "/" + places_json.places[i].area + "'>" + places_json.places[i].area + "</a></strong><br/>" + places_json.places[i].county;
        var hideMarker = true;
        if ((initialBounds.getSouthWest().lat() < places_json.places[i].lat) &&
            (places_json.places[i].lat < initialBounds.getNorthEast().lat()) &&
            (initialBounds.getSouthWest().lng() < places_json.places[i].lon) &&
            (places_json.places[i].lon < initialBounds.getNorthEast().lng()) &&
            (places_json.places[i].placesidx != {{ place.placesidx }})) {
            hideMarker = false;
        }
        var marker = createMarker(latlng, html, hideMarker);
        markers.push(marker);
    }
    var markerCluster = new MarkerClusterer(map, markers, {maxZoom: 11});
    </script>
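    One possible approach to the original question, sketched against the Maps v2 API used above (the id bookkeeping is our own invention, not an API feature): pass each marker's index from places_json to createMarker as an id, record it when its info window opens, and re-trigger the click after the redraw.

        // Sketch only: remember which marker's window is open via our own id.
        var openMarkerId = null; // index into places_json.places, or null
        function createMarker(point, html, id) {
            var marker = new GMarker(point);
            GEvent.addListener(marker, "click", function() {
                openMarkerId = id;               // remember before opening
                marker.openInfoWindowHtml(html);
            });
            return marker;
        }
        // inside the "moveend" handler, after map.addOverlay(marker):
        if (i === openMarkerId) {
            GEvent.trigger(marker, "click");     // reopen the previously open window
        }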

    Read the article

  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlassFish V3 server. I'm working under Windows Vista, 32-bit. At first it wouldn't run, so I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it:
    127.0.0.1 localhost
    Now when I run the GlassFish V3 server, no errors are shown, but only "INFO" messages are displayed:
    3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main
    INFO: Launching GlassFish on Felix platform
    Welcome to Felix
    ================
    INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
    INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010
    INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010
    INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127
    INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms)
    INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116
    INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155
    INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160
    INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159
    INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4
    INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2
    INFO: Binding RMI port to *:35165
    INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
    INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi
    INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate
    INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started
    INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159
    INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
    INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null}
    INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null}
    INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160
    INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg
    INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg
    INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null}
    And there is no message such as "GlassFish started"! So when I try to access the admin web interface at localhost:4848, localhost:8080, or localhost:8181, it doesn't work. What should I do? Thank you very much, Regards.
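    A hedged aside for anyone debugging a similar startup: the log above does report GlassFish starting ("GlassFish v3 (74.2) startup time ..."), so a first sanity check is whether the domain is actually up and which ports it bound. For example, from a Windows command prompt (paths assumed to be your GlassFish install's bin directory):

        rem List known domains and whether they are running
        asadmin list-domains
        rem Check whether anything is listening on the admin port
        netstat -ano | findstr :4848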

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) successfully integrated with IBM Tivoli Storage Manager (TSM) via the System Backup to Tape (SBT) interface

    - by user13334359
    Since version 3.6, MEB has supported backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB). But there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows you to pass environment variables not needed by OSB to third-party managers. At the same time, MEB cannot guarantee it will work with all of them.
    This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins.
    MEB requires the following options to be specified by those who want to connect it to an SBT interface:
    - --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.
    - --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    - --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and pointing to the TSM configuration file (a hedged sample follows below).
    - --backup-image=sbt:<name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.
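    For illustration only, a minimal tdpo.opt for Data Protection for Oracle might look roughly like the following; the keywords come from IBM's product, but all paths and node names here are hypothetical, so check the IBM documentation for your installation:

        DSMI_ORC_CONFIG /opt/tivoli/tsm/client/oracle/bin64/dsm.opt
        DSMI_LOG        /home/mysql/tdpo/logs
        TDPO_NODE       mymachine
        TDPO_OWNER      mysql
        TDPO_PSWDPATH   /opt/tivoli/tsm/client/oracle/bin64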
        --------------------------------------------------------------------
                               Server Repository Options:
        --------------------------------------------------------------------
          datadir                          =  /path/to/data
          innodb_data_home_dir             =  /path/to/data
          innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir        =  /path/to/data
          innodb_log_files_in_group        =  2
          innodb_log_file_size             =  268435456
        --------------------------------------------------------------------
                               Backup Config Options:
        --------------------------------------------------------------------
          datadir                          =  /path/to/my/dir/datadir
          innodb_data_home_dir             =  /path/to/my/dir/datadir
          innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir        =  /path/to/my/dir/datadir
          innodb_log_files_in_group        =  2
          innodb_log_file_size             =  268435456

        Backup Image Path= sbt:my-first-backup
        mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0'
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0'
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 31668381.
        mysqlbackup: INFO: Starting log scan from lsn 31668224.
        120220  8:54:00 mysqlbackup: INFO: Copying log...
        120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.
                  We wait 1 second before starting copying the data files...
        120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format).
        mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
        120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables....
        120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk
        mysqlbackup: INFO: Opening backup source directory '/path/to/data/'
        120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/'
        mysqlbackup: INFO: Backing up the database directory 'mysql'
        mysqlbackup: INFO: Backing up the database directory 'test'
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 31668381.
                  (This is the highest lsn found on page)
                  Scanned log up to lsn 31670396.
                  Was able to parse the log up to lsn 31670396.
                  Maximum page number for a log record 328
        120220 08:57:23 mysqlbackup: INFO: All tables unlocked
        mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds
        120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063
        120220  8:59:01 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
        mysqlbackup: WARNING: backup-image already closed
        mysqlbackup: INFO: Backup image created successfully.:
                   Image Path: 'sbt:my-first-backup'
        -------------------------------------------------------------
           Parameters Summary
        -------------------------------------------------------------
           Start LSN                  : 31668224
           End LSN                    : 31670396
        -------------------------------------------------------------
        mysqlbackup completed OK!

    The backup completed successfully. To restore it, you use the same commands as for any other MEB image, but you need to provide the sbt* options as well:

        $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir

    Then apply the log as usual:

        $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log

    Then stop mysqld and finally copy back:

        $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back

    Disclaimer: this is only the story of one success, which may be useful for someone else. MEB is not regularly tested with, and not guaranteed to work with, IBM TSM or any other third-party storage manager.
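    For convenience, the three restore steps above can be collected into one shell sketch. All paths, the image name, and option values are the same illustrative placeholders used throughout the article, not verified defaults:

        #!/bin/sh
        # Sketch only: restore a MEB image taken through the TSM SBT interface.
        SBT_LIB=/usr/lib/libobk.so
        SBT_ENV="TDPO_OPTFILE=/path/to/my/tdpo.opt"
        BACKUP_DIR=/path/to/my/dir

        # 1. Extract the image from the storage manager into a backup directory.
        ./mysqlbackup --backup-image=sbt:my-first-backup \
          --sbt-lib-path="$SBT_LIB" --sbt-environment="$SBT_ENV" \
          --backup-dir="$BACKUP_DIR" image-to-backup-dir

        # 2. Apply the InnoDB log to make the restored data files consistent.
        ./mysqlbackup --backup-dir="$BACKUP_DIR" apply-log

        # 3. Stop mysqld first, then copy the files back into the data directory.
        ./mysqlbackup --defaults-file=/path/to/my.cnf --backup-dir="$BACKUP_DIR" copy-back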

    Read the article

  • Info on type family instances

    - by yairchu
    Intro: while checking out snoyman's "persistent" library, I found myself wanting ghci's (or another tool's) assistance in figuring things out. ghci's :info doesn't work as nicely with type families and data families as it does with "plain" types:

        > :info Maybe
        data Maybe a = Nothing | Just a    -- Defined in Data.Maybe
        ...
        > :info Persist.Key Potato    -- "Key Potato" defined in example below
        data family Persist.Key val    -- Defined in Database.Persist
        ...

    (no info on the structure/identity of the actual instance)

    One can always look for the instance in the source code, but sometimes it can be hard to find, and it may be hidden in template-haskell-generated code, etc.

    Code example:

        {-# LANGUAGE FlexibleInstances, GeneralizedNewtypeDeriving,
                     MultiParamTypeClasses, TypeFamilies, QuasiQuotes #-}
        import qualified Database.Persist as Persist
        import Database.Persist.Sqlite as PSqlite

        PSqlite.persistSqlite [$persist|
        Potato
            name String
            isTasty Bool
            luckyNumber Int
            UniqueId name
        |]

    What's going on in the code example above is that Template Haskell is generating code for us. All the extensions above except QuasiQuotes are required because the generated code uses them.

    I found out what Persist.Key Potato is by doing:

        -- test.hs:
        test = PSqlite.persistSqlite [$persist| ...

        -- ghci:
        > :l test.hs
        > import Language.Haskell.TH
        > import Data.List
        > runQ test >>= putStrLn . unlines . filter (isInfixOf "Key Potato") . lines . pprint

    which printed:

        newtype Database.Persist.Key Potato = PotatoId Int64
        type PotatoId = Database.Persist.Key Potato

    Question: is there an easier way to get information on instances of type families and data families, using ghci or any other tool?
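    As an aside beyond the original question, the pprint-and-filter trick generalizes into a small reusable helper. This is only a sketch, assuming the splice has type Q [Dec] (as persistSqlite's does):

        -- Sketch: grep template-haskell output for declarations mentioning a string.
        import Data.List (isInfixOf)
        import Language.Haskell.TH (Dec, Q, pprint, runQ)

        grepDecs :: String -> Q [Dec] -> IO ()
        grepDecs needle decs =
          runQ decs >>= putStrLn . unlines . filter (needle `isInfixOf`) . lines . pprint

    Loaded next to test.hs, `grepDecs "Key Potato" test` reproduces the one-liner above.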

    Read the article

  • Need to display info for a user within Active Directory

    - by Brad
    The following code searches for a user within the domain controller, but I want to display the value of each attribute listed in the $justthese variable: "displayname", "mail", "samaccountname", "sn", "givenname", "department", "telephonenumber".

        $dn = "dc=xxx,dc=xxx";
        $justthese = array("displayname","mail","samaccountname","sn","givenname","department","telephonenumber");
        $sr = ldap_search($ldapconn, $dn, 'SAMAccountName=username', $justthese);
        $info = ldap_get_entries($ldapconn, $sr);
        echo "<h3>".$info["count"]." entries returned</h3>";
        foreach($justthese as $key=>$value){
            print '<p><strong>'.$value.'</strong></p>';
        }

    This displays each item in the $justthese array, but I want to display the user's data for each attribute as well. Right now it outputs this:

        displayname
        mail
        samaccountname
        sn
        givenname
        department
        telephonenumber

    I know I am doing something wrong in the foreach loop; any help is appreciated. I want the actual data to appear to the right of each attribute, like this:

        displayname Chuck
        mail [email protected]
        samaccountname chucknorris
        sn chuckisthebest
        givenname Chuck Norris
        department Security
        telephonenumber 555-555-5555
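    As an aside beyond the original question, here is a hedged sketch of one way the loop could read the values (assuming the bind and search above succeed): ldap_get_entries() lowercases attribute names and nests each value under a numeric index, so the data lives at $info[0][$attr][0] rather than in $justthese itself.

        <?php
        // Minimal sketch, not a verified fix: print each requested attribute
        // next to its value for the first entry the search returned.
        if ($info["count"] > 0) {
            $entry = $info[0]; // first matched user
            foreach ($justthese as $attr) {
                $attr = strtolower($attr); // ldap_get_entries() lowercases keys
                $value = isset($entry[$attr][0]) ? $entry[$attr][0] : '(not set)';
                echo '<p><strong>' . htmlspecialchars($attr) . '</strong> '
                   . htmlspecialchars($value) . '</p>';
            }
        }
        ?>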

    Read the article

  • Pass variable to Info Window in FusionTablesLayer

    - by user1030205
    I am building a web application that includes a Google Map layered with data from a Google Fusion Table. I have defined the info window for the markers in the Fusion Table, and everything renders as expected, but I have one issue: I need to pass a session variable from my web application into the links defined in the info window, and I can't find a way to do this. Below is the JavaScript I am currently using to render the map:

        var myOptions = {
          zoom: 10,
          mapTypeId: google.maps.MapTypeId.ROADMAP,
          center: new google.maps.LatLng(40.4230, -98.7372)
        };
        map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

        // Weather
        weatherLayer = new google.maps.weather.WeatherLayer({
          temperatureUnits: google.maps.weather.TemperatureUnit.FAHRENHEIT
        });
        weatherLayer.setMap(map);

        // Hobby Stores
        var storeLayer = new google.maps.FusionTablesLayer({
          query: { select: "col2", from: "3991553" },
          map: map,
          supressInfoWindows: true
        });

        // Club Sites
        var siteLayer = new google.maps.FusionTablesLayer({
          query: { select: "col13", from: "3855088" },
          styles: [{ markerOptions: { iconName: "airports" }}],
          map: map,
          supressInfoWindows: true
        });

    I'd like to be able to pass some kind of parameter in the call to google.maps.FusionTablesLayer that includes a value in the info window, but I can't find a way to do this. To view the actual page, visit www.dualrates.com, enter your zipcode, and select one of the airport markers to see the info window. You may have to zoom the map out to see an airfield.
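    A hedged sketch of one possible approach, not a confirmed fix from the original thread: the documented option name is suppressInfoWindows (double "p"), so the code above is likely still showing the built-in windows. With the option spelled correctly, you can listen for the layer's click event and build your own InfoWindow, injecting the session value server-side. The 'Name' column, the details.php link target, and the PHP echo are all illustrative assumptions:

        // Sketch only: custom info window carrying a session value.
        var sessionId = '<?php echo htmlspecialchars($_SESSION["id"]); ?>'; // hypothetical injection

        var siteLayer = new google.maps.FusionTablesLayer({
          query: { select: "col13", from: "3855088" },
          map: map,
          suppressInfoWindows: true // correct spelling keeps the built-in window hidden
        });

        var infoWindow = new google.maps.InfoWindow();
        google.maps.event.addListener(siteLayer, 'click', function(e) {
          // e.row maps column names to {value: ...}; 'Name' is a guessed column
          var name = e.row['Name'] ? e.row['Name'].value : 'Unknown';
          infoWindow.setContent(
            '<a href="details.php?site=' + encodeURIComponent(name) +
            '&session=' + encodeURIComponent(sessionId) + '">' + name + '</a>');
          infoWindow.setPosition(e.latLng);
          infoWindow.open(map);
        });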

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >