Search Results

Search found 5298 results on 212 pages for 'automated deploy'.

Page 86/212 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • Internet Explorer 10 Release Preview now available for Windows 7 SP1!

    - by KeithMayer
    This week, the IE team released IE 10 Release Preview for Windows 7 SP1 and Windows Server 2008 R2 SP1!  You can download IE10 Release Preview for evaluation and testing (remember, it's still pre-release software) from the following link ... Download IE10 Release Preview: http://windows.microsoft.com/en-US/internet-explorer/downloads/ie-10/worldwide-languages You can get an overview of What's New in Internet Explorer 10 at: Internet Explorer 10 FAQ for IT Pros Of course, you can also get the full release of IE10 by downloading Windows 8 at http://aka.ms/dlw8rtm What's Next? After downloading IE10 Release Preview, begin setting up your lab environment to plan for how you'll customize and deploy IE10 in your environment when it's released, using these resources: IE10 Customization and Administration Internet Explorer Administration Kit (IEAK) 10 Group Policy Settings Reference Hope this helps! Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group

    Read the article

  • Best Practices in Setting up a Build and Deployment environment for the Java Platform

    - by Genadinik
    I have a project for which "quick and dirty" isn't the best solution. What is the most stable and currently accepted set of procedures/tools that I should look into when setting up my build/deploy dev (and later production) environment? What I mean is: Should I use Ant, or has something better evolved since? In what instances should I use Maven? What are some best practices for creating a continuous integration/deployment environment? What are best practices for doing test-driven development? Anything else?

    Read the article

  • Error occurred in deployment step 'Activate Features': The field with Id {GUID} defined in feature {GUID} was found in the current site collection or in a subsite.

    - by Jayant Sharma
    Hi all, in SharePoint 2010 this is a rare error I hit when deploying and activating a Feature using VS2010. Deployment works fine, but the activation process gets stuck and throws this error: Error occurred in deployment step 'Activate Features': The field with Id {GUID} defined in feature {GUID} was found in the current site collection or in a subsite. When I googled it, I found a very good solution on the Sandeep Snahta blog: http://snahta.blogspot.hk/2011/10/error-in-activate-features-from-visual.html As suggested in that post, there are two options to overcome this error: close VS2010 and restart it, or kill the vssphost4 process either through Task Manager or via the PowerShell command    stop-process -processname vssphost4 -force   Jayant Sharma

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly. This includes data about the objects that are involved in transactions, as well as the transaction data itself. For example, when a customer buys a product, the transaction is managed by a sales application. The objects of the transaction are the Customer and the Product. The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate.
    Parties
    Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences. And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real-world operations. The model logically separates people and organizations from their relationships and accounts. This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence.
    Sites
    Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc. Fully understanding all attributes of a site is key to many business processes. Attributes such as the 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets. This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise.
    Products and Services
    The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide the ability to capture different views of a product, such as engineering, manufacturing, and service, which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers, and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing a requirements BOM, a features BOM, and a packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes.
    Business Processes Utilizing Linked Data Entities
    Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data. When all the key business entities used by an organization's processes are so consolidated, the advantages are multiplied. The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated. I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product, and Supplier master data must all be correct and consistent across these applications. What's more, the data relationships between customer and product, and product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point-of-contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process. In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution starts with a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model.
    Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.
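    The description above is conceptual, but a tiny sketch can make the "parties are separate from their relationships and accounts" idea concrete. The Python below is purely illustrative and is not Oracle's actual schema; all field names are made up for the example.

        from collections import namedtuple

        # Parties exist exactly once; roles are expressed through relationships, not copies.
        Party = namedtuple("Party", "party_id name party_type")          # person or organization
        Relationship = namedtuple("Relationship", "from_id to_id kind")  # e.g. 'contact of', 'subsidiary of'
        Account = namedtuple("Account", "account_id owner_id purpose")   # e.g. 'ship to', 'bill to', 'service at'

        acme = Party(1, "Acme Corp", "organization")
        jane = Party(2, "Jane Doe", "person")

        relationships = [Relationship(jane.party_id, acme.party_id, "contact of")]
        accounts = [Account(100, acme.party_id, "ship to"),
                    Account(101, acme.party_id, "bill to")]

        # Jane remains a single master record even as she gains roles across applications.
        print([r.kind for r in relationships if r.from_id == jane.party_id])

    Because each person and organization is modeled once and referenced everywhere else, adding a new relationship or account never duplicates the party itself.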

    Read the article

  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment, and we have to code review these customizations before we can deploy them in our datacenter to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some or all of the following as inputs: lines of code, cyclomatic complexity, average function length, and number of functions. Are there any other metrics I should incorporate, or other ideas for how we can reasonably create pricing for code reviews that encourages safe and understandable code?
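    A minimal sketch of one way to combine those inputs into a price, written in Python. The base rate, weights, and thresholds below are made-up placeholders rather than a recommendation; any real model would need calibrating against the effort your past reviews actually took.

        def review_price(loc, avg_cyclomatic, avg_function_length, function_count,
                         base_rate=50.0, per_loc=0.75):
            """Estimate a code-review price that penalizes complexity rather than length alone."""
            # Start with a small per-line charge so trivial submissions stay cheap.
            price = base_rate + per_loc * loc
            # Scale up as average cyclomatic complexity rises past a readable threshold (~10).
            complexity_factor = max(1.0, avg_cyclomatic / 10.0)
            # Long functions are harder to review; penalize average length past ~40 lines.
            length_factor = max(1.0, avg_function_length / 40.0)
            price *= complexity_factor * length_factor
            # Many small functions usually mean good decomposition; give a mild discount.
            if function_count > 0 and loc / float(function_count) < 25:
                price *= 0.9
            return round(price, 2)

        # Example: 800 LOC, average complexity 14, average function length 60, 10 functions.
        print(review_price(800, 14, 60, 10))

    A structure like this makes the incentive visible to clients: the same functionality costs less to review when it arrives as small, simple functions.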

    Read the article

  • Help deploying using Capistrano to HostGator

    - by Kyle Macey
    My company uses HostGator to host our web sites, and I'm having a heck of a time figuring out what my final steps are to get a functioning RoR app up there. I've gotten all the way to configuring mongrel (I think?) and being able to run deploy:cold without any errors. However, I can't seem to get the app to show up in the designated CPanel area (HG says the name "current" is already reserved for another application), and I'm not sure which port was allowed for me to use. I've opened tickets with Customer Support just to be told that "You can't access the database with root", which is totally unrelated to my question. So I think I'm in the final stretch, and if anyone has any insight or experience with HostGator, please clue me in.

    Read the article

  • Going Direct to Consumer in Consumer Goods – Live Webcast April 12

    - by Michael Seback
    Going Direct to Consumer is top of mind with executives in the Consumer Goods (CG) industry today. Join our live webcast on Thursday, April 12 to learn what CG companies worldwide are thinking as they deploy their direct-to-consumer strategies in an effort to better engage with today’s empowered consumer. Hear Jon Copestake, Chief Consumer Goods Analyst at the Economist Intelligence Unit, and Oracle discuss the findings and industry trends. Some key findings include:
    Pushing traditional media through new media channels is not enough to reach today’s more plugged-in, product-savvy consumer
    CG companies are experimenting with new ways to establish and enhance direct, two-way relationships with their target consumers across multiple channels
    Survey respondents and other CG executives see their nascent e-commerce efforts as complementary to, not competing with, existing retail channels
    Register to attend on April 12, 8:00 a.m. PT / 11:00 a.m. ET

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; it is not mandatory to use Exadata as the base platform. Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers toward a private database cloud model include constant pressure to reduce IT cost and complexity and to improve agility and quality of service. The first step many enterprises take in their journey toward cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven, best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry’s most comprehensive and integrated solution for transforming a typical siloed environment into an enterprise-class database cloud with self-service, rapid elasticity, and pay-per-use capabilities.
    In today’s post, I’ll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:
    Project Planning – The first step involves defining the scope of the implementation, mapping functional requirements and objectives to use cases, defining high availability, network, and security requirements, and delivering the project plan. In a cloud project you plan around technology, business, and processes together, so ensure you engage your actual end users and stakeholders early on, right from the scoping and planning stage.
    Setup your EM 12c Cloud Control Site – Once project plan approval and sign-off from the stakeholders is achieved, refer to the EM 12c Install guide. Some important tips to follow during the site setup phase: review the new EM 12c Sizing paper before you get started with the install; the Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install; refer to the EM 12c Administrator’s guide for High Availability, Security, and Network/Firewall best practices and options; and your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.
    Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.
    Configure Software Library – The Cloud Administrator logs in and configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is the OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the ‘local store’.
    Setup Self Update – Self Update is one of the most innovative and cool new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.
    Deploy Agents on all compute nodes, and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of the Exadata targets.
    Configure Privilege Delegation Settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.
    Provision Grid Infrastructure with RAC Database on Compute Nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, the Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request databases from the cloud.
    Customize the Create Database Deployment Procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked 'Y' in this table.
    Setup the Self Service Portal – This step involves setting up zones, user quotas, service templates, and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have an option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator’s guide.
    Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities, and the overall EM 12c monitoring framework.
    GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud. Congratulations! You just delivered a successful database cloud implementation project!
    In future posts, we will cover these additional useful topics around the database cloud: DBaaS implementation tips and tricks – right from setup to self service to managing the cloud lifecycle; ‘How to’ enable real production database copies in DBaaS with rapid provisioning in the database cloud; and a case study of a customer who recently achieved success with their transformational journey from a traditional siloed environment onto an Exadata-based database cloud using Enterprise Manager 12c.
    More Information – Podcast on Database as a Service using Oracle Enterprise Manager 12c | Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide | DBaaS Cookbook | Exadata Discovery Cookbook | Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal | Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Announcement: Oracle Solaris 11.1

    - by uwes
    On October 3rd at Oracle OpenWorld, John Fowler announced Oracle Solaris 11.1. This first update to Oracle Solaris 11 increases uptime for the Oracle Database: 8x faster database shutdown and start-up, help for DBAs in finding and resolving I/O issues that hold back performance, and 1.2x Oracle RAC throughput. Oracle Solaris 11.1 also drives up network utilization by extending network virtualization to include Edge Virtual Bridging and Data Center Bridging, which help manage network bandwidth for high-priority services and applications. Learn more and share these valuable tools with your customers to encourage them to deploy Oracle Solaris 11.1: Read the Press Release here, the Oracle Solaris 11.1 Data Sheet (PDF), What's New in Solaris 11.1, and the Oracle Solaris 11.1 FAQs. Join the online web event Oracle Solaris 11 Innovations for your Data Center on November 7, 2012.

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.
    I. New Automated Installer (AI) RBAC profiles have been introduced to enable delegation of installation tasks.
    II. The interactive installer now supports installing the OS to iSCSI targets.
    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg
    IV. The new command svcbundle helps you create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop)
    V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now the new pfedit command provides a solution to exactly this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.
    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.
    VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features: ZOSS - Zones on Shared Storage: placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg. Parallel updates - with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can also update them in parallel, a much faster update if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness. Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the others; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc Zones also got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic VNIC creation) feature. NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.
    VIII. Security got a lot of attention too: automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits. PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users. SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion. SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed. Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.
    IX. Networking, as one of the core strengths of Solaris 11, has been extended with: Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements; DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia. DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc VNIC live migration is now supported from one physical NIC to another on the fly.
    X. Data management: FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referral mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referral mechanism has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all. The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, thus improving iSCSI throughput while reducing CPU utilization on a T4. Storage locking improvements are now RAC aware, speeding up throughput with better locking communication between nodes by up to 20%!
    XI. Kernel performance optimizations: the new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges. The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs. OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.
    XII. The Power Aware Dispatcher is now enabled by default, reducing the power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate.
    XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because GRUB2's configuration syntax differs from GRUB1's, one should not edit the new grub configuration (grub.cfg) directly but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB.
    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.
    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it has become available, it requires Solaris 11.1, and it's only a "pkg update" away.
    ...aand I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, PulseAudio, Trusted Extensions updates, etc. - but if I mention all those then I effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1, and this blog post is a summary of that extract. For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features.
Open up a Support Request, explain that this is an RFE, describe the feature you/your company desires to have in S11 implemented. The more SRs are collected for an RFE, the more chance it's got to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input.Feel free to comment about this post too. Except that it's too long ;)  wbr,charlie

    Read the article

  • Networking not starting - OpenVZ default Ubuntu 12.04 Template

    - by Stu2000
    I have been using HyperVM to deploy OpenVZ Ubuntu 12.04 server instances. Unfortunately, they all boot without networking enabled. I tried inserting this into the crontab: @reboot /usr/sbin/service networking restart But this didn't work. Running the same command (without @reboot) from the CLI results in the network starting and commands like ifconfig becoming usable, so I get the feeling the cron job is simply being called too soon. What is the best practice for resolving this issue? I am guessing something along the lines of adding an upstart job?

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Clusters / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4 node RAC cluster, completely configured in just over 40 minutes - starting from importing the template into Oracle VM and creating the VMs, to a fully up and running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, and deploycluster.py. The setup is a 4 node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration, and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all.
        # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
        Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
        (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64)
        Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
        Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
        INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
        Password:
        INFO: Attempting to connect to Oracle VM Manager...
        INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
        INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
        INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
        INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
        INFO: Verifying all (4) VMs are in Running state
        INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
        INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
        INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
        INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
        INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
        INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
        netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
        buildcluster: yes
        INFO: Starting to send cluster details to all (4) VM(s).......
        INFO: Sending to VM with a simple name of "racovm.0"....
        INFO: Sending to VM with a simple name of "racovm.1".....
        INFO: Sending to VM with a simple name of "racovm.2".....
        INFO: Sending to VM with a simple name of "racovm.3"......
        INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
        INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
        Logfile at: /root/rac/deploycluster/deploycluster2.log
    my netconfig.ini:
        # Node specific information
        NODE1=db11rac1
        NODE1VIP=db11rac1-vip
        NODE1PRIV=db11rac1-priv
        NODE1IP=192.168.1.56
        NODE1VIPIP=192.168.1.65
        NODE1PRIVIP=192.168.2.2
        NODE2=db11rac2
        NODE2VIP=db11rac2-vip
        NODE2PRIV=db11rac2-priv
        NODE2IP=192.168.1.58
        NODE2VIPIP=192.168.1.66
        NODE2PRIVIP=192.168.2.3
        NODE3=db11rac3
        NODE3VIP=db11rac3-vip
        NODE3PRIV=db11rac3-priv
        NODE3IP=192.168.1.173
        NODE3VIPIP=192.168.1.174
        NODE3PRIVIP=192.168.2.4
        NODE4=db11rac4
        NODE4VIP=db11rac4-vip
        NODE4PRIV=db11rac4-priv
        NODE4IP=192.168.1.175
        NODE4VIPIP=192.168.1.176
        NODE4PRIVIP=192.168.2.5
        # Common data
        PUBADAP=eth0
        PUBMASK=255.255.255.0
        PUBGW=192.168.1.1
        PRIVADAP=eth1
        PRIVMASK=255.255.255.0
        RACCLUSTERNAME=raccluster
        DOMAINNAME=wimmekes.net
        DNSIP=
        # Device used to transfer network information to second node
        # in interview mode
        NETCONFIG_DEV=/dev/xvdc
        # 11gR2 specific data
        SCANNAME=db11vip
        SCANIP=192.168.1.57
    last few lines of the in-VM log file:
        2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
        2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
        2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)
    From start_vm to completely configured: 29m:39s. The other 10m was spent importing the template and creating the 4 VMs from it, along with the shared storage configuration. This consists of a complete Oracle 11gR2 RAC database with ASM, CRS, and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies, which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
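    To make the "declare intent once, stamp it out anywhere" idea concrete, here is a toy sketch in Python. It is not RightScale's template format or API - just an illustration of keeping the declared architecture separate from the provider-specific provisioning step; every name in it is invented.

        SOLUTION_SPEC = {
            "name": "three-tier-web",
            "tiers": [
                {"role": "web", "count": 2, "image": "ubuntu-12.04", "size": "small"},
                {"role": "app", "count": 2, "image": "ubuntu-12.04", "size": "medium"},
                {"role": "db",  "count": 1, "image": "ubuntu-12.04", "size": "large"},
            ],
        }

        def deploy(spec, provision):
            """Walk the declared intent and hand each server to a provider-specific provisioner."""
            for tier in spec["tiers"]:
                for i in range(tier["count"]):
                    provision(name="%s-%s-%d" % (spec["name"], tier["role"], i),
                              image=tier["image"], size=tier["size"])

        def print_provisioner(**server):
            # Stand-in for an on-premises, Windows Azure, or other cloud-specific call.
            print(server)

        deploy(SOLUTION_SPEC, print_provisioner)

    The same spec drives every target; only the provisioner changes, which is exactly the property that makes declarative architectures portable across providers.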
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment.
    Momma Bear Level - Just the right level... ;0) Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution. Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress).
    Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos; Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better; SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.

    Read the article

  • Repurpose a Kobo Reader as a Weather Station

    - by Jason Fitzpatrick
    Last month we showed you a clever hack that converted an old Kindle into a weather station; this new hack uses a Kobo ebook reader (cheaper hardware), ditches the external server (less overhead), and is even simpler to deploy. You’ll need a Kobo unit, a free installer bundle courtesy of Kevin Short over at the MobileRead forums, and a few minutes to toggle on telnet access to your Kobo and run the installer. Hit up the link below to read more about the self-contained mod. Kobo Wi-Fi Weather Forecast [via Hack A Day]

    Read the article

  • Developer Webinar Today: "Publishing IPS Packages"

    - by user13333379
    Oracle's Solaris Organization is pleased to announce a Technical Webinar for Developers on Oracle Solaris 11: "Publishing IPS Packages" by Eric Reid (Principal Software Engineer), today, June 19, 2012, 9:00 AM PDT. This bi-weekly webinar series (every other Tuesday @ 9 a.m. PT) is designed for ISVs, IHVs, and Application Developers who want a deep-dive overview of how they can deploy Oracle Solaris 11 into their application environments. This series will provide you with the unique opportunity to learn directly from Oracle Solaris ISV Engineers and will include LIVE Q&A via chat with subject matter experts from each topic area. Any OTN member can register for this free webinar here. Today's webinar is a deep dive into IPS. The attendees of the initial IPS webinar asked for more information around this topic. Eric Reid, who worked with leading software vendors (ISVs) to migrate Solaris 10 System V packages to IPS, will share his experience with us.

    Read the article

  • Free Version of Oracle Application Development Framework (Oracle ADF)

    - by Steve Muench
    I'm very happy to finally be able to talk about this. A long time coming, the press release is finally out: Oracle Introduces Free Version of Oracle Application Development Framework New Oracle ADF Essentials Brings ADF Benefits to the Broader Developer Community Oracle ADF Essentials is a free packaging of core technologies from the Oracle Application Development Framework that can be used to develop and deploy applications that include ADF Business Components, ADF Controller, ADF Binding, and ADF Faces Rich Client Components without incurring licensing costs. Both Oracle JDeveloper and Oracle Enterprise Pack for Eclipse provide a visual and declarative development experience for using it. Oracle ADF Essentials comes with specific instructions and certification for deploying applications on the open-source GlassFish server, but the license is not limited to that server. For more information and to download it (it's only 20MB), see the Oracle ADF Essentials page on OTN.

    Read the article

  • Client/Server Application Using Google App Engine

    - by Kevin Zhang
    Can someone please advise me on a possible solution for using GAE to build a client/server application? As far as I know, GAE is designed for web applications. What I want to do is have a Java client (Swing based) deployed on a number of computers and deploy the server on GAE. I found an example on the GAE website which teaches how to make a SOAP service using GAE, but I don't know whether using SOAP is a good idea for client/server applications. Can someone give me some hints about how to design this system and what technology should be used? Any advice is welcome. Many thanks.
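    For what it's worth, SOAP isn't required: a plain JSON-over-HTTP endpoint is usually simpler for a desktop client to call. Below is a minimal sketch of the server half, assuming App Engine's Python runtime and the webapp2 framework; the handler name and URL path are invented for illustration. (A servlet on App Engine's Java runtime would follow the same shape.)

        import json
        import webapp2

        class EchoHandler(webapp2.RequestHandler):
            """Hypothetical endpoint the desktop client calls with a JSON payload."""
            def post(self):
                request_data = json.loads(self.request.body)
                # Do the server-side work here, then return JSON to the client.
                result = {'echo': request_data, 'status': 'ok'}
                self.response.headers['Content-Type'] = 'application/json'
                self.response.write(json.dumps(result))

        app = webapp2.WSGIApplication([('/api/echo', EchoHandler)])

    On the Swing side, java.net.HttpURLConnection (or any HTTP client library) posting JSON to that URL avoids the SOAP machinery entirely.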

    Read the article

  • Looking for a simple to use email server that can be programmatically (preferably remotely) used

    - by sr2222
    I've been poking around the internet for much of the day, but I can't seem to find a good server to fit my needs. What I need is a simple-to-use-and-deploy (preferably open source), lightweight email server with IMAP or POP support on which I can create users programmatically. I'd prefer something with an existing service interface, but if I have to write a REST API on top of an easy-to-use API, that's acceptable. The purpose of this tool will be to allow a test automation framework to create new email accounts and retrieve email sent to those addresses. I need text, HTML, and possibly attachment support as well. Perhaps it's my noobishness, but I can't really suss out the details from the documentation on the servers available out there to figure out which of them fits my needs.
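    Whichever server ends up behind it, the retrieval half of the test framework can stay generic as long as the server speaks IMAP. A minimal sketch using Python's standard imaplib is below; the host, credentials, and folder are placeholders for whatever the chosen server exposes.

        import email
        import imaplib

        def fetch_latest_message(host, user, password, folder="INBOX"):
            """Return the newest message in the folder as an email.Message, or None if empty."""
            conn = imaplib.IMAP4_SSL(host)  # assumes the server offers IMAP over SSL
            try:
                conn.login(user, password)
                conn.select(folder)
                status, data = conn.search(None, "ALL")
                ids = data[0].split()
                if not ids:
                    return None
                status, msg_data = conn.fetch(ids[-1], "(RFC822)")
                raw = msg_data[0][1]
                return (email.message_from_bytes(raw) if isinstance(raw, bytes)
                        else email.message_from_string(raw))
            finally:
                conn.logout()

        # Example: check that the signup email arrived for a freshly created test account.
        msg = fetch_latest_message("mail.example.test", "autouser1", "secret")
        if msg is not None:
            print(msg["Subject"])

    Account creation is the server-specific part; most candidates expose either a CLI or an HTTP admin interface that a small wrapper alongside the code above can drive.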

    Read the article

  • Oracle makes Virtualized Java Applications Practical. Announces Brand New Products

    - by blake.connell
    New Oracle Virtual Assembly Builder and Oracle WebLogic Suite Virtualization Option make running Java applications in virtualized environments easy and practical.
    • Oracle Virtual Assembly Builder is a new product designed to help organizations quickly and easily deploy multi-tier enterprise applications in virtualized environments. It enables administrators to quickly configure and provision these applications.
    • Oracle WebLogic Suite Virtualization Option delivers Oracle WebLogic Server on Oracle JRockit Virtual Edition, providing 'near-native' performance and increased server density.
    • Oracle WebLogic Server on Oracle JRockit Virtual Edition runs directly on Oracle VM without a guest operating system, a unique capability resulting in better performance and more application server runtime per system.
    Oracle WebLogic Suite Virtualization Option and Oracle Virtual Assembly Builder can drive operational efficiency and agility. Customers can dynamically scale the underlying software infrastructure and applications up or down with ease through software automation. Register for a live webinar with Oracle product experts Read the press release For more product information: Oracle Virtual Assembly Builder Oracle WebLogic Suite Virtualization Option

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Eric Nelson
    Yey – we have a public release of the Windows Azure Tools which fully supports Visual Studio 2010 RTM and the .NET 4 Framework. And the biggy I have been waiting for – IntelliTrace support to debug your cloud deployed services (requires VS2010 Ultimate). Download today: http://bit.ly/azuretoolsjune
    New for version 1.2:
    Visual Studio 2010 RTM Support: Full support for Visual Studio 2010 RTM.
    .NET 4 support: Choose to build services targeting either the .NET 3.5 or .NET 4 framework.
    Cloud storage explorer: Displays a read-only view of Windows Azure tables and blob containers through Server Explorer.
    Integrated deployment: Deploy services directly from Visual Studio by selecting ‘Publish’ from Solution Explorer.
    Service monitoring: Keep track of the state of your services through the ‘compute’ node in Server Explorer.
    IntelliTrace support for services running in the cloud: Adds support for debugging services in the cloud by using the Visual Studio 2010 IntelliTrace feature. This is enabled by using the deployment feature, and logs are retrieved through Server Explorer.
    Related Links: http://ukazure.ning.com for UK fans of Windows Azure | IntelliTrace explained

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Jim Duffy
    I have good news! Microsoft has released the June 2010 Windows Azure Tools + SDK. These new tools extend Visual Studio (both VS 2010 & VS 2008) for Windows Azure development. With these tools you can create, configure, debug, build, run, and deploy scalable web apps on Windows Azure. At first glance, what I see as some of the most interesting points are that Visual Studio 2010 RTM is fully supported and that .NET 4 is supported as well: you can choose to build your apps with the .NET 3.5 or .NET 4 frameworks. Another area of interest that I’ll be digging into is the cloud storage explorer. It provides a read-only view of your Windows Azure tables and blob containers from within Visual Studio via the Server Explorer. I’m sure I’ll have more to say about the Windows Azure Tools as I dig deeper… Have a day. :-|

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions: (1) My application uses the Festival app 'text2wave'. Will it work on the cloud? (2) I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work? If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.
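    A note on (1) and (2): App Engine's sandbox does not allow spawning native binaries, so neither the text2wave executable nor subprocess.call() will work there. The usual workaround is to call an external HTTP TTS service instead. The sketch below uses App Engine's urlfetch API against a hypothetical endpoint - the URL and parameters are placeholders, not a real service.

        import urllib
        from google.appengine.api import urlfetch

        def synthesize(text):
            """Ask an external TTS service (placeholder URL) to render text; return audio bytes."""
            params = urllib.urlencode({'text': text, 'format': 'wav'})
            result = urlfetch.fetch(
                url='https://tts.example.com/speak?' + params,  # hypothetical endpoint
                method=urlfetch.GET,
                deadline=30)
            if result.status_code != 200:
                raise RuntimeError('TTS request failed: %d' % result.status_code)
            return result.content

    The alternative is to keep Festival on a server you control (even a small VPS) and have the App Engine front end forward requests to it in the same way.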

    Read the article

  • What Are Oracle Users Doing to Improve Availability and Disaster Recovery?

    - by jgelhaus
    What Are Oracle Users Doing to Improve Availability and Disaster Recovery? Download the recent database availability survey report, Enterprise Data and the Cost of Downtime, and watch the Webcast highlighting the results. The survey, conducted by the Independent Oracle Users Group, examined more than 350 data managers and professionals regarding planned and unplanned downtime, database high availability, and disaster recovery solutions. The findings will help you learn about:
    Leading causes of planned and unplanned downtime of Oracle users
    Different approaches to database high availability
    Why a majority of Oracle users commonly deploy Oracle Database with Oracle Real Application Clusters, Oracle Active Data Guard, and Oracle GoldenGate
    Register now to download the report and watch the Webcast.

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF. But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct? Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.
    Before we begin… First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration. It covers an overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the "Steps" logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.
    Why would we do this? Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here. The basics of the syntax are that Given X When Y Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow.
    Finally, we will need to download and install SpecFlow. You can find it on their website here. Once you have this installed, let's write our first test.
    Let's get started… So where to start? Create a new testing project within your solution. I typically name it with a naming convention similar to the one used by XAF: my project name plus .FunctionalTests (e.g. AlbumManager.FunctionalTests). Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow "Feature" files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature file, as you do your class files, after the test it performs.
    Now you can crack open your new feature file and write the actual test. Make sure to have your Ninja Scrolls from above, as they provide valuable guidance on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section. This is strictly for readability purposes and does not affect the test. The next section is the Scenario Outline, which is considered a test template. You can see the brackets <> around the fields that will be filled in for each test. So in the example below you can see that Given I am starting a new test and the application is open. This means I want a new EasyTest file and the Windows application generated by XAF is open. Next, When I am at the Albums screen tells XAF to navigate to the Albums list view. And I click the New:Album button tells XAF to click the New button on the list grid. And I enter the following information tells XAF which fields to complete with the mapped values. And I click the Save and Close button causes the record to be saved and the detail form to be closed.
    Then I verify results tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test. This way you can use the same test to make multiple passes with different data. Almost there. Now we must save the feature file, and the BDD tests will be written using standard unit test syntax. This is all handled for you by SpecFlow, so just save the file. What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the tests as you desire. As you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? What we have done is intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax. These are the basic StepDefinitions that we are working on now. We expect to put these on CodePlex within the next few days. You can always override them and make your own rules as you see fit for your project. Follow the MSDN magazine article above to start your own. You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format. It then executes the EasyTest file and parses the XML results of the test. If the test succeeds, the test is passed. If the test fails, the EasyTest failure message is logged and the screenshot (as captured by EasyTest) is saved for your review. Again, we are working on getting this code ready for mass consumption, but at this time it is not ready. We will post another message when it is ready with all details about usage and setup. Thanks

    Read the article

  • Visual Studio 2010 Setup Projects and x64 Support

    - by Shawn Cicoria
    I was taking the Windows Azure CmdLets project and getting it into an MSI just to make it easier to deploy in a nice package. I ran into problems with the Setup project not being able to properly establish the right registry settings for an x64 environment. Even though you set the target platform on the Setup project to x64, the InstallUtilLib.dll that gets run is still x86. In order to have it work properly, you need to follow the steps identified here: http://msdn.microsoft.com/en-us/library/kz0ke5xt.aspx  The section "64-bit managed custom actions throw a System.BadImageFormatException exception" covers the steps you need to follow, using the Orca MSI editor to replace the InstallUtilLib.dll that the Setup project embeds (x86) with the x64 version. Now it works like a charm… Resultant installer here: http://cicoria.com/Downloads/AzureManagementCmdletsInstall.msi The CmdLets are the same ones from the Training Kit – November 2010 release.

    Read the article
