Search Results

Search found 1781 results on 72 pages for 'cluster'.


  • HDFS datanode startup fails when disks are full

    - by mbac
    Our HDFS cluster is only 90% full, but some datanodes have disks that are 100% full. That means that when we mass-reboot the entire cluster, some datanodes completely fail to start with a message like this: 2013-10-26 03:58:27,295 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Mkdirs failed to create /mnt/local/sda1/hadoop/dfsdata/blocksBeingWritten Only three have to fail this way before we start experiencing real data loss. Currently we work around it by decreasing the amount of space reserved for the root user, but we'll eventually run out. We also run the re-balancer pretty much constantly, but some disks stay stuck at 100% anyway. Changing the dfs.datanode.failed.volumes.tolerated setting is not the solution, as the volume has not failed. Any ideas?
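
    One hedged mitigation, assuming the immediate goal is simply to stop individual volumes from hitting 100%: raise dfs.datanode.du.reserved so the datanode always leaves non-HDFS headroom on every data directory. A minimal hdfs-site.xml sketch (the value is illustrative, roughly 10 GB per volume):

      <!-- hdfs-site.xml (sketch; tune the value to your disk sizes) -->
      <property>
        <name>dfs.datanode.du.reserved</name>
        <!-- bytes of free space to reserve on each volume for non-HDFS use -->
        <value>10737418240</value>
      </property>

    As the question notes, dfs.datanode.failed.volumes.tolerated does not apply here, because a full volume is not a failed volume.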

    Read the article

  • VMware ESX 3.5 Host Health shown as unknown

    - by dunxd
    I have an ESX 3.5 Update 5 cluster of five host servers, all fully patched as of this Friday. Today I noticed that one of the servers shows its Hardware Health status as unknown in the Virtual Center Infrastructure Client. When I look at the Health Status view under Configuration for that host, all the items have status Unknown. The server is exactly the same configuration as the others - same model (HP DL360 G5), memory, NICs, etc. I have tried restarting the management service with service mgmt-vmware restart, but this has not resolved the issue. Aside from this, I am not seeing any issues with the cluster - however, I hate having a blind spot like this. Any ideas?
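
    If restarting mgmt-vmware alone doesn't repopulate the health data, a commonly suggested sequence on classic ESX 3.5 is to also restart the vCenter agent and the CIM broker that feeds the Health Status view. Treat the following as a sketch - service names can vary by build, so verify them first:

      # run in the ESX 3.5 service console; check names with: chkconfig --list
      service mgmt-vmware restart    # host management agent (hostd)
      service vmware-vpxa restart    # vCenter (VirtualCenter) agent
      service pegasus restart        # CIM broker supplying hardware health, if present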

    Read the article

  • FullText SQL Server Clustered Service Move

    - by Steve
    I need to move the I: drive from one SAN vendor to another. Microsoft states that you can't move the full-text catalog http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q304282& and that the only way would be to remove the cluster and re-install the cluster. Does anyone know of any other way? Would swapping the old I: drive for a new I: drive work? Currently on the I: drive I just have the full-text directory and the SQL Server log directory to move. The latter I know how to handle. Any help would be greatly appreciated. Windows 2003 - SQL Server 2005 32-bit, and yes, this is one of the few 2005 instances that we have left. Steve

    Read the article

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. Now I don't mean to undermine real sysadmins, as I know their value, but what I am trying to get at is that someone like me can set up and maintain a cluster of servers (front-end web servers, with some DB servers) using tools like EC2 and Capistrano with the help of Google. Now this isn't something I would do as a long-term thing, but as a startup, one-man operation, I think I can pull this off until business takes off and I can hire this important role out. With EC2, I get my firewall, so I basically open up port 80 on my public-facing server, which will run HAProxy and route requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you guys think about my perception. My application is a web application that will be running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.
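
    For what it's worth, the HAProxy piece of a setup like this can stay very small. A minimal sketch (hostnames, addresses and ports are made up) that listens on port 80 and round-robins requests to two Rails app servers:

      # /etc/haproxy/haproxy.cfg -- sketch; addresses are illustrative
      defaults
          mode http
          timeout connect 5s
          timeout client  30s
          timeout server  30s

      frontend public
          bind *:80
          default_backend rails_app

      backend rails_app
          balance roundrobin
          server web1 10.0.1.11:80 check
          server web2 10.0.1.12:80 check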

    Read the article

  • Problem installing a w2k DC on Hyper-V?

    - by Tony
    Hi, we have a four-node Windows 2008 R2 cluster with Hyper-V installed. We would like to install 2 VMs with the Windows 2000 domain controller role (the domain is different from the domain of the Hyper-V cluster). Do you know if there are any restrictions on doing this? Some colleagues say that we risk data corruption if we do live migrations. Others point out that Microsoft doesn't support Windows 2000 any more. And others have doubts because the global catalog server installed on these DCs could suffer a loss of performance. Any ideas? Thanks, Tony

    Read the article

  • Using rsyslog to create different log files for different processes

    - by user80203
    Scenario: I am running a cluster of machines. Each machine runs various Python programs with a unique (across the cluster), but dynamically set, ID. Right now, they are all logging locally. So, I might have logs that look like: process_5.log process_6.log for processes that had IDs 5 and 6. Another machine may have: process_20.log process_25.log I wish to forward these logs to a log server running rsyslogd. Python's logging facility has a nice syslog handler, so I understand how I could connect to the remote server. What I haven't figured out is how to use templating/DynFile to maintain log separation, e.g. on the log server I will want to see: process_5.log process_6.log process_20.log process_25.log which correspond to the logs of the same name on the sending machines. Is there a way to pull this off with rsyslogd templating?
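
    Dynamic file templates are exactly what this calls for. A sketch of the receiving side (rsyslog.conf on the log server, legacy syntax; the paths and the UDP listener are assumptions to adapt):

      # /etc/rsyslog.conf on the log server -- sketch
      $ModLoad imudp
      $UDPServerRun 514

      # one file per sending program name, e.g. /var/log/cluster/process_5.log
      $template PerProcessLog,"/var/log/cluster/%programname%.log"
      if $programname startswith 'process_' then ?PerProcessLog
      & ~

    The %programname% property comes from the syslog tag, which Python's SysLogHandler typically sets via the formatter, e.g. a format string beginning with 'process_5: ' on the sending process.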

    Read the article

  • How to update a Debian DNS server? New VM with same hostname as old VM

    - by opensourcechris
    We run several Linux VMs on our Hyper-V cluster. Our old IT manager configured the DNS server to resolve the URL 'devlabs.ourdomain.com' to a Debian Squeeze Apache webserver hosted on the Hyper-V cluster with the hostname 'devlabs'. We recently created a new Ubuntu VM to replace the original Squeeze VM. When we created the new Ubuntu VM we used the same hostname, 'devlabs', to name the new VM. My problem is that now I am only able to access the new Ubuntu VM by using the IP address. How can I update our DNS server to point the URL 'devlabs.ourdomain.com' to the new VM?
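
    Assuming the zone is served by BIND on the Debian DNS server (adjust for whatever DNS software is actually in use), the fix is usually just to point the existing record at the new VM's address, bump the SOA serial, and reload. A sketch (the IP is illustrative):

      ; db.ourdomain.com -- zone file sketch
      devlabs  IN  A  10.0.0.42   ; replace the old address with the new Ubuntu VM's IP
      ; increment the zone's SOA serial, then reload:
      ;   rndc reload ourdomain.com

    Stale cached answers on clients or intermediate resolvers can persist until the record's TTL expires.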

    Read the article

  • Need information on a filesystem error:

    - by abc
    I have console access to an embedded Linux device. This device has flash memory, part of which is partitioned as a FAT filesystem. It's running linux-2.6.31. However, I have been seeing these errors on the console lately, and the FAT filesystem becomes read-only: 111109:154925 FAT: Filesystem error (dev loop0) 111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0) 111109:154925 FAT: Filesystem error (dev loop0) 111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0) I cannot understand why this happened. What is the root cause? And what is the fix? I would appreciate answers that can point me to how to investigate the possible root cause of this issue on the device.
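
    Invalid cluster chains usually mean the FAT itself has been corrupted (power loss during writes and flash wear are common causes on embedded devices). A first diagnostic pass, assuming dosfstools is available on the device and the filesystem can be unmounted:

      # run with the FAT partition unmounted; /dev/loop0 matches the kernel messages
      umount /mnt/fatpart          # mount point is illustrative
      dosfsck -v -n /dev/loop0     # -n: report problems only, change nothing
      dosfsck -a /dev/loop0        # -a: automatically repair what it safely can

    If the corruption recurs, it is worth checking how the device powers down and whether the flash layer is reporting errors.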

    Read the article

  • ConfigurationErrorsException when serving images via UNC on IIS6

    - by Mark Richman
    I have a virtual directory in my web app which connects to a Samba share via UNC. I can browse the files via Windows Explorer without issue, but my web app throws a yellow screen with the following message: Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: An error occurred loading a configuration file: Could not find file '\\cluster\cms\qa-images\120400\web.config'. What makes no sense to me is why it's looking for a web.config in that location. I know it's not an authentication issue because the virtual directory can serve images from its root (i.e. \\cluster\cms\qa-images\test.jpg serves as http://myserver/upload/test.jpg just fine).

    Read the article

  • ISCSI sessions appear from nowhere

    - by Maraca
    Hi, I am using Windows 2008 32-bit Enterprise running in Hyper-V with 2 LUNs over an iSCSI connection (this is an MS cluster with one LUN being the quorum and the second the storage). In iSCSI - target - details I see multiple sessions from the same target (currently 7), however I am not sure where they are coming from, as I have only one virtual NIC on this server. Sure enough, the 2 LUNs appear 7 times each in Device Manager and in Disk Manager. On the cluster partner, however, I do not see that problem. There is only one session per target. Installing MPIO makes only one difference - I get 8 sessions instead of 7 once I reboot. Does anyone know what can cause this behavior?

    Read the article

  • Windows Load Balancing Services and File Shares

    - by cbkadel
    We are using Windows Load Balancing Services (WLBS). One of the things that I notice is that if I create a file share on one of the physical hosts, I am able to browse to that file share using the clustered IP address. This might be an 'opinion' question, but I haven't been able to find much literature on file shares in particular with WLBS. Is this a recommended configuration? Are there any limitations? What about when the share contains different sets of content on the two hosts? For instance: three 'hostnames' - host1 (physical1), host2 (physical2), and cluster. I create the following shares: \\physical1\myshare \\physical2\myshare What I notice is that I can see: \\cluster\myshare I'm guessing that this is read-only, and that there's no file synchronization. But what happens if they are in fact out of sync - what would a network browser see then? Thanks for your time!

    Read the article

  • Passwordless ssh on multiple machines

    - by Phil
    Hi all, I'm quite confused by all the SSH security stuff. I am trying to reconfigure a system that is currently broken for reasons unknown. Machine A is your personal computer that you use whenever you're at home. Machine B is the head node of an HPC cluster, and all the other machines C are identically configured machines which share the home directories of machine B. This is an HPC cluster, if you hadn't guessed. How would I configure passwordless SSH between any of the nodes B or C? A can only get to C by SSHing into B.
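
    Because the compute nodes C mount the home directories from machine B, a single key pair kept in that shared home normally covers both B-to-C and C-to-C logins. A sketch using OpenSSH defaults (hostnames are illustrative):

      # on machine B, in the shared home directory
      ssh-keygen -t rsa -b 4096 -N ""             # no passphrase; or keep one and use ssh-agent
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys

      # from machine A, add A's key to B so that A -> B is also passwordless
      ssh-copy-id user@headnode-B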

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know what the functional components of the software are. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc. So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long-term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project was starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories. So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database. But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the Agile Development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.
I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost
    Engineered System Simplifies Big Data for the Enterprise
    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling.
    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes:
    Support for CDH4.1 including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster;
    Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support;
    The Oracle Enterprise Manager plug-in for Big Data Appliance that complements Cloudera Manager to enable users to more easily manage a Hadoop cluster;
    Updated distributions of Oracle Linux and Oracle Java Development Kit;
    An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries.
    Read More
    Data sheet: Oracle Big Data Appliance X3-2
    Oracle Big Data Appliance: Datacenter Network Integration
    Big Data and Natural Language: Extracting Insight From Text
    Thomson Reuters Discusses Oracle's Big Data Platform
    Connectors Integrate Hadoop with Oracle Big Data Ecosystem
    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes:
    Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables and now supported within the Oracle Data Integrator Application Adapter for Hadoop;
    Transparent access to the Hive Query Language from R and introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.
    Read More
    Data sheet: Oracle Big Data Connectors High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • Message Passing Interface (MPI)

    So you have installed your cluster and you are done with introductory material on Windows HPC. Now you want to develop an application with the most common programming model: Message Passing Interface. The MPI programming model is a standard with implementations from many vendors. For newbies (like myself!), I have aggregated below links for getting started.
    Non-Microsoft MPI resources (useful even if you are not on the Windows platform):
    1. Message Passing Interface on wikipedia.
    2. The MPI standard.
    3. MPICH2 - an MPI implementation.
    4. Tutorial on MPI by William Gropp.
    5. MPI patterns presented as a tutorial with sample code.
    6. THE official MPI Forum (maintains the standard) including the wiki discussing the MPI future.
    7. Great MPI tutorial including at the end the MPI Exercise.
    8. C++ MPI Exercises by John Burkardt.
    9. Book online: MPI The Complete Reference.
    MS-MPI:
    10. Windows HPC Server 2008 - Using MS-MPI whitepaper (15 page doc).
    11. Tracing MPI applications (27 page doc).
    12. Using Microsoft MPI (TechNet section).
    13. Windows HPC Server MPI forum (for posting questions).
    MPI.NET:
    14. MPI.NET Home Page (not owned by Microsoft).
    15. MPI.NET Tutorial.
    16. HPC Development using F# using MPI.NET (38 page doc).
    Next time I'll post resources for the Microsoft Cluster SOA programming model - happy coding... Comments about this post welcome at the original blog.

    Read the article

  • Discover the MySQL Connect Content Catalog!

    - by Bertrand Matthelié
    The MySQL Connect content catalog is now live! MySQL Connect offers you a unique opportunity to attend:
    Keynotes including "The State of the Dolphin", by Oracle's Chief Corporate Architect Edward Screven and VP of MySQL Engineering Tomas Ulin, and an exciting panel on "Current MySQL Usage Models and Future Developments" with Davi Arnaud from LinkedIn, Daniel Austin from PayPal, Mark Callaghan from Facebook and Calvin Sun from Twitter.
    Over 65 conference sessions enabling you to hear from: Oracle MySQL engineers on MySQL 5.6, InnoDB, replication, performance tuning, security, NoSQL, MySQL Cluster, Big Data...and more; MySQL customers including the US Census Bureau, Big Fish Games, Booking.com, Ticketmaster, and Tumblr; and internationally recognized MySQL community members and partners on topics such as performance, MySQL 5.6, backup, MySQL in the Cloud, OpenStack and Hadoop.
    6 Birds-of-a-Feather sessions about sharding, replication, backup, and other subjects.
    8 Hands-On Labs designed to give you hands-on experience with MySQL replication, the MySQL Performance Schema, MySQL Cluster...and more.
    6 Tutorials providing you in-depth knowledge about MySQL Performance Tuning best practices, enhancing productivity with MySQL 5.6 new features, or the essentials to get started with MySQL (tutorials are available as an add-on package to MySQL Connect registrants).
    Demo pods and exhibitors, to learn more about Partners' and Oracle's offerings.
    Receptions on both Saturday and Sunday nights, enabling you to ask all your questions of Oracle's MySQL engineers and to network with some of the world's best MySQL professionals.
    Check out the MySQL Connect content catalog and find out about the amazing sessions you have the opportunity to attend. Reminder: the early bird discount is running until July 19 - Register Now to save US$500! Plan to attend Oracle OpenWorld or JavaOne? Add the MySQL Connect event to your Oracle OpenWorld or JavaOne registration for only US$100. Exhibit/sponsorship opportunities are also available. We look forward to seeing you at MySQL Connect!

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for input to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast and Akka, what could be alternative choices? Currently considering ZooKeeper as a distributed locking mechanism. Since the recommendation is that a znode not exceed 1 MB in size, the understanding is that ZooKeeper should not be used as a distributed queue - see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? I.e., the distributed queue will be stored in the cache and will be locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object will pass through a lifecycle which will determine its removal from the queue. There would be about 10k+ objects in a queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary - i.e. one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Side note: new to ZooKeeper, Redis, etc.
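
    On the locking half of the question, the Curator library referenced above already provides a cluster-wide mutex recipe in a few lines. A hedged Java sketch (the connection string and lock path are made up, and package names differ between the Netflix and Apache releases of Curator):

      import org.apache.curator.framework.CuratorFramework;
      import org.apache.curator.framework.CuratorFrameworkFactory;
      import org.apache.curator.framework.recipes.locks.InterProcessMutex;
      import org.apache.curator.retry.ExponentialBackoffRetry;
      import java.util.concurrent.TimeUnit;

      public class QueueLockSketch {
          public static void main(String[] args) throws Exception {
              // illustrative ZooKeeper ensemble; retry with exponential backoff
              CuratorFramework client = CuratorFrameworkFactory.newClient(
                      "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
              client.start();

              // one znode path per guarded structure, e.g. the queue held in the cache
              InterProcessMutex lock = new InterProcessMutex(client, "/locks/work-queue");
              if (lock.acquire(10, TimeUnit.SECONDS)) {
                  try {
                      // mutate the queue/data structure stored in Redis or memcached here
                  } finally {
                      lock.release();
                  }
              }
              client.close();
          }
      }

    The queue contents themselves would still live in the cache; only the coordination goes through ZooKeeper, which keeps each znode well under the 1 MB guideline.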

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations that are adopting SQL Server 2012, they are excited about the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern and important concepts like quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern and important concepts like asymmetric storage considerations, quorum model selection, quorum votes, steps required to build the environment, and a workflow. Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the two white papers above for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
    Ubuntu 12.04 LTS, all updates current as of 7/3/2012 gksudo gparted Shrink /dev/sda2 from 367GB to 307GB GParted 0.11.0 --enable-libparted-dmraid Libparted 2.3 Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB 00:32:57 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 20,484,096 end: 790,142,975 size: 769,658,880 (367.00 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:53 ( SUCCESS ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) Checking for bad sectors ... Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Estimating smallest shrunken size supported ... File feature Last used at By inode $MFT : 389998 MB 0 Multi-Record : 394061 MB 386464 $MFTMirr : 314823 MB 1 Compressed : 394064 MB 1019521 Sparse : 330887 MB 752454 Ordinary : 393297 MB 706060 You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB). Please make a test run using both the -n and -s options before real resizing! shrink file system 00:32:04 ( ERROR ) run simulation 00:32:04 ( ERROR ) ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action ntfsresize v2012.1.15AR.1 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 394065338880 bytes (394066 MB) Current device size: 394065346560 bytes (394066 MB) New volume size : 329640829440 bytes (329641 MB) Checking filesystem consistency ... Accounting clusters ... Space in use : 327950 MB (83.2%) Collecting resizing constraints ... Needed relocations : 13300525 (54479 MB) Schedule chkdsk for NTFS consistency check at Windows boot time ... Resetting $LogFile ... (this might take a while) Relocating needed data ... Updating $BadClust file ... Updating $Bitmap file ... ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error ======================================== Windows has run chkdsk successfully (on boot) several times now
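
    One commonly suggested next step for this particular ntfsresize error - not a guaranteed fix - is to let Windows repair the filesystem completely and then repeat the dry run the tool asks for (the drive letter for /dev/sda2 is assumed):

      REM from an elevated Windows command prompt
      chkdsk C: /f /r

      # back in Ubuntu, repeat the recommended test run before any real resize
      sudo ntfsresize -P --no-action -s 329640837119 /dev/sda2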

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

    - by Tamarick Hill
    The last DMV for this month-long blog series is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against the Training instance that we have been using for this blog series. SELECT * FROM sys.dm_server_services The first column returned by this DMV is the actual service name. The next columns are the startup_type and startup_type_desc columns, which display your chosen method for how a particular service should be started. The next columns, status and status_desc, display the current status of each of the services on the instance. The process_id column represents the server process ID. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to control the service. The filename column gives you the full path to the executable for the service. Lastly, we have the is_clustered column and the cluster_nodename column, which indicate whether or not a particular service is clustered and part of a resource cluster group, and if so, the cluster node that the service is installed on. This is a good DMV to provide you with a quick snapshot view of the current SQL Server services you have on your instance. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/hh204542.aspx Follow me on Twitter @PrimeTimeDBA
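
    A narrower query that returns just the columns discussed above, so the result fits on one screen (column list taken from the description; exact availability can vary slightly between SQL Server builds):

      SELECT servicename,
             startup_type_desc,
             status_desc,
             process_id,
             last_startup_time,
             service_account,
             filename,
             is_clustered,
             cluster_nodename
      FROM sys.dm_server_services;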

    Read the article

  • Grid Infrastructure Management Repository (GIMR) database now mandatory in Oracle GI 12.1.0.2

    - by Mike Dietrich
    During the installation of Oracle Grid Infrastructure 12.1.0.1 you had the following option to choose YES/NO to install the Grid Infrastructure Management Repository (GIMR) database MGMTDB: With Oracle Grid Infrastructure 12.1.0.2 this choice has become obsolete and the above screen does not appear anymore. The GIMR database has become mandatory. What gets stored in the GIMR? See the documentation here. See the changes in Oracle Clusterware 12.1.0.2 here: Automatic Installation of Grid Infrastructure Management Repository The Grid Infrastructure Management Repository is automatically installed with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). The Grid Infrastructure Management Repository enables such features as Cluster Health Monitor, Oracle Database QoS Management, and Rapid Home Provisioning, and provides a historical metric repository that simplifies viewing of past performance and diagnosis of issues. This capability is fully integrated into Oracle Enterprise Manager Cloud Control for seamless management. Furthermore, what the doc doesn't say explicitly: The -MGMTDB has now become a single-tenant deployment, with a CDB holding one PDB. This will allow the use of a Utility Cluster that can hold the CDB for a collection of GIMR PDBs. If you already had an Oracle 12.1.0.1 GIMR, this database will be destroyed and recreated. Preserving the CHM/OS data can be achieved with OCLUMON by dumping it out into node view. The data files associated with it will be created within the same disk group as OCR and VOTING. In a future release there may be an option offered to put it into a separate disk group. Some important MOS Notes: MOS Note 1568402.1 FAQ: 12c Grid Infrastructure Management Repository, states there's no supported procedure to enable the Management Database once the GI stack is configured. MOS Note 1589394.1 How to Move GI Management Repository to Different Shared Storage (shows how to delete and recreate the MGMTDB). MOS Note 1631336.1 Cannot delete Management Database (MGMTDB) in 12.1 -Mike
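
    To see where the now-mandatory MGMTDB landed on a given cluster, the usual srvctl checks apply (a sketch; run as the Grid Infrastructure owner):

      $ srvctl status mgmtdb     # shows which node the -MGMTDB instance is running on
      $ srvctl config mgmtdb     # shows its configuration, including the disk group used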

    Read the article

  • links for 2010-12-15

    - by Bob Rhubart
    Pravin Janardanam: Security in OBIEE 11g, Part 1 Guest blogger Pravin Janardanam kicks off a two-part series in which he tackles the differences in security between OBIEE 11g and 10g, and provides some hints on security migration from a 10g environment. (tags: oracle otn businessintelligence obiee) HttpClusterServlet Configuration (Weblogic Server Acting as a Proxy) Quick tips from Divay Dureja. (tags: oracle weblogic servlet configuration) Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration "The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components." - from the white paper. (tags: oracle otn oraclevm virtualization) A SOA Safari (Antony Reynolds' Blog) SOA author Antony Reynolds shares links to some of his favorite SOA titles available for reading on Safari. (tags: oracle otn soa) Using Crossbow and Solaris 11 Express Zones for a single machine proof of concept environment with Puppet "My last blog entry was about my debugging experience with Puppet and promise to share the setup that I used. I now follow up that previous entry with this one which describes my Crossbow + NAT + S11 Zones proof of concept." - Michael Tin (tags: oracle solaris crossbow) @myfear: One thing you did not know about Java EE class loading in GlassFish 2.x "Be careful migrating apps from one app server to the other. And don't expect to have a strong hierarchical class loader in place. That is especially true for GF 2.x class loading." Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java glassfish weblogic)

    Read the article

  • What's New in 5.6 RC and more from MySQL Connect conference

    - by Rob Young
    Keeping with the tradition of great MySQL community events, the first annual MySQL Connect conference is now in the books. It was great to see so many familiar faces in the crowd and at the podium sharing their ideas and thoughts on the evolution of MySQL under Oracle. The headliner of the conference was Tomas' keynote announcement of the fully featured and fully enabled MySQL 5.6 Release Candidate. This new article on the MySQL DevZone summarizes all of the great new features ready for community adoption, all MySQL Engineering blogs, and where and how to download all of the bits. As always, early adoption and feedback on the 5.6 RC is appreciated, and the sooner we get your feedback the sooner we release the "ready for production" sanctioned GA product. Also available now, Cluster 7.3 provides support for foreign keys, node.js NoSQL access to the underlying data, and a new Auto Installer that helps you quickly and easily get up and running with Cluster 7.2 and 7.3. The 7.3 downloads are provided in the first 7.3 Development Milestone Release (under the "Development Releases" tab) and via the MySQL Labs. Oracle also announced key new additions to MySQL Enterprise Edition: New policy-based compliance auditing. MySQL Enterprise Edition Audit adds policy-based auditing compliance to existing MySQL applications without the need to change any code. This new plugin is available for MySQL 5.5.28 and higher; existing MySQL Enterprise Edition customers can download the upgrade from the My Oracle Support portal, and all can download it for evaluation from Oracle's Software Delivery Cloud. New MySQL Enterprise High Availability additions provide even more options for ensuring MySQL applications remain available and running at their peak: Oracle Linux + DRBD, and Oracle Solaris Clustering for MySQL. All in all, the first MySQL Connect conference was a great success, and with refinements planned in response to attendee, sponsor and speaker feedback we expect it to grow and improve going forward. As always, thanks for your continued support of MySQL!

    Read the article

  • Email Alias [email protected] Replaced with New Oracle Certification Support Tool

    - by Paul Sorensen
    All Oracle Certification customer service issues previously sent to [email protected], [email protected], [email protected], or [email protected] should now be submitted as service requests via the new request tool. Support via these email aliases ends today. Managing candidate communications via this tool will enable better issue-tracking capabilities and ensure that all issues are handled quickly and efficiently. The integrated tool will also help us to more easily research historical and related issues to enable improved certification communications and business processes. For now, questions related to a Java, Oracle Solaris (Cluster), MySQL, NetBeans or OpenOffice.org exam or certification will still be sent to [email protected] and resolved via email. Questions related to the status of an Oracle Certification Success Kit will still be sent to [email protected] and resolved via email. We are excited about this new offering and continue to work toward improved customer service for our OCP community. Thank you for your cooperation! Quick View of Oracle Certification Customer Support Oracle Certification Support: All issues that previously would have been sent to [email protected] [email protected]: All questions on Java, Oracle Solaris (Cluster), MySQL, NetBeans, OpenOffice.org exams and certifications [email protected]: All questions on the status of your Oracle Certification Success Kit

    Read the article
