Search Results

Search found 3295 results on 132 pages for 'solaris cluster'.

Page 21/132 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Oracle Solaris 11 Cheat Sheet

    - by Markus Weber
    Need to quickly know, or be reminded about, how to create network configuration profiles in Oracle Solaris 11? How to configure VLANs? How to manipulate zones? How to use ZFS shadow migration? To have those answers, and many more, neatly in front of you, we created this cheat sheet (PDF). It was originally developed by Joerg Moellenkamp, author of the very popular blog c0t0d0s0.org and of "Less Known Solaris Features"; more people at Oracle then jumped in and added many more useful commands to it. It may keep evolving, so keep checking! The link to it can also be found on our new Oracle Solaris Evaluation page.
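    For a taste of the kind of one-liners the cheat sheet collects, here is a minimal sketch of typical Oracle Solaris 11 commands for the topics above (not taken from the cheat sheet itself; interface, zone and dataset names such as net0, myzone and rpool/export/data are placeholders):

        # Network configuration profiles (NWAM): list profiles and enable one
        netadm list
        netadm enable -p ncp Automatic

        # Create a VLAN with tag 100 on top of the net0 datalink
        dladm create-vlan -l net0 -v 100 vlan100

        # Zones: list configured/installed zones, then create a new configuration
        zoneadm list -cv
        zonecfg -z myzone create

        # ZFS shadow migration: populate a new dataset from an existing NFS share
        zfs create -o shadow=nfs://oldhost/export/data rpool/export/data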

    Read the article

  • 20 Years of Solaris - 25 Years of SPARC!

    - by Stefan Hinker
    I don't usually duplicate what can be found elsewhere, but this is worth an exception. 20 Years of Solaris - guess who got all those innovation awards! 25 Years of SPARC - and the future has just begun :-) Check out those pages for some links pointing to the past and, more interestingly, to the future... There are also some nice videos: 20 Years of Solaris - 25 Years of SPARC. (Come to think of it - I got to be part of all but the first 4 years of Solaris. I must be getting older...)

    Read the article

  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on Stack Overflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A
    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.
    B) Each broker will have 2 listeners and 1 publisher. Example figure: [subscriber A1, subscriber A2, publisher A1] <-- Broker-A <-- Broker-B <-- [publisher B1, subscriber B1, subscriber B2]
    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is Clustering the Correct Approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers or the Competing Consumers pattern, and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the Correct Approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster" and found these two informative, but seemingly conflicting, posts:
    Says EXAMPLE MODEL A is/should be implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers - "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."
    Says EXAMPLE MODEL A IS NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message - "All the instances of PropertiesSubscriber running on different app servers WILL get that message."
    Any suggestions would be greatly appreciated. Thanks very much for reading my post, Gene

    Read the article

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello, At the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, DNS server and DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no-one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster and load balancing? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Excel process not ending in Cluster environment

    - by Vasanth
    When we try to close the Excel object, it fails to close in the cluster environment. The same code works fine in the QA and UAT environments.

    public bool KillExcelProcess()
    {
        try
        {
            object misValue = System.Reflection.Missing.Value;
            wbObj.Save();
            wbObj.Close(true, misValue, misValue);
            appC.Workbooks.Close();
            appC.Quit();
            // Release the COM references so the Excel process can exit
            System.Runtime.InteropServices.Marshal.ReleaseComObject(objSheet);
            System.Runtime.InteropServices.Marshal.ReleaseComObject(wbObj);
            System.Runtime.InteropServices.Marshal.ReleaseComObject(appC);
            wbObj = null;
            appC = null;
        }
        catch (Exception ex)
        {
            //throw ex;
        }
        finally
        {
            System.Threading.Thread.Sleep(5000);
            GC.Collect();
        }
        return true;
    }

    Calling function:

    try
    {
        log.Info("CloseExcelService (MeasureSavingsComputeBO) Starts ...");
        exConverter.KillExcelProcess();
        while (true)
        {
            try
            {
                File.Delete(strFilename);
                break;
            }
            catch (Exception ex) { }
        }

    Read the article

  • JBoss 6 Clustered Singleton

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs. So far we successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration:

    Bean interface:

    @Remote
    public interface IUniverse {
        ...
    }

    Bean implementation:

    @Clustered
    @Stateful
    public class Universe implements IUniverse {
        private static Vector<String> messages = new Vector<String>();
        ...
    }

    jboss-beans.xml configuration:

    <deployment xmlns="urn:jboss:bean-deployer:2.0">
        <!-- This bean is an example of a clustered singleton -->
        <bean name="Universe" class="Universe">
        </bean>
        <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
            <property name="HAPartition"><inject bean="HAPartition"/></property>
            <property name="target"><inject bean="Universe"/></property>
            <property name="targetStartMethod">startSingleton</property>
            <property name="targetStopMethod">stopSingleton</property>
        </bean>
    </deployment>

    The main problem with this implementation is that, after the master node (the one that holds the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to its default. Please note that everything was constructed following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 documentation on clustering, is appreciated.

    Read the article

  • Installing Solaris with Windows. Disk partitioning problem.

    - by RishiPatel
    I am running Windows and want to install Solaris, but I'm having a hard time with the installer. I have 1 primary partition of 40GB and one extended partition. The extended partition has 4 logical drives. The Solaris disk management window shows only two partitions: one of 40GB (primary) and the extended partition. Can I convert a logical drive into a primary partition (I have a free one of 25 GB)? Please look at the screenshot of the Windows disk management utility: http://img195.imageshack.us/img195/2005/20726948.jpg Is there any way to install Solaris without reformatting and repartitioning the whole drive? In case it is not possible, how should I partition, and using which utility?

    Read the article

  • How to configure mod_cluster, httpd, and JBoss 7.1.1?

    - by user180157
    I am trying to develop a clustered environment with JBoss 7.1.1. For the mod_cluster setup I have done the following steps: downloaded httpd 2.4 and placed it at C:\Apache24, then placed the following .so files in C:\Apache24\modules: mod_proxy.so, mod_proxy_ajp.so, mod_slotmem.so, mod_manager.so, mod_proxy_cluster.so, mod_advertise.so. When I run httpd.exe from C:\Apache24\bin in cmd, I get the error below and Apache does not start. httpd.exe: Syntax error on line 528 of C:/Apache24/conf/httpd.conf: Cannot load modules/mod_slotmem.so into server: The specified procedure could not be found. Could anyone provide the solution? Thanks

    Read the article

  • OpenSolaris livecd, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do, I would like to test that everything works, and I am running into problems. It has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs just in case. Now the fun part: the only way to get them onto the computer is via a USB drive, since I can't access the network. Also, the install CD is in the drive; otherwise I'd just burn them to DVD. Since my USB key is NTFS-formatted, I cannot mount it: the install CD seems to be lacking NTFS drivers, which require more downloaded packages. What should I do? The server will simply be a dumb NAS, and I know that there exist other OpenSolaris-based flavors such as Nexenta, but from what I read the stock install is likely the best. If this is not the case and pursuing a different flavor is required or better, I will also accept that as an answer (but please don't jump straight to it).

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

        pool: tank
       state: UNAVAIL
      status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning.
      action: Attach the missing device and online it using 'zpool online'.
         see: http://www.sun.com/msg/ZFS-8000-3C
        scan: none requested
      config:

          NAME       STATE     READ WRITE CKSUM
          tank       UNAVAIL      0     0     0  insufficient replicas
            c10t0d0  ONLINE       0     0     0
            c8t0d0   UNAVAIL      0     0     0  cannot open
            c8t1d0   ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

      # lucreate -n s10u9-baseline
      Checking GRUB menu...
      System has findroot enabled GRUB
      Analyzing system configuration.
      Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
      Determining which file systems should be in the new boot environment.
      Updating boot environment description database on all BEs.
      Updating system configuration files.
      Creating configuration for boot environment .
      Source boot environment is .
      Creating boot environment .
      Creating file systems on boot environment .
      Creating file system for in zone on .

      The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs

      Populating file systems on boot environment .
      Checking selection integrity.
      Integrity check OK.
      Populating contents of mount point .

      This should not happen ------ Copying.

      Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or, the one that alerted me to the problem, the root file system filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

      # patchadd -p | grep 121431
      Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
      Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
      # unzip 121431-58
      # patchadd 121431-58
      Validating patches...
      Loading patches installed on the system...
      Done!
      Loading patches requested to install.
      Done!
      Checking patches that you specified for installation.
      Done!
      Approved patches will be installed in this order:
      121431-58
      Checking installed patches...
      Executing prepatch script...
      Installing patch packages...
      Patch 121431-58 has been successfully installed.
      See /var/sadm/patch/121431-58/log for details
      Executing postpatch script...
      Patch packages installed:
        SUNWlucfg
        SUNWlur
        SUNWluu
      # lucreate -n s10u9-baseline
      Checking GRUB menu...
      System has findroot enabled GRUB
      Analyzing system configuration.
      INFORMATION: Unable to determine size or capacity of slice .
      Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
      Determining which file systems should be in the new boot environment.
      INFORMATION: Unable to determine size or capacity of slice .
      Updating boot environment description database on all BEs.
      Updating system configuration files.
      Creating configuration for boot environment .
      Source boot environment is .
      Creating boot environment .
      Cloning file systems from boot environment to create boot environment .
      Creating snapshot for on .
      Creating clone for on .
      Setting canmount=noauto for in zone on .
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
      File propagation successful
      Copied GRUB menu from PBE to ABE
      No entry for BE in GRUB menu
      Population of boot environment successful.
      Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

      # cat /etc/lu/ICF.3
      s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
      s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
      s10u8-baseline:/vbox:pandora/vbox:zfs:0
      s10u8-baseline:/setup:pandora/setup:zfs:0
      s10u8-baseline:/export:pandora/export:zfs:0
      s10u8-baseline:/pandora:pandora:zfs:0
      s10u8-baseline:/panroot:panroot:zfs:0
      s10u8-baseline:/workshop:pandora/workshop:zfs:0
      s10u8-baseline:/export/iso:pandora/iso:zfs:0
      s10u8-baseline:/export/home:pandora/home:zfs:0
      s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
      s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces a new autoregistration file

    This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

      # luupgrade -u -s /mnt -n s10u9-baseline
      System has findroot enabled GRUB
      No entry for BE in GRUB menu
      Copying failsafe kernel from media.
      61364 blocks
      miniroot filesystem is
      Mounting miniroot at
      ERROR: The auto registration file does not exist or incomplete.
      The auto registration file is mandatory for this upgrade.
      Use -k argument along with luupgrade command.
      autoreg_file is path to auto registration information file.
      See sysidcfg(4) for a list of valid keywords for use in this file.
      The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
      For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

      # echo "autoreg=disable" > /var/tmp/no-autoreg
      # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
      System has findroot enabled GRUB
      No entry for BE in GRUB menu
      Copying failsafe kernel from media.
      61364 blocks
      miniroot filesystem is
      Mounting miniroot at
      #######################################################################
      NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
      You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
      For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg.
      INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information
      autoreg=disable
      #######################################################################
      Validating the contents of the media .
      The media is a standard Solaris media.
      The media contains an operating system upgrade image.
      The media contains version .
      Constructing upgrade profile to use.
      Locating the operating system upgrade program.
      Checking for existence of previously scheduled Live Upgrade requests.
      Creating upgrade profile for BE .
      Checking for GRUB menu on ABE .
      Saving GRUB menu on ABE .
      Checking for x86 boot partition on ABE.
      Determining packages to install or upgrade for BE .
      Performing the operating system upgrade of the BE .
      CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Read the article

  • Learn More About the Scalability, Uptime, and Agility of MySQL Cluster

    - by Antoinette O'Sullivan
    Learn more about the uncompromising scalability, uptime, and agility of MySQL Cluster by taking the authentic MySQL Cluster training course. During this three-day class, you will learn how to properly configure and manage the cluster nodes to ensure high availability, how to install the different nodes, and get a better understanding of the internals of the cluster. Events currently on the schedule for this class include:

        Location                   Date              Delivery Language
        Wien, Austria              4 February 2013   German
        London, England            12 June 2013      English
        Rennes, France             26 February 2013  French
        Hamburg, Germany           21 January 2013   German
        Munich, Germany            10 June 2013      German
        Stuttgart, Germany         26 March 2013     German
        Budapest, Hungary          19 June 2013      Hungarian
        Milan, Italy               4 February 2013   Italian
        Warsaw, Poland             18 March 2013     Polish
        Barcelona, Spain           4 March 2013      Spanish
        Madrid, Spain              25 February 2013  Spanish
        Chicago, United States     27 March 2013     English
        Reston, United States      6 February 2013   English
        Jakarta, Indonesia         21 January 2013   English
        Singapore                  18 February 2013  English

    To register for an event or to see further details on this or other courses in the authentic MySQL curriculum, please go to http://oracle.com/education/mysql.

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available, but I did have a Solaris box, which I use for Oracle VDI, available. I set up a few iscsi targets on this Solaris server and exported them to my 2 Oracle VM servers. Here's how I did this:

    (1) On the Solaris side:

      # zpool list
      NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
      rpool   544G   129G   415G  23%  ONLINE  -

    I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs. So I will use rpool and I will create 5 50gb vols:

      zfs create -V 50G rpool/ovm1
      zfs create -V 50G rpool/ovm2
      zfs create -V 50G rpool/ovm3
      zfs create -V 50G rpool/ovm4
      zfs create -V 50G rpool/ovm5

    I want to use these volumes for iscsi so I have to enable them as shared iscsi devices:

      zfs set shareiscsi=on rpool/ovm1
      zfs set shareiscsi=on rpool/ovm2
      zfs set shareiscsi=on rpool/ovm3
      zfs set shareiscsi=on rpool/ovm4
      zfs set shareiscsi=on rpool/ovm5

    The command iscsitadm list target should list these devices, so make sure they show up.

      # iscsitadm list target
      Target: rpool/ovm1
          iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36
          Connections: 2
      Target: rpool/ovm2
          iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069
          Connections: 2
      Target: rpool/ovm3
          iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1
          Connections: 2
      Target: rpool/ovm4
          iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931
          Connections: 2
      Target: rpool/ovm5
          iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43
          Connections: 2

    At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. Get the iqn from the 2 servers: on my 2 ovm servers (wcoekaer-srv1, wcoekaer-srv2), get the content of /etc/iscsi/initiatorname.iscsi (for each server):

      InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff

    On the Solaris side create the aliases:

      iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1
      iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5

    Add the ACL to the targets:

      iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1
      iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2
      iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3
      iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4
      iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5
      iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1
      iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2
      iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3
      iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4
      iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5

    (2) On the Oracle VM side, on each server just do 2 simple things:

      # iscsiadm -m discovery -t sendtargets -p ca-vdi1
      # service iscsi restart

    where ca-vdi1 is my Solaris server name. When I do cat /proc/partitions on my servers I will see the devices show up:

      # cat /proc/partitions
      major minor  #blocks  name
      8     0   160836480  sda
      8     1      104391  sda1
      8     2     3148740  sda2
      8     3     1052257  sda3
      253   0     6377804  dm-0
      253   1     6377804  dm-1
      253   2     6377804  dm-2
      8    16    52428800  sdb
      8    32    52428800  sdc
      8    48    52428800  sdd
      8    80    52428800  sdf
      8    64    52428800  sde

    These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VMs as phy: devices, or you can put ocfs2 on them and use them as shared filesystem storage for dom0 repositories. I am setting up an 11gR2 RAC template (the cool stuff Saar did) so I am using my devices to create a 2-node RAC cluster with phy: devices.
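    As a purely illustrative follow-up (not part of the original post; the device names, filesystem label and mount point below are assumptions), using one of these LUNs from dom0 could look roughly like this:

      # Shared-filesystem route: put OCFS2 on a LUN for dom0 repository storage
      # (assumes the OCFS2 cluster stack, o2cb, is already configured on both servers)
      mkfs.ocfs2 -L ovmrepo /dev/sdb
      mount -t ocfs2 /dev/sdb /OVS

      # Pass-through route: hand a LUN straight to a guest as a phy: device
      # by adding a disk entry like this to the guest's vm.cfg:
      #   disk = [ 'phy:/dev/sdc,xvdb,w' ]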

    Read the article

  • Jumpstart your MySQL Cluster Knowledge

    - by Antoinette O'Sullivan
    Join companies in the web, gaming, telecoms and mobile areas by learning about MySQL Cluster's distributed, shared-nothing, real-time design. The 3-day MySQL Cluster course teaches you how to configure and manage the cluster nodes to ensure high availability. Learn how to install the different nodes and understand cluster internals. Here is a sample of some events on the schedule for this course:

        Location                   Date               Delivery Language
        Wien, Austria              4 February, 2013   German
        Prague, Czech Republic     10 December, 2012  Czech
        London, England            12 December, 2012  English
        Hamburg, Germany           21 January, 2013   German
        Stuttgart, Germany         26 March, 2013     German
        Budapest, Hungary          4 December, 2012   Hungarian
        Warsaw, Poland             10 December, 2012  Polish
        Lisbon, Portugal           3 December, 2012   European Portuguese
        Barcelona, Spain           19 November, 2012  Spanish
        Madrid, Spain              25 February, 2013  Spanish
        Jakarta, Indonesia         21 January, 2013   English
        Singapore                  29 October, 2012   English
        Chicago, United States     27 March, 2013     English
        Reston, United States      6 February, 2013   English

    For more information on the authentic MySQL curriculum go to http://oracle.com/education/mysql

    Read the article

  • gpfs: adding a new nsd server to a cluster

    - by alessandra
    I have a GPFS cluster composed of 10 Linux nodes, managed by a primary server A, which also acts as the NSD server for a first stack of disks. I attached a new JBOD to one of the nodes (call it node B), which I would like to become an NSD server for this new stack of disks, while still being included in the cluster so that the disks are available to all the nodes. Node B is connected to the cluster via ethernet. How can I make the new NSDs visible to all the nodes of the cluster? I can create the new NSDs, but when I try to create the filesystem on node B the command mmcrfs times out. It looks like the nodes of the cluster cannot resolve the filesystem location, even though I specify the disks as attached to server B in the description file. Would it be better to remove node B from the cluster, create a cluster on its own with its attached filesystem, and connect it remotely to the previous cluster? Or would a clustered NFS solution be a better fit? Can you please give me any suggestions?

    Read the article

  • 11gR2 New Feature: Oracle Cluster Health Monitor (CHM)

    - by JaneZhang
    Cluster Health Monitor (CHM) is a tool provided by Oracle that collects operating system resource statistics (CPU, memory, swap, processes, I/O, network, and so on) in real time on every cluster node. When you run into problems such as node reboots or evictions, instance hangs or evictions, or performance degradation, the data collected by CHM can help you analyze how the operating system resources were being consumed at the time of the problem.

    CHM is installed by default with the following releases:

      11.2.0.2 and later: Oracle Grid Infrastructure for Linux (excluding Linux Itanium) and Solaris (SPARC 64 and x86-64)
      11.2.0.3 and later: also Oracle Grid Infrastructure for AIX and Windows (excluding Windows Itanium)

    On a cluster where CHM is installed, you can see the corresponding resource (ora.crf):

      $ crsctl stat res -t -init
      --------------------------------------------------------------------------------
      NAME           TARGET  STATE        SERVER                   STATE_DETAILS
      Cluster Resources
      ora.crf        ONLINE  ONLINE       rac1

    CHM consists mainly of two services:

    1) System Monitor Service (osysmond): runs on every node. osysmond collects the operating system statistics on its node and sends them to the Cluster Logger Service, which stores the data in the CHM repository.

      $ ps -ef|grep osysmond
      root      7984     1  0 Jun05 ?        01:16:14 /u01/app/11.2.0/grid/bin/osysmond.bin

    2) Cluster Logger Service (ologgerd): within the cluster, one node runs the master ologgerd and another node runs a standby. ologgerd receives the data collected on each node and writes it into the CHM repository.

      Master:
      $ ps -ef|grep ologgerd
      root      8257     1  0 Jun05 ?        00:38:26 /u01/app/11.2.0/grid/bin/ologgerd -M -d /u01/app/11.2.0/grid/crf/db/rac2

      Standby:
      $ ps -ef|grep ologgerd
      root      8353     1  0 Jun05 ?        00:18:47 /u01/app/11.2.0/grid/bin/ologgerd -m rac2 -r -d /u01/app/11.2.0/grid/crf/db/rac1

    CHM Repository: the store for the collected data. By default it is located under the Grid Infrastructure home, takes about 1 GB of disk space, and needs roughly an additional 0.5 GB per node. You can use OCLUMON to change the repository location and size (and therefore the retention, which by default covers about 3 days of data).

    To check the current repository path and size:

      $ oclumon manage -get reppath
      CHM Repository Path = /u01/app/11.2.0/grid/crf/db/rac2
      Done

      $ oclumon manage -get repsize
      CHM Repository Size = 68082        <==== retention, in seconds
      Done

    To change the repository path:

      $ oclumon manage -repos reploc /shared/oracle/chm

    To change the repository size:

      $ oclumon manage -repos resize 68083   <== must be between 3600 (1 hour) and 259200 (3 days)
      rac1 --> retention check successful
      New retention is 68083 and will use 1073750609 bytes of disk space
      CRS-9115-Cluster Health Monitor repository size change completed on all nodes.
      Done

    There are two ways to collect the data gathered by CHM:

    1. Use Grid_home/bin/diagcollection.pl:

      1) First, determine which node is running the master Cluster Logger Service:
         $ oclumon manage -get master
         Master = rac2

      2) As root, run the following command on that node (rac2 here):
         # <Grid_home>/bin/diagcollection.pl -collect -chmos -incidenttime inc_time -incidentduration duration
         inc_time is the time the problem started, in the format MM/DD/YYYY24HH:MM:SS, and duration is how long a period, starting at inc_time, should be collected.
         For example: # diagcollection.pl -collect -crshome /u01/app/11.2.0/grid -chmoshome /u01/app/11.2.0/grid -chmos -incidenttime 06/15/201215:30:00 -incidentduration 00:05

      3) When it finishes, the CHM data is packaged into a file such as chmosData_rac2_20120615_1537.tar.gz.

    2. Query the CHM repository directly with oclumon:

      $ oclumon dumpnodeview [[-allnodes] | [-n node1 node2] [-last "duration"] | [-s "time_stamp" -e "time_stamp"] [-v] [-warning]] [-h]
      -s specifies the start time and -e the end time, for example:

      $ oclumon dumpnodeview -allnodes -v -s "2012-06-15 07:40:00" -e "2012-06-15 07:57:00" > /tmp/chm1.txt
      $ oclumon dumpnodeview -n node1 node2 node3 -last "12:00:00" >/tmp/chm1.txt
      $ oclumon dumpnodeview -allnodes -last "00:15:00" >/tmp/chm1.txt

    A portion of /tmp/chm1.txt looks like this:

      ----------------------------------------
      Node: rac1 Clock: '06-15-12 07.40.01' SerialNo:168880
      ----------------------------------------
      SYSTEM:
      #cpus: 1 cpu: 17.96 cpuq: 5 physmemfree: 32240 physmemtotal: 2065856 mcache: 1064024 swapfree: 3988376 swaptotal: 4192956 ior: 57 iow: 59 ios: 10 swpin: 0 swpout: 0 pgin: 57 pgout: 59 netr: 65.767 netw: 34.871 procs: 183 rtprocs: 10 #fds: 4902 #sysfdlimit: 6815744 #disks: 4 #nics: 3 nicErrors: 0
      TOP CONSUMERS:
      topcpu: 'mrtg(32385) 64.70' topprivmem: 'ologgerd(8353) 84068' topshm: 'oracle(8760) 329452' topfd: 'ohasd.bin(6627) 720' topthread: 'crsd.bin(8235) 44'
      PROCESSES:
      name: 'mrtg' pid: 32385 #procfdlimit: 65536 cpuusage: 64.70 privmem: 1160 shm: 1584 #fd: 5 #threads: 1 priority: 20 nice: 0
      name: 'oracle' pid: 32381 #procfdlimit: 65536 cpuusage: 0.29 privmem: 1456 shm: 12444 #fd: 32 #threads: 1 priority: 15 nice: 0
      ...
      name: 'oracle' pid: 8756 #procfdlimit: 65536 cpuusage: 0.0 privmem: 2892 shm: 24356 #fd: 47 #threads: 1 priority: 16 nice: 0
      ----------------------------------------
      Node: rac2 Clock: '06-15-12 07.40.02' SerialNo:168878
      ----------------------------------------
      SYSTEM:
      #cpus: 1 cpu: 40.72 cpuq: 8 physmemfree: 34072 physmemtotal: 2065856 mcache: 1005636 swapfree: 3991808 swaptotal: 4192956 ior: 54 iow: 104 ios: 11 swpin: 0 swpout: 0 pgin: 54 pgout: 104 netr: 77.817 netw: 33.008 procs: 178 rtprocs: 10 #fds: 4948 #sysfdlimit: 6815744 #disks: 4 #nics: 4 nicErrors: 0
      TOP CONSUMERS:
      topcpu: 'orarootagent.bi(8490) 1.59' topprivmem: 'ologgerd(8257) 83108' topshm: 'oracle(8873) 324868' topfd: 'ohasd.bin(6744) 720' topthread: 'crsd.bin(8362) 47'
      PROCESSES:
      name: 'oracle' pid: 9040 #procfdlimit: 65536 cpuusage: 0.19 privmem: 6040 shm: 121712 #fd: 33 #threads: 1 priority: 16 nice: 0
      ...

    For more information about CHM, refer to the Oracle documentation:
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/troubleshoot.htm#CWADD92242
    Oracle Clusterware Administration and Deployment Guide, 11g Release 2 (11.2), Part Number E16794-17
    and the My Oracle Support note:
    Cluster Health Monitor (CHM) FAQ (Doc ID 1328466.1)

    Read the article

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering using MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast, where we will discuss how you can cut costs, add flexibility and benefit from the new performance and scalability enhancements that are now available in MySQL 5.6 and MySQL Cluster 7.2. We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink-wrapped and SaaS-provided applications, how industry leaders leverage MySQL products, and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

    Read the article

  • Building Private IaaS with SPARC and Oracle Solaris

    - by ferhat
    A superior enterprise cloud infrastructure with high-performing systems using built-in virtualization! We are happy to announce the expansion of the Oracle Optimized Solution for Enterprise Cloud Infrastructure with Oracle's SPARC T-Series servers and Oracle Solaris. Designed, tuned, tested and fully documented, the Oracle Optimized Solution for Enterprise Cloud Infrastructure now offers customers looking to upgrade, consolidate and virtualize their existing SPARC-based infrastructure a proven foundation for private cloud-based services, which can lower TCO by up to 81 percent(1). It offers faster time to service, reduces deployment time from weeks to days, and can increase system utilization to 80 percent. The Oracle Optimized Solution for Enterprise Cloud Infrastructure can also be deployed at up to 50 percent lower cost over five years than comparable alternatives(2). The expanded solution announced today combines Oracle's latest SPARC T-Series servers; Oracle Solaris 11, the first cloud OS; Oracle VM Server for SPARC; Oracle's Sun ZFS Storage Appliance; and Oracle Enterprise Manager Ops Center 12c, which manages all Oracle system technologies, streamlining cloud infrastructure management. Thank you to all who stopped by the Oracle booth at the CloudExpo Conference in New York. We were also at Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, discussing how this solution can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. Designed, tuned, and tested, the Oracle Optimized Solution for Enterprise Cloud Infrastructure is a complete foundation for cloud infrastructure or any virtualized environment, using proven, documented best practices for deployment and optimization. The solution addresses each layer of the infrastructure stack using Oracle's powerful SPARC T-Series as well as x86 servers, with storage, network, virtualization, and management configurations that provide a robust, flexible, and balanced foundation for your enterprise applications and databases. For more information visit Oracle Optimized Solution for Enterprise Cloud Infrastructure. Solution Brief: Accelerating Enterprise Cloud Infrastructure Deployments. White Paper: Reduce Complexity and Accelerate Enterprise Cloud Infrastructure Deployments. Technical White Paper: Enterprise Cloud Infrastructure on SPARC. (1) Comparison based on current SPARC server customers consolidating existing installations including Sun Fire E4900, Sun Fire V440 and SPARC Enterprise T5240 servers to latest generation SPARC T4 servers. Actual deployments and configurations will vary. (2) Comparison based on solution with SPARC T4-2 servers with Oracle Solaris and Oracle VM Server for SPARC versus HP ProLiant DL380 G7 with VMware and Red Hat Enterprise Linux and IBM Power 720 Express - Power 730 Express with IBM AIX Enterprise Edition and Power VM.

    Read the article

  • Webcast: Introducing the New Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    The Fastest Way to Virtualize Your Datacenter. Join our webcast "Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM", Tues., January 25, 2011, 9 a.m. PT / 12 p.m. ET, presented by Marc Shelley, Senior Manager, Oracle Blades Product Management, and Tom Lisjac, Senior Member, Oracle Technical Staff. Register now for our live webcast! The Oracle VM blade cluster reference configuration addresses the key challenges associated with deploying a virtualization infrastructure. It eliminates or significantly reduces, by up to 98%, the assembly and integration of the following components: servers, storage, network connections, virtualization software, and operating systems. Attend this fact-filled, how-to webcast with Oracle experts to learn the best practices for deploying the reference configuration for Oracle VM Server for x86 and Sun Blade and Sun Fire x86 rack mount servers. Virtualization is easier than ever with this new configuration. Register now for our live webcast! For more information, see: Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration; Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration.

    Read the article

  • Oracle SPARC and Solaris - Maximum Performance for Mission-Critical Applications

    - by Wesley Faria
    Hello everyone, I invite you all to watch the interview with Oracle's SPARC and Solaris specialist. It covers relevant topics such as Oracle's strategy for this product line, the roadmap and, of course, the benefits of using the Red Stack. In addition, there will be 3 presentations that go into more detail on SPARC, Solaris, and hardware/software integration. The interview will be available from September 20, 2012 at http://www.voit.com.br/NL/Oracle_SPARC/webinar_sparc.htm.

    Read the article

  • You're invited : Oracle Solaris Forum, June 19th, Petah Tikva

    - by Frederic Pariente
    The local ISV Engineering team will be attending and speaking at the Oracle and ilOUG Solaris Forum next week in Israel. Come meet us there! This free event requires registration, thanks.

    YOU'RE INVITED - Oracle Solaris Forum
    Date: Tuesday, June 19th, 2012
    Time: 14:00
    Location: Dan Academic Center, Petach Tikva, Israel
    Agenda: Enterprise Manager Ops Center and Oracle Exalogic Elastic Cloud; Solaris 11; Networking; Customer Case Study: BMC; Open Systems Curriculum

    See you there!

    Read the article

  • On Solaris, how do you mount a second zfs system disk for diagnostics?

    - by Matt Ball
    (Cross-posted from Stack Overflow.) I've got two hard disks in my computer, and have installed Solaris 10u8 on the first and OpenSolaris 2010.3 (dev onnv_134) on the second. Both systems use ZFS and were independently created with a zpool name of 'rpool'. While running Solaris 10u8 on the first disk, how do I mount the second ZFS hard disk (at /dev/dsk/c1d1s0) on an arbitrary mount point (like /a) for diagnostics?
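    A hedged sketch of the usual approach (not from the original post; the pool ID placeholder and the temporary name altrpool are illustrative): because both pools are named rpool, the second one is typically imported by its numeric pool ID under a temporary name and an alternate root, for example:

      # Show importable pools together with their numeric IDs
      zpool import

      # Import the second pool under a temporary name, rooted at /a
      # (-f may be needed because the pool was last in use by another system)
      zpool import -f -R /a <pool_id> altrpool

      # ... inspect the files under /a, then release the pool again
      zpool export altrpool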

    Read the article

  • Join Companies in Web and Telecoms by Adopting MySQL Cluster

    - by Antoinette O'Sullivan
    Join web and telecom companies who have adopted MySQL Cluster to facilitate applications in the following areas:

    Web: high-volume OLTP, eCommerce, user profile management, session management and caching, content management, on-line gaming.
    Telecoms: subscriber databases (HLR/HSS), service delivery platforms, VAS (VoIP, IPTV and VoD), mobile content delivery, mobile payments, LTE access.

    To come up to speed on MySQL Cluster, take the 3-day MySQL Cluster training course. Events already on the schedule include:

        Location                          Date              Delivery Language
        Berlin, Germany                   16 December 2013  German
        Munich, Germany                   2 December 2013   German
        Budapest, Hungary                 4 December 2013   Hungarian
        Madrid, Spain                     9 December 2013   Spanish
        Jakarta Barat, Indonesia          27 January 2014   English
        Singapore                         20 December 2013  English
        Bangkok, Thailand                 28 January 2014   English
        San Francisco, CA, United States  28 May 2014       English
        New York, NY, United States       17 December 2013  English

    For more information about this course or to request an additional event, go to the MySQL Curriculum Page (http://education.oracle.com/mysql).

    Read the article

  • CVE-2011-0419 Denial of Service (DoS) vulnerability in Solaris C Library

    - by chandan
    CVE-2011-0419: Denial of Service (DoS) vulnerability, CVSSv2 base score 4.3, in the C Library (libc).

        Product     Resolution
        Solaris 10  SPARC: 147713-01  X86: 147714-01
        Solaris 9   Contact Support

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, till now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format. Oracle Solaris Cluster 4.2 provides a new option tag argument RG_FAILOVER_LOG for the existing API command scha_cluster_get which can be used to list recent failover / switchover events for resource groups. The command usage is as shown below:

      # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered for scanning the historical logs. The command returns a list of events in the following format. Each field is separated by a semi-colon [;]:

      resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: node_names from which resource group is failed over or was switched manually.
    target_nodes: node_names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get() which uses the SCHA_RG_FAILOVER_LOG query tag. In the example below geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of Geographic Edition setup.

      # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
      geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
      geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
      oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
      asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
      asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
      oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
      geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
      geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
      asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
      asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
      oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
      oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
      oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some of the entries might have an empty string in the source_nodes. Such entries correspond to events in which the resource group is switched online manually or during a cluster boot-up. Similarly, an empty destination_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya
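    Since each record is a fixed set of semicolon-separated fields, the output is easy to post-process. As a small illustrative sketch (not part of the original article), the log can be pretty-printed with awk:

      # Columns: resource group, source nodes, target nodes, timestamp
      /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3 | \
          awk -F';' '{ printf "%-20s %-22s %-22s %s\n", $1, $2, $3, $4 }'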

    Read the article
