Search Results

Search found 7229 results on 290 pages for 'solaris tips'.

  • Solaris non-global zone X server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? I created the xorg.conf by running /usr/X11/bin/xorgconfig. This is what happens when I run startx from within my zone:
      root@foo:/usr/X11/bin# startx
      xauth: creating new authority file /root/.serverauth.20957
      X.Org X Server 1.5.3
      Release Date: 5 November 2008
      X Protocol Version 11, Revision 0
      Build Operating System: SunOS 5.11 snv_108 i86pc
      Current Operating System: SunOS dsol101 5.11 snv_111b i86pc
      Build Date: 07 May 2009 04:44:56PM
      Solaris ABI: 64-bit
      SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07
      SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02
      Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009
      (==) Using config file: "/etc/X11/xorg.conf"
      Fatal server error:
      xf86OpenConsole: Cannot open /dev/fb (No such file or directory)
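    The fatal error is expected in this setup: a non-global zone has no access to the console framebuffer, so /dev/fb does not exist there. A minimal sketch of one common workaround (not from the original thread, and assuming the X11 packages that provide Xvfb are installed in the zone) is to run a virtual framebuffer server instead of a hardware X server:
      # start a virtual framebuffer display inside the zone
      /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
      # point X clients at it
      DISPLAY=:1; export DISPLAY
      xterm &
    Viewing that display through VNC, or simply using SSH X11 forwarding to a remote display, are the usual ways to interact with it, since the zone can never drive the physical graphics device directly.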

    Read the article

  • external SCSI tape drive DAT 72 problem Solaris 10

    - by Hassan
    Hi all, I have Solaris 10 SPARC running and working very well, but I have a problem with an external SCSI DAT 72 tape drive. The tape drive seems to be manufactured by Sun Microsystems. When I run mt -f /dev/rmt/0 status I get the following output:
      bash-3.00# mt -f /dev/rmt/0 status
      /dev/rmt/0: No such file or directory
    When I run ls -l it shows the following:
      ls -l /dev/rmt/0
      lrwxrwxrwx 1 root root 43 Sep 20 2006 /dev/rmt/0 -> ../../devices/pci@8,600000/scsi@1,1/st@3,0:
    As far as I can tell everything is okay: the SCSI cable is connected properly to the tape device and to the server, and the SCSI termination dongle is attached to the tape device properly as well. Any ideas would be a great assist. Thanks in advance.
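    The symlink under /dev/rmt still exists, but the device node it points to does not, which usually means the st driver did not attach to the drive on this boot. A hedged sketch of the usual first checks (commands only, not from the original post):
      # is the tape target visible and configured on the SCSI controller?
      cfgadm -al
      # rebuild the /dev links for tape devices and clean up stale ones
      devfsadm -c tape
      devfsadm -C
      # then retry
      mt -f /dev/rmt/0 status
    If cfgadm does not show the target at all, the problem is below the OS (cabling, termination, SCSI ID, or the drive itself).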

    Read the article

  • Finding throughput of CPU and hard drive on Solaris

    - by Jim
    How do I find the throughput of the CPU and the hard disk on an OpenSolaris machine using mpstat or iostat? I'm having a hard time identifying the throughput, if it is given at all, in the commands' output. For example, in mpstat there is very little explanation as to what the columns mean: http://docs.sun.com/app/docs/doc/816-5166/mpstat-1m?l=en&a=view&q=syscl+mpstat I've been using the syscl column divided by the time interval to find the throughput, but to be honest I have no idea what a system call truly is. I'm trying to analyze the hard drive and CPU while writing a file to the disk and when at rest. Thanks in advance. Jim
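    Disk throughput is reported directly by iostat rather than derived from mpstat's syscl column, which only counts system calls per second and says nothing about data volume. A minimal sketch of how the numbers are usually collected (standard Solaris utilities, 5-second sampling chosen arbitrarily):
      # per-disk throughput: kr/s and kw/s are kilobytes read/written per second
      iostat -xnz 5
      # per-CPU utilisation: usr/sys/idl show where processor time is going
      mpstat 5
    Run both once while the file copy is in progress and again while the machine is idle, and compare the two sets of samples.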

    Read the article

  • Solaris cc segfaults when compiling rsync on intel platform

    - by PP
    I am trying to compile rsync-3.0.7 on Solaris 5.10 on an Intel chipset. When running ./configure I see the following (obviously erroneous) lines:
      checking size of int... 0
      checking size of long... 0
      checking size of long long... 0
      checking size of short... 0
      checking size of int16_t... 0
      checking size of uint16_t... 0
    In config.log I see the following lines:
      configure.sh:5448: /tool/sunstudio12.1/bin/cc -xc99=all -o conftest -g -DHAVE_CONFIG_H conftest.c >&5
      "conftest.c", line 123: warning: statement not reached
      cc: Fatal error in cc : Segmentation Fault
      configure.sh:5448: $? = 1
      configure.sh: program exited with status 1
    Segmentation fault? What could be causing a simple test script to segfault during compilation?
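    The sizes are reported as 0 because the compiler itself is crashing on the configure test programs, so every probe fails rather than returning a real answer. A hedged first step (not from the original post) is to rule out a broken Sun Studio installation by repeating the run with a different compiler, or with the same compiler and a clean set of flags:
      # try the same configure run with gcc, if one is installed
      CC=gcc ./configure
      # or keep Sun Studio but start from empty flags to see whether -xc99=all is the trigger
      CC=/tool/sunstudio12.1/bin/cc CFLAGS="" ./configure
    If cc also segfaults on a trivial hand-written hello.c, the Sun Studio installation (or its patch level) is the problem rather than rsync.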

    Read the article

  • How to Automatically Create Home Directory for Active Directory User on Solaris (using PowerBroker)

    - by neildeadman
    I have a number of existing users in Active Directory that need a home directory created. They don't log in directly to Solaris but into a service running on that box. If I log in as one of them, their home directory gets created and then they can log in. This is the same for new users too! As there are a lot of users, I need a way to automate this so that new and existing users have it created automatically. Is this possible?
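    One way to pre-create the directories in bulk is a small shell loop over every account the name service (here, PowerBroker) already resolves. This is only a sketch: the /export/home prefix, ownership, and permissions are assumptions to adapt, and it deliberately skips accounts whose home directories live elsewhere.
      #!/bin/sh
      # pre-create missing home directories for users visible through getent
      getent passwd | awk -F: '{print $1 " " $6}' | while read user home; do
        case "$home" in
          /export/home/*)                 # assumed prefix for the AD users
            if [ ! -d "$home" ]; then
              mkdir -p "$home"
              chown "$user" "$home"
              chmod 750 "$home"
            fi
            ;;
        esac
      done
    Run from cron, the same loop also picks up users added to Active Directory later.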

    Read the article

  • Symmetrix gatekeepers on Solaris 10

    - by Milner
    I have some Solaris machines that are connected to an EMC Symmetrix for SAN storage. Apparently the Symm has a gatekeeper device that is used with the Symmetrix CLI. We don't need the CLI, but I have these gatekeeper devices that constantly fill /var/adm/messages and the like with corrupt label errors. Is there anything I can do (short of deleting the devices on machine start) to get rid of them? Or should I just try to get our SAN guy to get the installer for the CLI? These things are getting annoying, and the devfsadmd daemon keeps rediscovering them on boot.

    Read the article

  • Solaris mounting partitions

    - by Benco
    I'm trying to mount a partition in Solaris 10...
      bash-3.00# mount /dev/dsk/c0t0d0s3 /data
      mount: /dev/dsk/c0t0d0s3 is already mounted or /data is busy
    As far as I know c0t0d0s3 isn't already mounted elsewhere, so what's really going on here? From /etc/mnttab:
      /dev/dsk/c1t0d0s0 / ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=780000 1285811136
      /devices /devices devfs dev=4840000 1285811125
      ctfs /system/contract ctfs dev=48c0001 1285811125
      proc /proc proc dev=4880000 1285811125
      mnttab /etc/mnttab mntfs dev=4900001 1285811125
      swap /etc/svc/volatile tmpfs xattr,dev=4940001 1285811125
      objfs /system/object objfs dev=4980001 1285811125
      sharefs /etc/dfs/sharetab sharefs dev=49c0001 1285811125
      /usr/lib/libc/libc_hwcap1.so.1 /lib/libc.so.1 lofs dev=780000 1285811131
      fd /dev/fd fd rw,dev=4b40001 1285811136
      swap /tmp tmpfs xattr,dev=4940002 1285811137
      swap /var/run tmpfs xattr,dev=4940003 1285811137
      -hosts /net autofs nosuid,indirect,ignore,nobrowse,dev=4c00001 1285811148
      auto_home /home autofs indirect,ignore,nobrowse,dev=4c00002 1285811148
      cordb:vold(pid530) /vol nfs ignore,noquota,dev=4bc0001 1285811149
    I suspect the problem is not related to the mount point, but rather the disk slice I'm trying to mount:
      bash-3.00# newfs -v /dev/dsk/c0t0d0s3
      /dev/rdsk/c0t0d0s3: Device busy
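    A slice can be reported as busy without being mounted if something else holds it open, most often swap, the dump device, or Solaris Volume Manager. A short hedged checklist (not from the original question):
      # is the slice in use as swap or as the dump device?
      swap -l
      dumpadm
      # is it part of an SVM metadevice or holding a metadb replica?
      metastat -p
      metadb -i
      # is any process holding the device open?
      fuser /dev/dsk/c0t0d0s3
    Whichever of these claims c0t0d0s3 has to release it before mount or newfs will succeed.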

    Read the article

  • Linux/Solaris: kill many processes with one command

    - by yael
    Is it possible to kill all find processes with one command? I do not want to kill each process individually, as in kill -9 25295, kill -9 11994, etc. Rather, what I want is a simple way or command that kills all find processes (my target is to perform this action on Linux and Solaris machines).
      $ ps -ef | grep find
      root 25295 25290 0 08:59:59 pts/1 0:01 find /etc -type f -exec grep -l 100.106.23.152 {} ; -print
      root 11994 26144 0 09:04:18 pts/1 0:00 find /etc -type f -exec grep -l 100.106.23.153 {} ; -print
      root 25366 25356 0 08:59:59 pts/1 0:01 find /etc -type f -exec grep -l 100.106.23.154 {} ; -print
      root 26703 26658 0 09:00:05 pts/1 0:01 find /etc -type f -exec grep -l 100.106.23.155 {} ; -print
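    Both Linux and Solaris ship pkill, which matches processes by name and signals them in one go; a short sketch (the exact-match flag keeps it from hitting anything other than find itself):
      # send SIGKILL to every process whose command name is exactly "find"
      pkill -9 -x find
      # a ps-based equivalent if pkill is not available; the [f] trick stops
      # the grep from matching its own process
      ps -ef | grep '[f]ind /etc' | awk '{print $2}' | xargs kill -9
    Dropping the -9 first and only escalating if the processes ignore SIGTERM is the gentler habit.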

    Read the article

  • Solaris x86: Non-working keys on keyboard (<, >, #, |)

    - by Thomas
    Hello everyone, I have a new installation of Solaris 10 10/09 on x86 hardware. The attached keyboard has a normal German layout, and the system is configured accordingly via 'kbd -s'. The generic keys (letters, numbers, umlauts) work fine. Unfortunately some keys such as <, >, | or # do not: they produce no output on the text console at all. I tried PS/2 and USB keyboards. I cannot test it under X11 as it is currently not working. Thanks.

    Read the article

  • Kernel warning disk error for command write - Solaris SVM

    - by help_me
    Recently this warning came up in my message logs:
      scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0):
      Oct 27 00:14:44   Error for Command: write(10)   Error Level: Retryable
      Oct 27 00:14:44 scsi: [ID 107833 kern.notice]   Requested Block: 101515828   Error Block: 101515828
      Oct 27 00:14:44 scsi: [ID 107833 kern.notice]   Vendor: SEAGATE   Serial Number: 0441B9B5H
      Oct 27 00:14:44 scsi: [ID 107833 kern.notice]   Sense Key: Hardware Error
      Oct 27 00:14:44 scsi: [ID 107833 kern.notice]   ASC: 0x19 (defect list error), ASCQ: 0x0, FRU: 0x2
    This looks to me like a sign of the disk failing. I have not seen the message recur. This is on a Solaris 9 SPARC system, a V240. The disks are managed by SVM and "metadb" shows the flags as "a". Are there any tests or indications to check whether the disk is actually failing, or was that error message triggered by something else? Thank you!
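    A few standard checks help decide whether this was a one-off retryable error or the start of a failing disk (a hedged sketch, not from the original post):
      # per-device error counters: watch the Hard Errors and Media Error counts for sd0
      iostat -En
      # state of the SVM mirrors -- anything reported as "Needs maintenance"?
      metastat
      # replica status; the flags of "a" reported in the post indicate healthy replicas
      metadb -i
    If the hardware error counters keep climbing or a submirror drops into maintenance, plan the disk replacement; a single retryable event with stable counters is usually not urgent.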

    Read the article

  • Setting max_allowed_packet for mysql on solaris 10

    - by Drakonen
    I want to set the max_allowed_packet setting for MySQL (5.1.31), which is running on Solaris 10. Unfortunately MySQL does not seem to read the my.cfg file. I tried placing it in /etc/my.cfg, /opt/mysql/mysql/data/my.cfg and /opt/mysql/mysql/support-files/my.cfg. With each of these locations, max_allowed_packet does not get set when I check with:
      select @@max_allowed_packet;
    When I start mysqld like this, the setting does get applied:
      # su mysql
      $ mysqld --defaults-file=/etc/my.cfg
    These are the contents of my.cfg:
      [mysqld]
      max_allowed_packet = 50M
    How can I make MySQL read the config when I start it with the SMF tools?
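    When no --defaults-file is given, mysqld looks for /etc/my.cnf (note the .cnf extension), so renaming the file is the first thing to try. If the SMF start method still needs to be pointed at the file explicitly, the service name below is an assumption that depends on the manifest shipped with this MySQL package, so list what actually exists before trusting it:
      # find the exact service FMRI and inspect its start method properties
      svcs -a | grep -i mysql
      svcprop -p start svc:/application/database/mysql:version_51
      # after changing the config (or the start method), pick up the change
      svcadm refresh mysql
      svcadm restart mysql
    The start method's exec string (or an options property, if the manifest defines one) is where a --defaults-file argument would be added.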

    Read the article

  • BIND DNS server in Solaris 10 and Windows XP clients

    - by stevecomptech
    Hi, I added the following to the zone db file (I am running Solaris 10):
      _ldap._tcp.mydomain.com.                SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.mydomain.com.            SRV 0 0 88  dc.mydomain.com.
      _ldap._tcp.dc._msdcs.mydomain.com.      SRV 0 0 389 dc.mydomain.com.
      _kerberos._tcp.dc._msdcs.mydomain.com.  SRV 0 0 88  host.mydomain.com.
    Now I get this error when I try to join Windows XP to the domain:
      The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
      The following domain controllers were identified by the query:
      host.mydomain.com
      Common causes of this error include:
      - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
      - Domain controllers registered in DNS are not connected to the network or are not running.
    What do I need to change so that my Windows XP machine can join the domain?
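    The Windows error itself points at missing address records: the SRV records name the controller, but the zone also has to carry an A record for that name (and the last SRV record targets host.mydomain.com where the others use dc.mydomain.com, which looks worth double-checking). A hedged sketch of the fix, with a placeholder address:
      # zone-file line to add (use the real IP of the domain controller)
      #   dc.mydomain.com.    IN A    192.0.2.10
      # bump the SOA serial, reload the zone, then verify from a client:
      rndc reload
      dig dc.mydomain.com A
      dig _ldap._tcp.dc._msdcs.mydomain.com SRV
    Once the A lookup returns the controller's address, the XP join should get past this particular error.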

    Read the article

  • Join ZFS/Solaris to Windows AD 2003/2008 domain

    - by user95587
    I have a client trying to join his newly updated ZFS/Solaris box to my Windows AD 2003/2008 domain. Here is the command he is using and the error he is getting.
    Console:
      root@xxx:/etc/inet# smbadm join -u USER DOMAIN
      After joining DOMAIN the smb service will be restarted automatically. Would you like to continue? [no]: yes
      Enter domain password:
      Joining DOMAIN ... this may take a minute ...
      failed to join DOMAIN: UNSUCCESSFUL
      Please refer to the system log for more information.
    From /var/adm/messages:
      Sep 22 10:12:00 xxx smbd[593]: [ID 702911 daemon.error] smbrdr_exchange[116]: failed (-3)
      Sep 22 10:12:01 xxx smbd[593]: [ID 232655 daemon.notice] ldap_modify: Insufficient access
      Sep 22 10:12:01 xxx smbd[593]: [ID 898201 daemon.notice] Unable to set the TRUSTED_FOR_DELEGATION userAccountControl flag on the machine account in Active Directory. Please refer to the Troubleshooting guide for more information.
      Sep 22 10:12:01 xxx smbd[593]: [ID 526780 daemon.notice] Failed to establish NETLOGON credential chain
      Sep 22 10:12:01 xxx smbd[593]: [ID 871254 daemon.error] smbd: failed joining DOMAIN (UNSUCCESSFUL)

    Read the article

  • Solaris 10: Identify a PID and the CPU it's running on

    - by Marcus
    I have multiple instances of a database running on a Solaris system. I'd like to prove that each database process is being handled by a different CPU. Essentially, I want to be able to do something like ps -ef | grep <process_name> to get the PIDs and then run another command (if required) to identify the CPU... Is prstat able to do this? I'm making the assumption that as each database instance is started, each one uses a different CPU. I'm not sure if I'm understanding this correctly... The reason I want to do this is that Sun hardware has slow CPUs, but lots of them. Therefore, to get the best performance out of it, I need to try to spread the load among the CPUs... Thanks
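    The Solaris dispatcher already spreads runnable processes across processors, but the spread can be observed and, if necessary, forced. A hedged sketch with placeholder PIDs:
      # list the processors the machine has
      psrinfo
      # watch per-thread CPU consumption for the database processes
      prstat -mL -p <pid1>,<pid2>
      # optionally pin each instance to its own processor and confirm the bindings
      pbind -b 0 <pid1>
      pbind -b 1 <pid2>
      pbind -q
    Note that a busy multi-threaded database instance will normally run on many CPUs at once, so binding whole instances to single processors can hurt rather than help; psrset processor sets are the more common way to partition CPUs between instances.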

    Read the article

  • Linux/Solaris replace hostnames in files according to hostname rule

    - by yael
    With the following Perl command (part of a ksh script) I can replace old hostnames with new hostnames on Linux or Solaris:
      previos_machine_name=linux1a
      new_machine_name=Red_Hat_linux1a
      export previos_machine_name
      export new_machine_name
      perl -i -pe 'next if /^ *#/; s/(\b|[[:^alnum:]])$ENV{previos_machine_name}(\b|[[:^alnum:]])/$1$ENV{new_machine_name}$2/g' file
    To explain: the Perl command does not replace hostnames in the following case:
      RULE: [NUMBERS]||[letter]HOSTNAME[NUMBERS]||[letter]
    My question: after I have used the Perl command to replace all old hostnames with new hostnames based on the "RULE", how do I verify that the old hostnames no longer exist in the file? For example:
      previos_machine_name=linux1a
      new_machine_name=Red_Hat_linux1a
      more file
      AAARed_Hat_linux1a        verification should ignore this line
      @Red_Hat_linux1a$         verification should match this line
      P=Red_Hat_linux1a         verification should match this line
      XXXRed_Hat_linux1aZZZ     verification should ignore this line
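    One way to verify is to reuse the matching half of the same expression in report-only mode, so any line the replacement would still rewrite gets flagged. A minimal sketch, assuming the same two variables are still exported:
      # print the line number and content of every non-comment line where the old
      # hostname still appears in a position the replacement rule would match
      perl -ne 'next if /^ *#/;
                print "$.: $_" if /(\b|[[:^alnum:]])$ENV{previos_machine_name}(\b|[[:^alnum:]])/;' file
    An empty result means the old hostname is gone as far as the rule is concerned; a plain grep -n of the old name would additionally show the embedded occurrences (letters or digits on both sides) that the rule intentionally leaves alone.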

    Read the article

  • Associating File Types with AutoVue Desktop Deployment

    - by [email protected]
    Windows users take for granted that when they double-click a document or design, it will open in its application automatically. One of the questions I'm commonly asked is "How can I get the same behavior with AutoVue Desktop Deployment?". It's pretty easy, but there are a few tricks to doing it.
    Step 1: Download the new jvue_direct.bat and icon
    The first thing you'll need to do is download a slightly modified version of jvue_direct.bat. You can find it here (Document 1075784.1) on Oracle's Support Portal. You also want to download the AV.ico file. This is the icon that will be used for all file types associated with AutoVue. Place both of these files in your <AutoVueInstallDirectory>\bin directory.
    Step 2: Associate file types with AutoVue
    There are two ways to do this: through the Windows user interface, or with a batch file.
    Associating file types through Windows
    The way most people associate file types with an application is through the Windows user interface. You've probably tried to open a file type that Windows doesn't recognize and seen the dialog Windows pops up asking which program to use. Although you can use that dialog to associate a file type with AutoVue, I don't recommend it. I much prefer using a batch file to associate file types with AutoVue.
    Associating file types using a batch file
    There are a few good reasons to associate file types using a batch file instead of the pop-up dialog method:
    - If you have several file types to associate with AutoVue, it's much easier to use a batch file to do them all at once.
    - Doing it through the Windows user interface requires having files of each type available. Using a batch file doesn't require having the files you're associating.
    - Associating file types through the dialog may work well for one person, but what if you're an administrator doing an enterprise-wide deployment of AutoVue Desktop Deployment for several hundred users? You don't want to do this manually for each user. You can have one simple batch file that's run on each user's PC to set up all the file types.
    - You can easily associate an icon with the file types you're opening with AutoVue.
    To use the batch file method, follow these steps:
    1. Create a file called filetype.bat using a text editor and copy and paste the following into it:
         @assoc .dwg=AVFile
         @assoc .jpg=AVFile
         @assoc .doc=AVFile
         @ftype AVFile="%~dp0jvue_direct.bat" "%%1"
         @reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico"
    2. Change the lines starting with @assoc. Each of these lines associates a file extension with AutoVue. You can have as many @assoc lines as you want.
    3. Save this file in your <AutoVueInstallDirectory>\bin directory.
    4. Double-click this file, or run it from a command prompt.
    5. Restart Windows to get the icons to show up.
    How does this work?
    The first three lines create a file type called AVFile and associate the extensions .dwg, .jpg, and .doc with it. You will want to change these lines when creating your own batch file. For example, to associate MicroStation designs, which have the extension .dgn, you should delete the @assoc lines above and add the line:
         @assoc .dgn=AVFile
    The line beginning with @ftype tells Windows that all AVFile type files should be opened using AutoVue Desktop Deployment. The final line associates the AutoVue icon with these file types. You may need to restart Windows to see the new icons.
    Warning: One size doesn't fit all
    When deciding which file types should be associated with AutoVue, remember that there are different types of users using it. Your engineers may be pretty surprised to find that after installing AutoVue, double-clicking their .dwg file opens AutoVue instead of AutoCAD. If you have more than one type of AutoVue user, make sure you've considered which file types each user group will and will not want associated with AutoVue. If necessary, create a separate file-association batch file for each user type.
    So that's it. In two simple steps you can double-click your favorite designs and have them open automatically in AutoVue Desktop Deployment. I'd love to hear how you are using AutoVue Desktop Deployment. What other deployment tips would you be interested in learning about?

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr
    If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint.
    Developer Webinar: How to Test and Deploy Applications Faster - April 10
    Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT.
    Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express
    We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications. How ZFS and Zones create the perfect developer sandbox. What's the best way for a developer to use DTrace. How Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show."
    White Paper: What's New For Application Developers
    Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more.
    - Rick
    Website | Newsletter | Facebook | Twitter

    Read the article

  • How the OS Makes the Database Scream

    - by rickramsey
    Few things are as satisfying as a screaming burnout. When Oracle Database engineers team up with Solaris engineers, they do a lot of them. Here are a few of the reasons why.
    Article: How the OS Makes the Database Fast - Oracle Solaris
    For applications that rely on Oracle Database, a high-performance operating system translates into faster transactions, better scalability to support more users, and the ability to support larger capacity databases. When deployed in virtualized environments, multiple Oracle Database servers can be consolidated on the same physical server. Ginny Henningsen describes what Oracle Solaris does to make the Oracle database run faster.
    Interview: Why Is The OS Still Relevant?
    In a world of increasing virtualization and growing interest in cloud services, why is the OS still relevant? Michael Palmeter, senior director of Oracle Solaris, explains why it's not only relevant, but essential for data centers that care about performance.
    Interview: An Engineer's Perspective: Why the OS Is Still Relevant
    Sysadmins are handling hundreds or perhaps thousands of VMs. What is it about Solaris that makes it such a good platform for managing those VMs? Liane Praza, senior engineer in the Solaris core engineering group, provides an engineer's perspective.
    Interview in the Lab: How to Get the Performance Promised by Oracle's T5 SPARC Chips
    If you want your applications to run on the new SPARC T5/M5 chips, how do you make sure they use all that new performance? Don Kretsch, Senior Director of Engineering, explains.
    Interview: Why Oracle Database Engineering Uses Oracle Solaris Studio
    The design priorities for Oracle Solaris Studio are performance, observability, and productivity. Why this is good for ISVs and developers, and why it's so important to the Oracle database engineering team. Taped in Oct 2012.
    - Rick
    Follow me on: Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • Oracle Sun Solaris 11.1 Completes EAL4+ Common Criteria Evaluation

    - by Joshua Brickman-Oracle
    Oracle is pleased to announce that the Oracle Solaris 11.1 operating system has achieved a Common Criteria certification at Evaluation Assurance Level (EAL) 4, augmented by Flaw Remediation, under the Canadian Communications Security Establishment's (CSEC) Canadian Common Criteria Scheme (CCCS). EAL4 is the highest level achievable for commercial software, and is the highest level mutually recognized by 26 countries under the current Common Criteria Recognition Arrangement (CCRA). Oracle Solaris 11.1 is conformant to the BSI Operating System Protection Profile v2.0 with the following four extended packages: (1) Advanced Management, (2) Extended Identification and Authentication, (3) Labeled Security, and (4) Virtualization.
    Common Criteria is an international framework (ISO/IEC 15408) which defines a common approach for evaluating the security features and capabilities of Information Technology security products. A certified product is one that a recognized Certification Body asserts as having been evaluated by a qualified, accredited, and independent evaluation laboratory competent in the field of IT security evaluation to the requirements of the Common Criteria and the Common Methodology for Information Technology Security Evaluation.
    Oracle Solaris is the industry's most widely deployed UNIX™ operating system. It delivers mission-critical cloud infrastructure with built-in virtualization, simplified software lifecycle management, cloud-scale data management, and advanced protection for public, private, and hybrid cloud environments, and it provides a suite of technologies and applications that create an operating system with optimal performance. Oracle Solaris 11.1 includes key technologies such as Trusted Extensions, the Oracle Solaris Cryptographic Framework, Zones, the ZFS file system, the Image Packaging System (IPS), and multiple boot environments.
    The Oracle Solaris 11.1 Certification Report and Security Target can be viewed on the Communications Security Establishment Canada (CSEC) site and on the Common Criteria Portal. For more information on Oracle's participation in the Common Criteria program, please visit the main Common Criteria information page here: (http://www.oracle.com/technetwork/topics/security/oracle-common-criteria-095703.html) For a complete list of Oracle products with Common Criteria certifications and FIPS 140-2 validations, please see the Security Evaluations website here: (http://www.oracle.com/technetwork/topics/security/security-evaluations-099357.html).

    Read the article

  • Register Today for Upcoming Oracle Solaris Events!

    - by Terri Wischmann
    Don't miss out on the exciting upcoming events around Oracle Solaris 11! Check out the events below and register today for one or all of them.
    Join us for the next Oracle Solaris Developer Webinar, "Simplify Your Development Environment with Zones, ZFS & More", on April 10 at 9 a.m. PT, presented by Eric Reid (Principal Software Engineer) and Stefan Schneider (Chief Technologist, ISV Engineering). Register Now!
    Check out the free OTN Sys Admin Day on April 10th on the Oracle Santa Clara campus: a full day of hands-on labs, training, demos, and presentations. Come learn about Oracle Solaris 11, Oracle Solaris Studio, Oracle Technology Network and Oracle Enterprise Linux! Register Now!
    Attend the Oracle Solaris 11 technical track at the NLUUG Conference in The Netherlands on April 11th, 2012. This year, the conference will focus on operating system innovations. Come learn about the innovations Oracle Solaris 11 brings, with technical deep-dive talks presented by Oracle experts. For more information, including the agenda, click here.

    Read the article

  • Oracle E-Business Suite Release 12.1 Certified on Solaris 11

    - by John Abraham
    Oracle Solaris 11 was announced last week, and I'm pleased to also announce that Oracle E-Business Suite Release 12.1 is now certified on Oracle Solaris on SPARC (64-bit). This new operating system release represents a culmination of years of hard work by our Solaris engineering group. It has a number of new and advanced features including simplified deployment and lifecycle management tools, built-in certified virtualization technologies, support on the latest generation SPARC chips, and more.
    New installations of the E-Business Suite R12 on this platform will require version 12.1.1 or higher and the latest Rapid Install startCD version 12.1.1.13. For existing 12.1 installations, we have also certified an "in place" OS upgrade or the use of cloning to a target Solaris 11 system. There are also specific requirements to upgrade technology components such as the Oracle Database and Fusion Middleware. These requirements are noted in the links below.
    References:
      Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Oracle Solaris on SPARC (64-bit) (My Oracle Support Document 761568.1)
      Oracle Database Installation Guide 11g Release 2 (11.2) for Oracle Solaris
      Interoperability Notes: Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (My Oracle Support Document 1058763.1)
      Cloning Oracle Applications Release 12 with Rapid Clone (My Oracle Support Document 406982.1)
    Related Articles:
      New Rapid Install StartCD (12.1.1.13) for Oracle E-Business Suite Release 12.1 Now Available
      Oracle E-Business Suite Release 12.1.3 Now Available

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: the NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.
    Summary of Lab Exercises
    This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
      1. Install Hadoop.
      2. Edit the Hadoop configuration files.
      3. Configure the Network Time Protocol.
      4. Create the virtual network interfaces (VNICs).
      5. Create the NameNode and the secondary NameNode zones.
      6. Set up the DataNode zones.
      7. Configure the NameNode.
      8. Set up SSH.
      9. Format HDFS from the NameNode.
      10. Start the Hadoop cluster.
      11. Run a MapReduce job.
      12. Secure data at rest using ZFS encryption.
      13. Use Oracle Solaris DTrace for performance monitoring.
    Read it now
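    Exercises 4 through 6 lean on Solaris 11 network virtualization and zones. The compressed sketch below shows roughly what those building blocks look like; the link name net0, the zone and VNIC names, and the zonepath are placeholders rather than values taken from the lab itself:
      # one VNIC per cluster node, all on the same physical link
      dladm create-vnic -l net0 name_node1
      dladm create-vnic -l net0 data_node1
      # define a zone for the NameNode and hand it its VNIC in place of the default anet
      zonecfg -z name-node "create; set zonepath=/zones/name-node; \
          remove anet; add net; set physical=name_node1; end; commit"
      zoneadm -z name-node install
      zoneadm -z name-node boot
    The DataNode zones follow the same pattern, and cloning the first installed zone is the usual shortcut when several near-identical zones are needed.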

    Read the article

  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC V445 running Solaris 10 9/10, I had to rebuild rpool and reattach the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones were configured with zonecfg -z ZONENAME create etc., and are now online: after being in the installed state they were attached with zoneadm -z ZONENAME attach -U and then simply booted. But I cannot zlogin to any of the zones except one. It shows that I am logged in, then a blank line, then I am immediately logged out again. When I try to log in using zlogin -C ZONENAME I cannot; the error message is:
      May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory.
      May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1
    But /usr/lib/pam_mkhomedir.so.1 does not exist, and it does not exist on my other servers either, yet those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What to do next? Thank you.
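    The login chain inside the failing zones references a PAM module that is not installed, so login aborts; the one zone that works presumably has no such entry. A hedged way to confirm where the reference lives (ZONENAME and ZONEPATH are placeholders):
      # run the check inside the zone without going through login
      zlogin ZONENAME grep -n mkhomedir /etc/pam.conf
      # or inspect the zone's configuration directly from the global zone
      grep -n mkhomedir /ZONEPATH/root/etc/pam.conf
    Removing or commenting out that pam_mkhomedir line in the affected zones' /etc/pam.conf (or installing the module it expects, if it came from a third-party package) is the usual next step.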

    Read the article

  • Solaris: detect hotswap SATA disk insert

    - by growse
    What's the method used on Solaris to get the system to rescan for new disks that have been hot-plugged on a SATA controller? I've got an HP X1600 NAS which had 9 drives configured in a ZFS pool. I've added 3 disks, but the format command still only shows the original 9. When I plugged them in, I saw this:
      cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
      cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=12
      cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
      cpqary3: [ID 100000 kern.notice]
      cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
      cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=11
      cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
      cpqary3: [ID 100000 kern.notice]
      cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array P212 Controller
      cpqary3: [ID 823470 kern.notice] Hot-plug drive inserted, Port=1I Box=1 Bay=10
      cpqary3: [ID 479030 kern.notice] Configured Drive ? ....... NO
    But I can't figure out how to get the format command to see them, so I know they've been detected by the system.
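    The "Configured Drive ? ....... NO" lines come from the Smart Array controller itself, which suggests the new disks first have to be presented as logical drives by the controller (via HP's array configuration utility) before Solaris can see them at all; that part is outside the OS. Once the controller exposes them, a hedged rescan sequence looks like this:
      # rebuild the /dev links and clean out stale ones
      devfsadm -Cv
      # check what the system now sees on the controller, then list the disks
      cfgadm -al
      echo | format
      # if the new disks are destined for the existing pool, something like:
      #   zpool add <poolname> <disk1> <disk2> <disk3>    (placeholder names)
    If format still shows only the original nine after the controller reports the drives as configured, a reconfiguration boot (touch /reconfigure and reboot) is the blunt fallback.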

    Read the article
