Search Results

Search found 3190 results on 128 pages for 'cullen sun'.

Page 37 of 128

  • JBoss throws error looking up local address when starting

    - by Jeeba
    Hello, I'm installing a fresh JBoss 5.1 on a CentOS 5.5 machine. I don't have Apache installed. When I try to start JBoss using the command ./run.sh I get the following error: 15:13:57,414 INFO [JMXKernel] Legacy JMX core initialized 15:14:03,856 ERROR [ServerInfo] Error looking up local address java.net.UnknownHostException: dhcppc1: dhcppc1 at java.net.InetAddress.getLocalHost(InetAddress.java:1354) at org.jboss.system.server.ServerInfo.getHostAddress(ServerInfo.java:364) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) .... After that I can run JBoss only from 127.0.0.1:8080, but using localhost:8080 doesn't work. I think it's a CentOS configuration problem, but I'm a total newbie at managing ports and maybe firewalls, so what do you think could be the problem?
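
    A common cause of this particular UnknownHostException is that the machine's hostname (dhcppc1 here) does not resolve to any address. A minimal sketch of a likely fix, assuming that is the case (the hosts entry and -b flag are illustrations, not a confirmed answer from the thread):

        # Map the hostname to the loopback address so the JVM can resolve it
        echo "127.0.0.1   dhcppc1" >> /etc/hosts   # run as root
        # JBoss 5 binds to localhost by default; bind to all interfaces if
        # the server must be reachable from other machines
        ./run.sh -b 0.0.0.0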

    Read the article

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true - I am still able to log in to the server via ssh, however I note the following: There is no longer any PID file at /var/run/sshd.pid - after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file. sudo service sshd status reports openssh-daemon is stopped - again, restarting sshd does not change this, but a reboot does. sudo service sshd stop reports failed, presumably because of the missing PID file. Any idea what is going on? Update sudo netstat -lptun gives the following output relating to port 22 tcp 0 0 :::22 :::* LISTEN 20735/sshd Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. Would still like to understand this better. RPM verify suggested by a couple of answerers shows this: sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5' S.5....T c /etc/pam.d/sshd S.5....T c /etc/ssh/sshd_config /etc/pam.d/sshd has the following contents: #%PAM-1.0 auth include system-auth account required pam_nologin.so account include system-auth password include system-auth session optional pam_keyinit.so force revoke session include system-auth #session required pam_loginuid.so Should that last line be commented out? Update Here's the output of @YannickGirouard's script: $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 21330 Command line for PID 21330: /usr/sbin/sshd Listing process(es) relating to PID 21330: UID PID PPID C STIME TTY TIME CMD root 21330 1 0 14:04 ? 
00:00:00 /usr/sbin/sshd Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I am seeing the issue again after next reboot. Update - 14 March Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 2208 Command line for PID 2208: /usr/sbin/sshd Listing process(es) relating to PID 2208: UID PID PPID C STIME TTY TIME CMD root 2208 1 0 Mar13 ? 00:00:00 /usr/sbin/sshd root 1885 2208 0 21:50 ? 
00:00:00 sshd: dunx [priv] Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ Again, when I look for /var/run/sshd.pid I don't find it. $ cat /var/run/sshd.pid cat: /var/run/sshd.pid: No such file or directory $ sudo netstat -anp | grep sshd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2208/sshd $ sudo kill 2208 $ sudo service sshd start Starting sshd: [ OK ] $ cat /var/run/sshd.pid 3794 $ sudo service sshd status openssh-daemon (pid 3794) is running... Is it possible that sshd is restarting and not creating a pidfile for some reason?
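
    For anyone hitting the same symptom, a minimal diagnostic sketch along the lines of what eventually worked for the poster (the commands assume a CentOS 5 init-script setup):

        # Find the PID actually listening on port 22
        sudo netstat -lptn | grep ':22 '
        # Compare it with the PID file the init script manages
        sudo cat /var/run/sshd.pid
        # If the file is missing or stale, kill the orphaned daemon and let
        # the init script start a fresh one, which rewrites the PID file
        sudo kill <pid-from-netstat>
        sudo service sshd start
        sudo service sshd status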

    Read the article

  • JVM memory initialization error after Windows update

    - by gianni
    We have three Windows Server 2003 machines with 2 GB RAM. Server1 tomcat 5.5.25 jvm version SUN 1.6.0_11-b03 Server2 tomcat 5.5.25 jvm version SUN 1.6.0_14-b08 Server3 tomcat 6.0.18 jvm version SUN 1.6.0_14-b08 For the three servers the JVM parameters are: -XX:MaxPermSize=256m -Dcatalina.base=C:\Programmi\Apache Group\apache-tomcat-5.5.25 -Dcatalina.home=C:\Programmi\Apache Group\apache-tomcat-5.5.25 -Djava.endorsed.dirs=C:\Programmi\Apache Group\apache-tomcat-5.5.25\common\endorsed -Djava.io.tmpdir=C:\Programmi\Apache Group\apache-tomcat-5.5.25\temp vfprintf -Xms512m -Xmx1024m For some months everything worked fine. Last Friday we installed some Windows updates. After the reboot Tomcat doesn't start, with the error: Error occurred during initialization of VM Could not reserve enough space for object heap We reduced the parameter -Xmx1024m to -Xmx768m and now Tomcat starts, but we need a greater max heap size. What happened to our servers? Thanks in advance.
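
    A common explanation (an assumption, not a confirmed answer from the thread): on 32-bit Windows the maximum heap is limited by the largest contiguous block of free address space in the process, and a Windows update can load new DLLs that fragment it. A quick probe of the current ceiling:

        # Each run only reserves the heap and prints the version, so it is a
        # cheap way to find how large a heap the JVM can still reserve
        java -Xmx1024m -version   # fails after the update, per the question
        java -Xmx896m -version
        java -Xmx768m -version    # succeeds, per the question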

    Read the article

  • Can't update Scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my Gentoo system I tried updating the package but ended up with this error. I can't figure out where the problem may be: Calculating dependencies ...... done! >>> Verifying ebuild manifests >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Emerging (1 of 1) dev-lang/scala-2.9.2 >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Failed to emerge dev-lang/scala-2.9.2, Log file: >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log' >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 running, 1 failed Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 failed Load avg: 0.23, 0.16, 0.20 * Package: dev-lang/scala-2.9.2 * Repository: gentoo * Maintainer: [email protected] * USE: amd64 elibc_glibc kernel_linux multilib userland_GNU * FEATURES: sandbox !!! ERROR: Couldn't find suitable VM. Possible invalid dependency string. Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun * Unable to determine VM for building from dependencies: NV_DEPEND: virtual/jdk:1.6 java-virtuals/jdk-with-com-sun !binary? ( dev-java/ant-contrib:0 ) app-arch/xz-utils >=dev-java/java-config-2.1.9-r1 source? ( app-arch/zip ) >=dev-java/ant-core-1.7.0 dev-java/ant-nodeps >=dev-java/javatoolkit-0.3.0-r2 >=dev-lang/python-2.4 * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. !!! When you file a bug report, please include the following information: GENTOO_VM= CLASSPATH="" JAVA_HOME="" JAVACFLAGS="" COMPILER="" and of course, the output of emerge --info * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' * Messages for package dev-lang/scala-2.9.2: * Unable to determine VM for building from dependencies: * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. 
* Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' The following eix output may help: % eix java-virtuals/jdk-with-com-sun [I] java-virtuals/jdk-with-com-sun Available versions: 20111111 {{ELIBC="FreeBSD"}} Installed versions: 20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD") Homepage: http://www.gentoo.org Description: Virtual ebuilds that require internal com.sun classes from a JDK Both virtual jdks 1.6 and 1.7 are installed: % eix virtual/jdk [I] virtual/jdk Available versions: (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0 Installed versions: 1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12) Description: Virtual for JDK [1] "java-overlay" /var/lib/layman/java-overlay
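
    The error text suggests the build needs a VM satisfying jdk-with-com-sun with a 1.7 target while the ebuild is constrained to virtual/jdk-1.6. A sketch of one way to check and switch the build VM (an assumption based on the error, not a verified fix for this ebuild):

        # List the VMs Gentoo knows about and which one is active
        eselect java-vm list
        # Switch the system VM to an installed 1.7 JDK (number from the list)
        eselect java-vm set system <n>
        # Retry the single package
        emerge --oneshot dev-lang/scala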

    Read the article

  • Error installing Java on Ubuntu Server

    - by Camran
    I get this error just as the installation is almost finished: /proc is not mounted; some java apps may fail Could not create the Java virtual machine. Error occurred during initialization of VM Could not reserve enough space for object heap Ignoring error generating classes.jsa Why is this? I just entered: sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts Is there something I must do first? What is /proc? If you need more input let me know. Thanks
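
    /proc is the kernel's process-information pseudo-filesystem; the JVM reads it to size its heap, which is why the installer warns when it is absent. This usually happens inside a chroot or a minimal container, and mounting it generally clears the error (a sketch, assuming a chroot-like environment):

        # Mount the proc pseudo-filesystem, then retry the install
        sudo mount -t proc proc /proc
        sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts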

    Read the article

  • Converted VMware image does not boot in VirtualBox

    - by vBox Question
    I have a .vddx virtual image which boots in VMware, but I'm having trouble getting it to work with Sun VirtualBox. In Sun VirtualBox, I created a new virtual machine and pointed it at the .vddx file from VMware. When I try to boot the virtual machine, Sun VirtualBox says that the volume is not bootable. VMware is able to boot from this virtual machine. Does anyone have any suggestions about what might be causing the problem? Is there a conversion utility that I need to run? Any debugging options that I could turn on?
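
    Rather than pointing VirtualBox directly at the VMware disk, one common approach is to clone it into VirtualBox's native VDI format first (a sketch; VBoxManage ships with VirtualBox, and the file names are placeholders):

        # Convert the VMware disk image to a VDI and attach that to the VM
        VBoxManage clonehd /path/to/source.vmdk /path/to/converted.vdi --format VDI
        # If the guest still reports a non-bootable volume, check that the
        # VM's storage controller type (IDE vs. SCSI) matches what the guest
        # was installed on under VMware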

    Read the article

  • Tomcat 6, JPA and Data sources

    - by asrijaal
    Hi there, I'm trying to get a data source working in my JSF app. I defined the data source in my web app's context.xml webapp/META-INF/context.xml <?xml version="1.0" encoding="UTF-8"?> <Context antiJARLocking="true" path="/Sale"> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="20" maxIdle="10" maxWait="-1" name="Sale" password="admin" type="javax.sql.DataSource" url="jdbc:mysql://localhost/sale" username="admin"/> </Context> web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/faces/*</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <welcome-file-list> <welcome-file>faces/welcomeJSF.jsp</welcome-file> </welcome-file-list> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>ruby</param-value> </context-param> </web-app> and my persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="SalePU" transaction-type="RESOURCE_LOCAL"> <provider>oracle.toplink.essentials.PersistenceProvider</provider> <non-jta-data-source>Sale</non-jta-data-source> <class>org.comp.sale.AnfrageAnhang</class> <class>org.comp.sale.Beschaffung</class> <class>org.comp.sale.Konto</class> <class>org.comp.sale.Anfrage</class> <exclude-unlisted-classes>false</exclude-unlisted-classes> </persistence-unit> </persistence> But no data source seems to be created by Tomcat; I only get this exception: Exception [TOPLINK-7060] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: Cannot acquire data source [Sale]. Internal Exception: javax.naming.NameNotFoundException: Name Sale is not bound in this Context The needed JARs for the MySQL driver are included in the WEB-INF/lib dir. What am I doing wrong?

    Read the article

  • Cannot connect to MySQL with JDBC - Connection Timeout - Ubuntu 9.04

    - by gav
    I am running Ubuntu and am ultimately trying to connect Tomcat to my MySQL database using JDBC. It has worked previously but after a reboot the instance now fails to connect. Both Tomcat 6 and MySQL 5.0.75 are on the same machine Connection string: jdbc:mysql:///localhost:3306 I can connect to MySQL on the command line using the mysql command The my.cnf file is pretty standard (Available on request) has bind address: 127.0.0.1 I cannot Telnet to the MySQL port despite netstat saying MySQL is listening I have one IpTables rule to forward 80 - 8080 and no firewall I'm aware of. I'm pretty new to this and I'm not sure what else to test. I don't know whether I should be looking in etc/interfaces and if I did what to look for. It's weird because it used to work but after a reboot it's down so I must have changed something.... :). I realise a timeout indicates the server is not responding and I assume it's because the request isn't actually getting through. I installed MySQL via apt-get and Tomcat manually. MySqld processes root@88:/var/log/mysql# ps -ef | grep mysqld root 21753 1 0 May27 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe mysql 21792 21753 0 May27 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock root 21793 21753 0 May27 ? 00:00:00 logger -p daemon.err -t mysqld_safe -i -t mysqld root 21888 13676 0 11:23 pts/1 00:00:00 grep mysqld Netstat root@88:/var/log/mysql# netstat -lnp | grep mysql tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 21792/mysqld unix 2 [ ACC ] STREAM LISTENING 1926205077 21792/mysqld /var/run/mysqld/mysqld.sock Toy Connection Class root@88:~# cat TestConnect/TestConnection.java import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; public class TestConnection { public static void main(String args[]) throws Exception { Connection con = null; try { Class.forName("com.mysql.jdbc.Driver").newInstance(); System.out.println("Got driver"); con = DriverManager.getConnection( "jdbc:mysql:///localhost:3306", "uname", "pass"); System.out.println("Got connection"); if(!con.isClosed()) System.out.println("Successfully connected to " + "MySQL server using TCP/IP..."); } finally { if(con != null) con.close(); } } } Toy Connection Class Output Note: This is the same error I get from Tomcat. root@88:~/TestConnect# java -cp mysql-connector-java-5.1.12-bin.jar:. TestConnection Got driver Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 1 milliseconds ago. The driver has not received any packets from the server. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:409) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122) at TestConnection.main(TestConnection.java:14) Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:409) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122) at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) ... 12 more Caused by: java.net.ConnectException: Connection timed out at java.net.PlainSocketImpl.socketConnect(Native Method) ... 13 more Telnet Output root@88:~/TestConnect# telnet localhost 3306 Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection timed out

    Read the article

  • Why does File::Slurp return a scalar when it should return a list?

    - by BrianH
    I am new to the File::Slurp module, and on my first test with it, it was not giving the results I was expecting. It took me a while to figure it out, so now I am interested in why I was seeing this certain behavior. My call to File::Slurp looked like this: my @array = read_file( $file ) || die "Cannot read $file\n"; I included the "die" part because I am used to doing that when opening files. My @array would always end up with the entire contents of the file in the first element of the array. Finally I took out the "|| die" section, and it started working as I expected. Here is an example to illustrate: perl -de0 Loading DB routines from perl5db.pl version 1.22 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(-e:1): 0 DB<1> use File::Slurp DB<2> $file = '/usr/java6_64/copyright' DB<3> x @array1 = read_file( $file ) 0 'Licensed material - Property of IBM.' 1 'IBM(R) SDK, Java(TM) Technology Edition, Version 6' 2 'IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6' 3 '' 4 'Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved.' 5 'Copyright IBM Corporation, 1998, 2009. All rights reserved.' 6 '' 7 'The Apache Software License, Version 1.1 and Version 2.0' 8 'Copyright 1999-2007 The Apache Software Foundation. All rights reserved.' 9 '' 10 'Other copyright acknowledgements can be found in the Notices file.' 11 '' 12 'The Java technology is owned and exclusively licensed by Sun Microsystems Inc.' 13 'Java and all Java-based trademarks and logos are trademarks or registered' 14 'trademarks of Sun Microsystems Inc. in the United States and other countries.' 15 '' 16 'US Govt Users Restricted Rights - Use duplication or disclosure' 17 'restricted by GSA ADP Schedule Contract with IBM Corp.' DB<4> x @array2 = read_file( $file ) || die "Cannot read $file\n"; 0 'Licensed material - Property of IBM. IBM(R) SDK, Java(TM) Technology Edition, Version 6 IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6 Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved. Copyright IBM Corporation, 1998, 2009. All rights reserved. The Apache Software License, Version 1.1 and Version 2.0 Copyright 1999-2007 The Apache Software Foundation. All rights reserved. Other copyright acknowledgements can be found in the Notices file. The Java technology is owned and exclusively licensed by Sun Microsystems Inc. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems Inc. in the United States and other countries. US Govt Users Restricted Rights - Use duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. ' Why does the || die make a difference? I have a feeling this might be more of a Perl precedence question instead of a File::Slurp question. I looked in the File::Slurp module and it looks like it is set to croak if there is a problem, so I guess the proper way to do it is to allow File::Slurp to croak for you. Now I'm just curious why I was seeing these differences.

    Read the article

  • Do JUnit4 test classes require a public no-arg constructor?

    - by Thomas Baun
    I have a test class, written in JUnit4 syntax, that can be run in Eclipse with the "run as junit test" option without failing. When I run the same test via an ant target I get this error: java.lang.Exception: Test class should have public zero-argument constructor at org.junit.internal.runners.MethodValidator.validateNoArgConstructor(MethodValidator.java:54) at org.junit.internal.runners.MethodValidator.validateAllMethods(MethodValidator.java:39) at org.junit.internal.runners.TestClassRunner.validate(TestClassRunner.java:33) at org.junit.internal.runners.TestClassRunner.<init>(TestClassRunner.java:27) at org.junit.internal.runners.TestClassRunner.<init>(TestClassRunner.java:20) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26) at junit.framework.JUnit4TestAdapter.<init>(JUnit4TestAdapter.java:24) at junit.framework.JUnit4TestAdapter.<init>(JUnit4TestAdapter.java:17) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:386) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) Caused by: java.lang.NoSuchMethodException: dk.gensam.gaia.business.bonusregulering.TestBonusregulerAftale$Test1Reader.<init>() at java.lang.Class.getConstructor0(Class.java:2706) at java.lang.Class.getConstructor(Class.java:1657) at org.junit.internal.runners.MethodValidator.validateNoArgConstructor(MethodValidator.java:52) I have no public no-arg constructor in the class, but is this really necessary? This is my ant target <target name="junit" description="Execute unit tests" depends="compile, jar-test"> <delete dir="tmp/rawtestoutput"/> <delete dir="test-reports"/> <mkdir dir="tmp/rawtestoutput"/> <junit printsummary="true" failureproperty="junit.failure" fork="true"> <classpath refid="class.path.test"/> <classpath refid="class.path.model"/> <classpath refid="class.path.gui"/> <classpath refid="class.path.jfreereport"/> <classpath path="tmp/${test.jar}"></classpath> <batchtest todir="tmp/rawtestoutput"> <fileset dir="${build}/test"> <include name="**/*Test.class" /> <include name="**/Test*.class" /> </fileset> </batchtest> </junit> <junitreport todir="tmp"> <fileset dir="tmp/rawtestoutput"/> <report todir="test-reports"/> </junitreport> <fail if="junit.failure" message="Unit test(s) failed. See reports!"/> </target> The test class has no constructors, but it has an inner class with the default modifier. It also has an anonymous inner class. Both inner classes give the "Test class should have public zero-argument constructor" error. I am using Ant version 1.7.1 and JUnit 4.7

    Read the article

  • Why do I get the error "prefix [..] is not defined" when I try to use my JSF custom tag?

    - by Roman
    I created a JSF custom tag (I'm not sure that it's correct, I could have missed something easily, so I attached the code below). Now I'm trying to use this tag but I get an error: error on line 28 at column 49: Namespace prefix gc on ganttchart is not defined So, here is the xhtml page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:gc="http://myganttchart.org"> <body> <ui:composition template="/masterpage.xhtml"> <ui:define name="title">Gantt chart test</ui:define> <ui:define name="content"> <f:view> <gc:ganttchart width="300" height="100" rendered="true"/> ... </f:view> </ui:define> </ui:composition> </body> </html> And here is the TLD file (it's placed in WEB-INF/): <taglib xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.1"> <tlib-version> 1.0 </tlib-version> <short-name> oext </short-name> <uri> http://myganttchart.org </uri> <tag> <name>ganttchart</name> <tag-class>usermanagement.support.ganttchart.GanttChartTag</tag-class> <body-content>empty</body-content> <attribute> <name>binding</name> <deferred-value> <type>javax.faces.component.UIComponent</type> </deferred-value> </attribute> ... </tag> </taglib> Here is part of the tag class code: public class GanttChartTag extends UIComponentELTag { private ValueExpression width; private ValueExpression height; private ValueExpression styleClass; public String getComponentType () { return "org.myganttchart"; } public String getRendererType () { return null; } ... } The corresponding block from faces-config: <component> <component-type>org.myganttchart</component-type> <component-class>usermanagement.support.ganttchart.UIGanttChart</component-class> </component> And the last part of UIGanttChart: public class UIGanttChart extends UIOutput { public UIGanttChart() { setRendererType (null); } //some test code public void encodeBegin(FacesContext context) throws IOException { ResponseWriter writer = context.getResponseWriter (); writer.startElement("img", this); writer.writeAttribute("src", "no-img", "source"); writer.writeAttribute("width", getAttributes ().get ("width"), "width"); writer.writeAttribute("height", getAttributes ().get ("height"), "height"); writer.writeAttribute("class", ".someclass", "styleClass"); writer.endElement("img"); } } So, what did I miss?

    Read the article

  • C++ std::vector insert segfault

    - by ArunSaha
    I am writing a test program to understand vectors better. In one of the scenarios, I am trying to insert a value into the vector at a specified position. The code compiles cleanly. However, on execution, it is throwing a Segmentation Fault from the v8.insert(..) line (see code below). I am confused. Can somebody point me to what is wrong in my code? #define UNIT_TEST(x) assert(x) #define ENSURE(x) assert(x) #include <vector> typedef std::vector< int > intVector; typedef std::vector< int >::iterator intVectorIterator; typedef std::vector< int >::const_iterator intVectorConstIterator; intVectorIterator find( intVector v, int key ); void test_insert(); intVectorIterator find( intVector v, int key ) { for( intVectorIterator it = v.begin(); it != v.end(); ++it ) { if( *it == key ) { return it; } } return v.end(); } void test_insert() { const int values[] = {10, 20, 30, 40, 50}; const size_t valuesLength = sizeof( values ) / sizeof( values[ 0 ] ); size_t index = 0; const int insertValue = 5; intVector v8; for( index = 0; index < valuesLength; ++index ) { v8.push_back( values[ index ] ); } ENSURE( v8.size() == valuesLength ); for( index = 0; index < valuesLength; ++index ) { printf( "index = %u\n", index ); intVectorIterator it = find( v8, values[ index ] ); ENSURE( it != v8.end() ); ENSURE( *it == values[ index ] ); // intVectorIterator itToInsertedItem = v8.insert( it, insertValue ); // line 51 // UNIT_TEST( *itToInsertedItem == insertValue ); } } int main() { test_insert(); return 0; } $ ./a.out index = 0 Segmentation Fault (core dumped) (gdb) bt #0 0xff3a03ec in memmove () from /platform/SUNW,T5140/lib/libc_psr.so.1 #1 0x00012064 in std::__copy_move_backward<false, true, std::random_access_iterator_tag>::__copy_move_b<int> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:575 #2 0x00011f08 in std::__copy_move_backward_a<false, int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:595 #3 0x00011d00 in std::__copy_move_backward_a2<false, int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:605 #4 0x000119b8 in std::copy_backward<int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:640 #5 0x000113ac in std::vector<int, std::allocator<int> >::_M_insert_aux (this=0xffbfeba0, __position=..., __x=@0xffbfebac) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/vector.tcc:308 #6 0x00011120 in std::vector<int, std::allocator<int> >::insert (this=0xffbfeba0, __position=..., __x=@0xffbfebac) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/vector.tcc:126 #7 0x00010bc0 in test_insert () at vector_insert_test.cpp:51 #8 0x00010c48 in main () at vector_insert_test.cpp:58 (gdb) q

    Read the article

  • Set up database for unit tests with Spring, Hibernate and Spring Transaction Support

    - by Michael Bulla
    I want to test integration of dao-layer with my service-layer in a unit test. So I need to set up some data in my database (hsql). For this setup I need a transaction of its own at the beginning of my test case to ensure that all my setup is really committed to the database before starting my test case. So here's what I want to achieve: // NotTransactional public void doTest { // transaction begins setup database // commit transaction service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } Here is my code, which does not work: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"/asynchUnit.xml"}) @DirtiesContext(classMode=ClassMode.AFTER_EACH_TEST_METHOD) public class ReceiptServiceTest implements ApplicationContextAware { @Autowired(required=true) private UserHome userHome; private ApplicationContext context; @Before @Transactional(propagation=Propagation.REQUIRED) public void init() throws Exception { User user = InitialCreator.createUser(); userHome.persist(user); } @Test public void testDoSomething() { ... } } Leading to this exception: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687) at de.diandan.asynch.modell.GenericHome.getSession(GenericHome.java:40) at de.diandan.asynch.modell.GenericHome.persist(GenericHome.java:53) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:318) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:196) at $Proxy28.persist(Unknown Source) at de.diandan.asynch.service.ReceiptServiceTest.init(ReceiptServiceTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at 
    org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) I don't know what's the right way to get the transaction around the database setup. What I tried: @Before @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } - Spring seems not to start a transaction in @Before-annotated methods. Beyond that, that's not what I really want, because there are a lot of methods in my test class which need a slightly different setup, so I need several such init methods. @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } public void doTest { init(); service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- init seems not to get started in a transaction What I don't want to do: public void doTest { // doing my own transaction-handling setup database // doing my own transaction-handling service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- mixing Spring's transaction handling and my own seems like a pain. @Transactional(propagation=Propagation.REQUIRED) public void doTest { setup database service.doStuff() } -- I want to test as realistic a situation as possible, so my service should start with a clean session and no transaction opened So what's the right way to set up the database for my test case?

    Read the article

  • Problem opening Eclipse

    - by Marchosius
    I'm having a problem loading Eclipse in 12.04. I loaded the error log and this was inside: !SESSION 2012-09-03 16:52:09.742 ----------------------------------------------- eclipse.buildId=I20110613-1736 java.version=1.7.0_07 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_GB Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.osgi 4 0 2012-09-03 16:52:11.317 !MESSAGE Application error !STACK 1 java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: no swt-gtk-3740 in java.library.path no swt-gtk in java.library.path Can't load library: /home/marcel/.swt/lib/linux/x86_64/libswt-gtk-3740.so Can't load library: /home/marcel/.swt/lib/linux/x86_64/libswt-gtk.so at org.eclipse.swt.internal.Library.loadLibrary(Library.java:285) at org.eclipse.swt.internal.Library.loadLibrary(Library.java:194) at org.eclipse.swt.internal.C.<clinit>(C.java:21) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54) at org.eclipse.swt.widgets.Display.<clinit>(Display.java:132) at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:695) at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:161) at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:153) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:95) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:344) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:622) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:577) at org.eclipse.equinox.launcher.Main.run(Main.java:1410) at org.eclipse.equinox.launcher.Main.main(Main.java:1386) I had OpenJDK installed and removed and replaced it with Oracle Java in order to install Aptana Studio. This thread explains it all. So now I have reinstalled OpenJDK; might this give some insight into the problem?
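
    The UnsatisfiedLinkError means the launcher could not load the native SWT library. Two things commonly behind it are an architecture mismatch between the JVM and the Eclipse build, and a stale cached copy under ~/.swt (a diagnostic sketch, with paths taken from the error log; treat them as assumptions):

        # Both should report the same architecture (here, 64-bit)
        java -version
        file /path/to/eclipse/eclipse   # hypothetical install path
        # Remove the cached SWT libraries the log shows it trying to load;
        # Eclipse re-extracts them on the next start
        rm -rf ~/.swt/lib/linux/x86_64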

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating a newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Center's advanced inventory and reporting features provide assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementation. Important distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much, much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g.
    discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (outside of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10. Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements (Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly. IMPORTANT: In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time. HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10) Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
    SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can use Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS! (this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses a built-in feature to call “lucreate -n S10_12_07REC” behind the scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, they will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
    Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE) “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers. For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward. For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straightforward way to patch, and far fewer prerequisites to satisfy in getting there. Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment. Please let us know what you think! Until next time... -- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn | Newsletter
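
    For reference, a minimal sketch of the underlying OS commands that Ops Center drives in the two flows described above (typical usage is assumed; the BE name and patch directory are placeholders):

        # Solaris 10: create, patch, and activate an alternate boot environment
        lucreate -n S10_12_07REC
        luupgrade -t -n S10_12_07REC -s /var/tmp/patches
        luactivate S10_12_07REC && init 6
        # Solaris 11: IPS creates a BE as needed; activation is a reboot
        pkg update entire
        beadm list
        reboot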

    Read the article

  • What should JavaScript be renamed to? [closed]

    - by Evan Plaice
    Background: I have been watching Douglas Crockford's series of presentations about JavaScript history (which I highly recommend) lately and one comment of his specifically piqued my attention. The trademark for 'JavaScript' is owned by Oracle. History: Due to time constraints at Netscape, the language was literally written in weeks and released in very buggy form. To make it seem more appealing, Netscape picked the name JavaScript to appeal to the massively growing population of Java developers. Unfortunately, this pissed off Sun and stirred up a lot of controversy between the two organizations. At some point, they came to an agreement whereby Netscape was given permission to use the name as long as Sun owned the trademark. Some people incorrectly refer to JavaScript as ECMAScript because that's where the standard for the language is registered but, aside from its current marketing-driven label, it doesn't really have a name. Fast forward: Sun goes down, only to be swallowed by Oracle, which has no reservations about litigating for profit and now owns the name. So... If Oracle decides and forces JavaScript to take on a new name, what name would best represent the language?

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010. Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • Ubuntu 10.04 - unable to install Arduino

    - by Newbie
Hello! At the moment, I am trying to install Arduino on my Ubuntu 10.04 (32-bit) computer. I downloaded the latest release at http://arduino.cc/en/Main/Software, cd'ed to the directory, and unzipped the package. When I try to run ./arduino, I get the following error: Exception in thread "main" java.lang.ExceptionInInitializerError at processing.app.Base.main(Base.java:112) Caused by: java.awt.HeadlessException at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231) at processing.core.PApplet.<clinit>(Unknown Source) ... 1 more Here is my java -version output: java version "1.6.0_20" OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1) OpenJDK Server VM (build 19.0-b09, mixed mode) Any suggestions on this? That was my attempt to install Arduino without the 'arduino' package. I also tried to install it with apt-get (sudo apt-get install arduino). When I try to start it (using the arduino command), I get the following error: Exception in thread "main" java.lang.ExceptionInInitializerError at processing.app.Preferences.load(Preferences.java:553) at processing.app.Preferences.load(Preferences.java:549) at processing.app.Preferences.init(Preferences.java:142) at processing.app.Base.main(Base.java:188) Caused by: java.awt.HeadlessException at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231) at processing.core.PApplet.<clinit>(PApplet.java:224) ... 4 more Update: I saw that I had installed several versions of the JRE (Sun and OpenJDK), so I uninstalled the OpenJDK JRE. Now, when calling arduino, I get a new error: java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path thrown while loading gnu.io.RXTXCommDriver Exception in thread "main" java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734) at java.lang.Runtime.loadLibrary0(Runtime.java:823) at java.lang.System.loadLibrary(System.java:1028) at gnu.io.CommPortIdentifier.<clinit>(CommPortIdentifier.java:123) at processing.app.Editor.populateSerialMenu(Editor.java:965) at processing.app.Editor.buildToolsMenu(Editor.java:717) at processing.app.Editor.buildMenuBar(Editor.java:502) at processing.app.Editor.<init>(Editor.java:194) at processing.app.Base.handleOpen(Base.java:698) at processing.app.Base.handleOpen(Base.java:663) at processing.app.Base.handleNew(Base.java:578) at processing.app.Base.<init>(Base.java:318) at processing.app.Base.main(Base.java:207)
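The last trace points at the RXTX native serial library rather than the JVM itself. A hedged sketch of the usual fix on Ubuntu, assuming the standard librxtx-java package layout (verify the paths on your system before relying on them):

    # Install the system RXTX library, which provides librxtxSerial.so:
    sudo apt-get install librxtx-java

    # Point the unzipped IDE at the system copy; the one bundled in
    # the zip may not match your JVM or architecture (the arduino-*
    # path is an example of where the zip was extracted):
    cd ~/arduino-*/lib
    mv librxtxSerial.so librxtxSerial.so.bak   # keep the original around
    ln -s /usr/lib/jni/librxtxSerial.so .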

    Read the article

MAA with the Database Machine: Maximum Availability

    - by Fekete Zoltán
A few days ago an Oracle white paper dissecting maximum availability was published :): Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine. This document examines the use of Oracle Data Guard in an Exadata environment. On the last pages we also find some extremely useful links. So what is Data Guard good for? - handling disaster situations - rolling database upgrades - one approach to migrating onto an Exadata environment - putting the standby database to work. The Sun Oracle Database Machine comes in three configurations: Full Rack, Half Rack, and Quarter Rack. It can be upgraded, and even many Full Racks connected together can operate as a single machine. So the sky is the limit! :) After all, the Sun is our most important star. The Database Machine already provides high availability on its own, since every component important to its operation is at least duplicated! Naturally this applies to the data as well. The Database Machine is an ideal, fast environment for running both OLTP and DW workloads, as well as for database consolidation. For transactional (OLTP) systems it has long been an important requirement that behind the primary site there be a disaster site that can take over database operations if flood, fire, or some other sad catastrophe were to strike the primary site. Nowadays the MAA architecture, that is, the Maximum Availability Architecture, plays an important role in operating data warehouses (DW) as well. The pdf can be downloaded here: Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine.

    Read the article

  • The Eight Most-Important EBS Techstack Stories in 2010

    - by Steven Chan
I've never really understood the custom of stuffing a summary of one's family's activities for the year in a Christmas/Hanukkah/Kwanzaa card.  It seems a little self-congratulatory and impersonal.  I'd rather my friends kept authentically in touch throughout the year, but perhaps that's just me. Nonetheless, I see the value of a year-end summary in the IT industry.  I spend a lot of time helping our customers understand the latest new developments... and straightening out confusion over changes to the old and familiar.  It can be hard to keep up with the latest news in this space. Here are the eight most-important news items for 2010, with suggested actions for Apps DBAs:
1. Premier Support for EBS 11.5.10 ended on November 30, 2010. You need to be on a minimum baseline of 11.5.10 patches to be eligible for Extended Support.  New patches for EBS 11i released during the Extended Support period will be produced only for the minimum baseline configuration.  Action: Ensure that your EBS 11i environments meet the minimum baseline requirements.
2. Minimum baselines are emerging for EBS 12.0 Extended Support. Extended Support for EBS 12.0 begins on February 1, 2012.  That's only 13 months away.  Minimum baselines haven't been finalized yet, but the 12.0.6 Release Update Pack and the Financials CPC July 2009 are currently slated.  Action: Ensure that your EBS 12.0 environments meet the currently-specified baseline requirements.
3. Sun, Windows, and Linux users should have upgraded to JDK 6 by now. JDK 5's End of Service Life was October 30, 2009 for those three platforms.  If you're running the E-Business Suite on Sun, Windows, or Linux, you should upgrade your EBS servers to JDK 6.  Alternatively, you can purchase Java for Business support (the equivalent of Extended Support for Java).  Action: Upgrade your Sun, Windows, or Linux EBS servers to JDK 6.
4. Premier Support for Database 10gR2 ended on July 31, 2010. The 10gR2 Database is now in Extended Support.  If you're still on 10gR2, you should start planning your upgrade to a higher certified database version such as 11gR2 11.2.0.2.  Action: Upgrade your 10gR2 databases to 11gR2 11.2.0.2. 

    Read the article

  • DC Comics Identifies Krypton on the Star Map

    - by Jason Fitzpatrick
This week Action Comics Superman #14 hits the stands, and DC Comics reveals the actual location of Krypton, delivered by none other than beloved astrophysicist Neil Tyson. Phil Plait at Bad Astronomy reports on the resolution of fans' long-standing curiosity about the location of Krypton: Well, that’s about to change. DC comics is releasing a new book this week – Action Comics Superman #14 – that finally reveals the answer to this stellar question. And they picked a special guest to reveal it: my old friend Neil Tyson. Actually, Neil did more than just appear in the comic: he was approached by DC to find a good star to fit the story. Red supergiants don’t work; they explode as supernovae when they are too young to have an advanced civilization rise on any orbiting planets. Red giants aren’t a great fit either; they can be old, but none is at the right distance to match the storyline. It would have to be a red dwarf: there are lots of them, they can be very old, and some are close enough to fit the plot. I won’t keep you in suspense: the star is LHS 2520, a red dwarf in the southern constellation of Corvus (at the center of the picture here). It’s an M3.5 dwarf, meaning it has about a quarter of the Sun’s mass, a third its diameter, roughly half the Sun’s temperature, and a luminosity of a mere 1% of our Sun’s. It’s only 27 light years away – very close on the scale of the galaxy – but such a dim bulb you need a telescope to see it at all (for any astronomers out there, the coordinates are RA: 12h 10m 5.77s, Dec: -15° 4m 17.9 s).

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris stuff, and even work with Sun's Engineering in the making of a Hierarchical Storage Product, Sun CIS if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery, and pre-sales / architecture design of IDM solutions for most of the big customers in Portugal. To name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards-based architecture, (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc.? JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path: the use of proprietary technologies in interoperability solutions. But then reality kicks in. At an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. Since a good part of the systems and apps were running on UNIX, a connector was needed so that UNIX boxes could authenticate against AD. And that strategy worked, but each new machine required the component to be installed, that component had to be monitored, and each new app had to be independently certified.
A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on the go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were integrated for the first time with five minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to the AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers: - 500K directory entries (users) - 200-300 systems After the initial setup, I personally integrated about 20 systems/apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest. DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example? JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing a sale of part of the company due to monopoly reasons, and companies that operate in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps, and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours, so they can be addressed. On the other hand, we have to do that while keeping in mind that we may have to break up all that effort, and that different countries' legislation may become a problem for a full integration plan. That's a job for user Federation. You don't want to be the one who's telling your President that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better, and I've seen that most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Error while running Jetty Server on port 80 as non root user

    - by user75016
All, I was trying to set up Jetty on port 80, but it's giving an exception saying permission denied, as below. I have set up Jetty to work with setuid and configured start.ini as follows: OPTIONS=Server,jsp,jmx,resources,websocket,ext,plus,annotations,jta,jdbc,setuid with etc/jetty-setuid.xml as the first configuration file in start.ini, and the username and group name of the non-root user set in jetty-setuid.xml. 2012-07-03 15:29:02.411:INFO:oejdp.ScanningAppProvider:Deployment monitor /opt/jetty-hightide-8.1.3.v20120416/contexts at interval 1 2012-07-03 15:29:02.454:WARN:oejuc.AbstractLifeCycle:FAILED [email protected]:80: java.net.SocketException: Permission denied java.net.SocketException: Permission denied at sun.nio.ch.Net.bind(Native Method) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:182) at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:311) at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:260) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59) at org.eclipse.jetty.server.Server.doStart(Server.java:273) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1215) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1138) 2012-07-03 15:29:02.455:WARN:oejuc.AbstractLifeCycle:FAILED org.eclipse.jetty.server.Server@66da9ea4: java.net.SocketException: Permission denied java.net.SocketException: Permission denied
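The setuid feature only helps if the JVM starts with root privileges in the first place: it binds the privileged ports as root and then drops to the user configured in jetty-setuid.xml, so launching the server directly as the non-root user fails with exactly this SocketException. A hedged sketch of two common ways around it (the java path below is an example; adjust to your installation):

    # Option 1: start as root and let the setuid module drop
    # privileges once port 80 is bound:
    cd /opt/jetty-hightide-8.1.3.v20120416
    sudo java -jar start.jar

    # Option 2: skip setuid and grant the JVM binary the capability
    # to bind privileged ports (this applies to every program run
    # with that java binary, so weigh the security impact):
    sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/jdk1.7.0/bin/java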

    Read the article

  • update-java-alternatives vs update-alternatives --config java

    - by Stan Smith
Thanks in advance from this Ubuntu noob... On Ubuntu 12.04 LTS I have installed Sun's JDK 7, Eclipse, and the Arduino IDE. I want Arduino to use OpenJDK 6 and Eclipse to use Sun's JDK 7. From my understanding, I need to manually choose which Java to use before running each application. This led me to the 'update-java-alternatives -l' command... when I run this I only see the following: java-1.6.0-openjdk-amd64 1061 /usr/lib/jvm/java-1.6.0-openjdk-amd64 ...but when I run 'update-alternatives --config java' I see the following: *0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java manual mode 2 /usr/lib/jvm/jdk1.7.0/bin/java manual mode 3 /usr/lib/jvm/jre1.7.0/bin/java manual mode I don't understand why update-java-alternatives doesn't display the same three options. I also don't understand how to switch between OpenJDK 6 and JDK 7. Can someone please explain how I can go about using OpenJDK 6 for Arduino development and Sun JDK 7 for Eclipse/Android development? Thank you in advance for any assistance or feedback you can offer. Stan
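One likely explanation: update-alternatives reads the alternatives database directly, while update-java-alternatives only lists JVMs that provide a .jinfo metadata file under /usr/lib/jvm, which a manually unpacked Sun JDK 7 normally lacks. Rather than flipping the system-wide default back and forth, each IDE can be pinned to its own JVM. A minimal sketch, using the paths from the question (assuming both IDEs are launched from a shell or launcher you control):

    # Run the Arduino IDE against OpenJDK 6 by putting it first on
    # the PATH for just this invocation:
    env PATH=/usr/lib/jvm/java-6-openjdk-amd64/bin:$PATH arduino

    # Run Eclipse against Sun JDK 7; -vm (or the equivalent two-line
    # entry in eclipse.ini) is Eclipse's supported way to pick a JVM:
    eclipse -vm /usr/lib/jvm/jdk1.7.0/bin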

    Read the article

  • Product Update Bulletin: Oracle Solaris Cluster October 2013

    - by uwes
Announcing new qualifications and general news for the Oracle Solaris Cluster product.
Hardware Qualifications: Sun Server X4-2 and X4-2L servers, and the Sun Blade X4-2B server module, with Oracle Solaris Cluster 3.3; Sun Storage 16 Gb Fibre Channel ExpressModule Universal HBA, Emulex; Oracle Dual Port QDR InfiniBand Adapter M3.
Software Qualifications: Oracle Database 12c Real Application Clusters with Oracle Solaris Cluster 4.1; Oracle Database 11.2.0.4 single instance and RAC with Oracle Solaris Cluster 4.1; Oracle VM Server for SPARC 3.1; SAP NetWeaver with new kernel versions; ZFS Storage Appliance Kit versions 2011.1.7.0 and 2013.1.0.0; application monitoring in an Oracle VM for SPARC failover guest domain.
Storage Partner Update: Oracle Solaris Cluster 3.3 3/13 with the HDS enterprise storage arrays; EMC SRDF for Oracle Database 12c RAC in an Oracle Solaris Cluster 4.1 geo cluster configuration.
Oracle Solaris Cluster References: Korea Enterprise Data, HDFC Securities, Dealis Fund Operations.
Web Updates: new blog entry on Oracle Solaris 10 Brand Zone clusters; the Solaris Application Engineering website now includes Oracle Solaris Cluster application support information.
Please read the Oracle Solaris Cluster Product Update Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.)
For More Information Go To: the Oracle.com Oracle Solaris Cluster page, the Oracle Technology Network Oracle Solaris Cluster page, the Oracle Solaris Cluster MOS community, the partner web Oracle Solaris Cluster page, the Oracle Solaris Cluster Blog, and the Solaris.us.oracle.com page

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >