Search Results

Search found 3257 results on 131 pages for 'calvin sun'.


  • Oracle: more layoffs ahead for Sun employees in Europe and Asia: is this a good way to revive the company?

    Update of 07/06/10. Oracle: more layoffs ahead for Sun employees in Europe and Asia: is this a good way to revive the company? Oracle is once again going to lay off some of Sun's roughly 106,000 employees. The cuts will mainly affect the company's Asian and European offices. The number of positions to be eliminated has not yet been specified by Larry Ellison's firm, which acquired Sun at the end of last year. A few clues have leaked out, however. According to Oracle's announcement, this new restructuring plan is expected to cost twice as much as the previous one, which affected 7,600 emp...

    Read the article

  • How to include an external jar on the GWT client side?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to implicitely tell that I intend to use this external library. I thought adding the next line into my App.gwt.xml would work. <inherits name='org.apache.commons.validator.GenericValidator'/> I get the next error: Loading inherited module 'org.apache.commons.validator.GenericValidator' [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source? [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at 
com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at 
com.google.gwt.dev.Compiler.main(Compiler.java:159) Does anyone know how this works?
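
    For context, a GWT <inherits> entry must name a GWT module (a *.gwt.xml file on the compiler's classpath), not a plain Java class, and any code used on the client side must also be available to the GWT compiler as Java source. A minimal sketch of what an App.gwt.xml usually inherits follows; the entry-point class name is hypothetical, and the commented validator line would only be valid if a module descriptor and translatable source for that library actually existed on the classpath:

        <module>
          <!-- Standard GWT core module; this name refers to com/google/gwt/user/User.gwt.xml -->
          <inherits name='com.google.gwt.user.User'/>
          <!-- Hypothetical: works only if the library ships a .gwt.xml module plus its Java source -->
          <!-- <inherits name='org.apache.commons.validator.Validator'/> -->
          <entry-point class='com.example.client.App'/>
        </module>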

    Read the article

  • How to include an external jar in a GWT module?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to implicitely tell that I intend to use this external library. I thought adding the next line into my App.gwt.xml would work. <inherits name='org.apache.commons.validator.GenericValidator'/> I get the next error: Loading inherited module 'org.apache.commons.validator.GenericValidator' [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source? [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at 
com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at 
com.google.gwt.dev.Compiler.main(Compiler.java:159) I have commons.validator-1.3.1.jar in war/WEB-INF/lib, and I am using Eclipse with the Google Plugin. Does anyone know how this works?
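
    One point worth noting: a jar in war/WEB-INF/lib is only on the server classpath; the GWT compiler needs the library's Java source on its own classpath before any of it can be used in client code. If that source is available, one approach (a sketch, not from the original question; the file path is hypothetical) is to write a small module descriptor next to the source so GWT can translate the package, and then inherit it from App.gwt.xml:

        <!-- Hypothetical file: org/apache/commons/validator/Validator.gwt.xml, packaged alongside the library source -->
        <module>
          <!-- Tell the GWT compiler to translate the Java source in this package -->
          <source path=''/>
        </module>

    Even then, every class the validator touches must itself be translatable by GWT, so server-side validation via RPC is often the more practical route.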

    Read the article

  • Error accessing a Web Service with SSL

    - by Elie
    I have a program that is supposed to send a file to a web service, which requires an SSL connection. I run the program as follows: SET JAVA_HOME=C:\Program Files\Java\jre1.6.0_07 SET com.ibm.SSL.ConfigURL=ssl.client.props "%JAVA_HOME%\bin\java" -cp ".;Test.jar" ca.mypackage.Main This was works fine, but when I change the first line to SET JAVA_HOME=C:\Program Files\IBM\SDP\runtimes\base_v7\java\jre I get the following error: com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:119) at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:140) at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:86) at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Fiber.java:593) at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Fiber.java:552) at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Fiber.java:537) at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Fiber.java:434) at com.sun.xml.internal.ws.client.Stub.process(Stub.java:247) at com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:132) at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:242) at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:222) at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:115) at $Proxy26.fileSubmit(Unknown Source) at com.testing.TestingSoapProxy.fileSubmit(TestingSoapProxy.java:81) at ca.mypackage.Main.main(Main.java:63) Caused by: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory at javax.net.ssl.DefaultSSLSocketFactory.a(SSLSocketFactory.java:7) at javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:1) at com.ibm.net.ssl.www2.protocol.https.c.afterConnect(c.java:110) at com.ibm.net.ssl.www2.protocol.https.d.connect(d.java:14) at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:902) at com.ibm.net.ssl.www2.protocol.https.b.getOutputStream(b.java:86) at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:107) ... 14 more Caused by: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory at javax.net.ssl.SSLJsseUtil.b(SSLJsseUtil.java:20) at javax.net.ssl.SSLSocketFactory.getDefault(SSLSocketFactory.java:36) at javax.net.ssl.HttpsURLConnection.getDefaultSSLSocketFactory(HttpsURLConnection.java:16) at javax.net.ssl.HttpsURLConnection.<init>(HttpsURLConnection.java:36) at com.ibm.net.ssl.www2.protocol.https.b.<init>(b.java:1) at com.ibm.net.ssl.www2.protocol.https.Handler.openConnection(Handler.java:11) at java.net.URL.openConnection(URL.java:995) at com.sun.xml.internal.ws.api.EndpointAddress.openConnection(EndpointAddress.java:206) at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.createHttpConnection(HttpClientTransport.java:277) at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:103) ... 
14 more So it seems this problem is related to the JRE I'm using, but what doesn't make sense is that the non-IBM JRE works fine while the IBM JRE does not. Any ideas or suggestions?
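
    One thing worth checking, as an assumption rather than a confirmed fix: SET com.ibm.SSL.ConfigURL=ssl.client.props only sets an operating-system environment variable, whereas the IBM SSL code typically reads a JVM system property, and the ClassNotFoundException for com.ibm.websphere.ssl.protocol.SSLSocketFactory usually means the IBM JRE's security configuration points at WebSphere classes that are not on the classpath. A sketch of passing the property with -D, reusing the same file names as above (the file: prefix is an assumption):

        SET JAVA_HOME=C:\Program Files\IBM\SDP\runtimes\base_v7\java\jre
        "%JAVA_HOME%\bin\java" -cp ".;Test.jar" ^
            -Dcom.ibm.SSL.ConfigURL=file:ssl.client.props ^
            ca.mypackage.Main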

    Read the article

  • XMI format error loading project on argouml

    - by Tom Brito
    Have anyone experienced this (org.argouml.model.)XmiException opening a project lastest version of argouml? XMI format error : org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace If this file was produced by a tool other than ArgoUML, please check to make sure that the file is in a supported format, including both UML and XMI versions. If you believe that the file is legal UML/XMI and should have loaded or if it was produced by any version of ArgoUML, please report the problem as a bug by going to http://argouml.tigris.org/project_bugs.html. System Info: ArgoUML version : 0.30 Java Version : 1.6.0_15 Java Vendor : Sun Microsystems Inc. Java Vendor URL : http://java.sun.com/ Java Home Directory : /usr/lib/jvm/java-6-sun-1.6.0.15/jre Java Classpath : /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/deploy.jar Operation System : Linux, Version 2.6.31-20-generic Architecture : i386 User Name : wellington User Home Directory : /home/wellington Current Directory : /home/wellington JVM Total Memory : 34271232 JVM Free Memory : 10512336 Error occurred at : Thu Apr 01 11:21:10 BRT 2010 Cause : org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:307) at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:273) at org.argouml.persistence.XmiFilePersister.doLoad(XmiFilePersister.java:261) at org.argouml.ui.ProjectBrowser.loadProject(ProjectBrowser.java:1597) at org.argouml.ui.LoadSwingWorker.construct(LoadSwingWorker.java:89) at org.argouml.ui.SwingWorker.doConstruct(SwingWorker.java:153) at org.argouml.ui.SwingWorker$2.run(SwingWorker.java:281) at java.lang.Thread.run(Thread.java:619) Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:232) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1359) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:136) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:98) at org.netbeans.lib.jmi.xmi.SAXReader.read(SAXReader.java:56) at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:233) ... 
7 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiElement$Instance.setReferenceValues(XmiElement.java:699) at org.netbeans.lib.jmi.xmi.XmiElement$Instance.resolveAttributeValue(XmiElement.java:772) at org.netbeans.lib.jmi.xmi.XmiElement$Instance. (XmiElement.java:496) at org.netbeans.lib.jmi.xmi.XmiContext.resolveInstanceOrReference(XmiContext.java:688) at org.netbeans.lib.jmi.xmi.XmiElement$ObjectValues.startSubElement(XmiElement.java:1460) at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:219) ... 22 more ------- Full exception : org.argouml.persistence.XmiFormatException: org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:298) at org.argouml.persistence.XmiFilePersister.doLoad(XmiFilePersister.java:261) at org.argouml.ui.ProjectBrowser.loadProject(ProjectBrowser.java:1597) at org.argouml.ui.LoadSwingWorker.construct(LoadSwingWorker.java:89) at org.argouml.ui.SwingWorker.doConstruct(SwingWorker.java:153) at org.argouml.ui.SwingWorker$2.run(SwingWorker.java:281) at java.lang.Thread.run(Thread.java:619) Caused by: org.argouml.model.XmiException: XMI parsing error at line: 18: Cannot set a multi-value to a non-multivalued reference:namespace at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:307) at org.argouml.persistence.ModelMemberFilePersister.readModels(ModelMemberFilePersister.java:273) ... 6 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:232) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1359) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:136) at org.netbeans.lib.jmi.xmi.XmiSAXReader.read(XmiSAXReader.java:98) at org.netbeans.lib.jmi.xmi.SAXReader.read(SAXReader.java:56) at org.argouml.model.mdr.XmiReaderImpl.parse(XmiReaderImpl.java:233) ... 
7 more Caused by: org.netbeans.lib.jmi.util.DebugException: Cannot set a multi-value to a non-multivalued reference:namespace at org.netbeans.lib.jmi.xmi.XmiElement$Instance.setReferenceValues(XmiElement.java:699) at org.netbeans.lib.jmi.xmi.XmiElement$Instance.resolveAttributeValue(XmiElement.java:772) at org.netbeans.lib.jmi.xmi.XmiElement$Instance. (XmiElement.java:496) at org.netbeans.lib.jmi.xmi.XmiContext.resolveInstanceOrReference(XmiContext.java:688) at org.netbeans.lib.jmi.xmi.XmiElement$ObjectValues.startSubElement(XmiElement.java:1460) at org.netbeans.lib.jmi.xmi.XmiSAXReader.startElement(XmiSAXReader.java:219) ... 22 more The original project was created in ArgoUML 0.28.1 and, as I remember, contained only use case diagrams. And yes, I will also report the problem at the ArgoUML website mentioned above. :) But does anyone know anything about this exception?

    Read the article

  • Can VMware Workstation 7.x and Sun VirtualBox 3.1.x co-exist on the same Windows 7 64-bit host together?

    - by Heston T. Holtmann
    Background info: My old workstation host was 32-bit Ubuntu 9.04 running Sun VirtualBox 3.x, hosting a Windows XP guest VM for Windows software development (VS2008, etc.). My new workstation host is 64-bit Windows 7 running VMware Workstation 7 to host 32-bit Ubuntu 9.10 for Linux project work. Needs: I need to get my original VirtualBox Windows XP guest running on my new Windows 7 workstation, either imported into VMware or running on the Windows version of Sun VirtualBox (I have the guest backed up and copied to the new computer's data drive). Problem: I don't need to run VMs from both virtual-machine software packages at the same time, but I do need to run some older VirtualBox VMs on the same 64-bit Windows 7 host until I can migrate them to VMware. Before switching from the Linux host to the Windows host, I made sure to export the VirtualBox VM to an OVF "appliance" with the intention of importing it into VMware Workstation 7, but VMware gives me an error stating it can't import it. Question: Will installing Sun VirtualBox break or interfere with my VMware installation?

    Read the article

  • com.sun.management.OperatingSystemMXBean use in an OSGi bundle

    - by Paul Whelan
    I have some legacy code that was used to monitor my application's CPU, memory, etc., that I want to convert to a bundle. Now when I start this bundle it complains: Missing Constraint: Import-Package: com.sun.management; version="0.0.0" I had used the OperatingSystemMXBean to get access to stats on the JVM. My question is: can I use this class inside an OSGi container, and if so, how? Or should I use some other way to monitor my application? Before OSGi, I was making an RMI call to the application from a web frontend to get the node's performance figures.
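
    Since com.sun.management is a JDK package rather than one exported by any bundle, the usual approach is to have the OSGi framework itself expose it from the system bundle. A sketch of the framework launch configuration; the property names are standard OSGi framework properties, while the exact file they go in depends on the container (for example config.properties in Felix or config.ini in Equinox):

        # Ask the framework's system bundle to export the JDK package so Import-Package can resolve
        org.osgi.framework.system.packages.extra=com.sun.management
        # Alternatively (coarser), delegate loading of the package to the boot class loader
        org.osgi.framework.bootdelegation=com.sun.*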

    Read the article

  • producing pixel-identical images of text between Sun Java and OpenJDK

    - by yuvi
    My release script produces images of the version number to save me the trouble of manually going into the MoinMoin wiki software and changing it by hand for each release. Unfortunately, since the fonts look a little different on each platform's JVM, the result is ugly. I solved the font inconsistency by using Lucida Sans (which comes with every Java system). (Loading fonts from TTF files was another option, but it was buggy on Mac Java.) The result is much better, producing the exact same image on Mac and Windows, but a slightly different one on OpenJDK. I believe this is caused by OpenJDK having a different font rendering system (as opposed to different fonts). Is there any way I can get all three of my target platforms (Sun Windows, Mac, OpenJDK Linux) to produce images of text that look identical?
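
    One knob that may help, offered as a sketch rather than a guaranteed fix: the JVMs mainly differ in glyph rasterization, so forcing the most deterministic Java2D settings, no text antialiasing and no fractional metrics, before drawing removes most of the per-platform variation. Class, file, and text names below are hypothetical:

        import java.awt.*;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class VersionImage {
            public static void main(String[] args) throws Exception {
                BufferedImage img = new BufferedImage(200, 40, BufferedImage.TYPE_INT_RGB);
                Graphics2D g2 = img.createGraphics();
                g2.setColor(Color.WHITE);
                g2.fillRect(0, 0, 200, 40);
                // Disable antialiasing and fractional metrics so glyph shapes and advances
                // are computed the same deterministic way on every JVM.
                g2.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_OFF);
                g2.setRenderingHint(RenderingHints.KEY_FRACTIONALMETRICS, RenderingHints.VALUE_FRACTIONALMETRICS_OFF);
                g2.setColor(Color.BLACK);
                g2.setFont(new Font("Lucida Sans", Font.PLAIN, 18));
                g2.drawString("version 1.2.3", 10, 26);
                g2.dispose();
                ImageIO.write(img, "png", new File("version.png"));
            }
        }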

    Read the article

  • Can VMware Workstation 7.x and Sun VirtualBox 3.1.x co-exist on the same Windows 7 64-bit host together?

    - by Heston T. Holtmann
    Will installing Sun VirtualBox break or interfere with my VMware installation? I don't need to run VMs from both virtual-machine software packages at the same time, but I do need to run some older VirtualBox VMs on the same 64-bit Windows 7 host until I can migrate them to VMware. Before switching from the Linux host to the Windows host, I made sure to export the VirtualBox VM to an OVF "appliance" with the intention of importing it into VMware Workstation 7, but VMware gives me an error stating it can't import it. Background info: my old workstation host was 32-bit Ubuntu 9.04 running Sun VirtualBox 3.x, hosting a Windows XP guest VM for Windows software development (VS2008, etc.). My new workstation host is 64-bit Windows 7 running VMware Workstation 7 to host 32-bit Ubuntu 9.10 for Linux project work. Needs: I need to get my original VirtualBox Windows XP guest running on my new Windows 7 workstation, either imported into VMware or running on the Windows version of Sun VirtualBox (I have the guest backed up and copied to the new computer's data drive).
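
    For reference, the OVF export described above can also be done from the VirtualBox command line, and if VMware Workstation still refuses the resulting .ovf, a common fallback is to skip OVF entirely, convert only the disk image, and attach it to a freshly created VMware VM. This is a sketch; the machine and file names are hypothetical:

        rem Export the guest to an OVF appliance (VirtualBox command-line syntax)
        VBoxManage export "WinXP-Dev" --output "C:\VMs\WinXP-Dev.ovf"

        rem Fallback: convert just the disk image to VMDK for use in a new VMware VM
        VBoxManage clonehd "C:\VMs\WinXP-Dev.vdi" "C:\VMs\WinXP-Dev.vmdk" --format VMDK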

    Read the article

  • Mac OS X: java.lang.ClassNotFoundException: com.sun.java.browser.plugin2.DOM

    - by Thilo
    I am trying to use the new LiveConnect features introduced in Java 6 Update 10. The code looks like this (copied from the applet tutorial): Class<?> c = Class.forName("com.sun.java.browser.plugin2.DOM"); Method m = c.getMethod("getDocument", java.applet.Applet.class); Document document = (Document) m.invoke(null, this); But all I am getting is a ClassNotFoundException for the entry-point class. This is on Mac OS X 10.6, with both Firefox and Safari: Java Plug-in 1.6.0_22, using JRE version 1.6.0_22-b04-307-10M3261, Java HotSpot(TM) 64-Bit Server VM. Is this not implemented on the Mac? Or do I need to configure something? All I need to do is get and set the value of form elements on the page, so I would be fine with an older (pre-6u10) API if that works better.
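
    Since an older (pre-6u10) API would be acceptable, here is a sketch of the classic LiveConnect route via netscape.javascript.JSObject, which the Apple-shipped plugin generally supports; plugin.jar from the JRE must be on the compile classpath, and the form and field names below are hypothetical:

        import java.applet.Applet;
        import netscape.javascript.JSObject;

        public class FormAccessApplet extends Applet {
            @Override
            public void start() {
                // Get the browser window object for this applet via LiveConnect.
                JSObject window = JSObject.getWindow(this);
                // Read and set a form field through the page's DOM.
                Object value = window.eval("document.forms['f'].elements['version'].value");
                System.out.println("Current value: " + value);
                window.eval("document.forms['f'].elements['version'].value = 'updated'");
            }
        }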

    Read the article

  • MalformedURLException with file URI

    - by Paul Reiners
    While executing the following code: doc = builder.parse(file); where doc is an instance of org.w3c.dom.Document and builder is an instance of javax.xml.parsers.DocumentBuilder, I'm getting the following exception: Exception in thread "main" java.net.MalformedURLException: unknown protocol: c at java.net.URL.<init>(Unknown Source) at java.net.URL.<init>(Unknown Source) at java.net.URL.<init>(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source) at javax.xml.parsers.DocumentBuilder.parse(Unknown Source) at com.acme.ItemToThetaValues.createFiles(ItemToThetaValues.java:47) It's choking on this line of the file: <!DOCTYPE questestinterop SYSTEM "C:\Program Files\Acme\parsers\acme_full.dtd"> I am not getting this error on my machine, while a user is getting it on his machine. We are both using version 6 of the Sun JRE. This error also occurs when he's uses double backslashes in the path instead of single backslashes and when he uses forward slashes instead of backslashes. First of all, is the XML correct? Is the path expressed correctly? Second of all, why is this error occurring on one computer but not on another?
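
    The "unknown protocol: c" message suggests the parser is treating the Windows path in the DOCTYPE as a URL, so "C:" is read as a protocol scheme. Two commonly suggested workarounds, shown as a sketch using the DTD path from the question: express the system identifier as a proper file: URI, or resolve it yourself with an EntityResolver so the document no longer depends on an absolute local path:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.EntityResolver;
        import org.xml.sax.InputSource;

        public class ParseWithDtd {
            public static void main(String[] args) throws Exception {
                DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                // Redirect the DOCTYPE system identifier to a known local copy of the DTD,
                // expressed as a real file: URI instead of a bare Windows path.
                builder.setEntityResolver(new EntityResolver() {
                    public InputSource resolveEntity(String publicId, String systemId) {
                        File dtd = new File("C:\\Program Files\\Acme\\parsers\\acme_full.dtd");
                        return new InputSource(dtd.toURI().toString());
                    }
                });
                Document doc = builder.parse(new File(args[0]));
                System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
            }
        }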

    Read the article

  • Meaning of the "Unloading class" messages

    - by elec
    Can anyone explain why the lines below appear in the output console at runtime? (One possible answer would be a full PermGen, but this can be ruled out since the program only uses 24 MB of the maximum 100 MB available in PermGen.)
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4]
        [Unloading class sun.reflect.GeneratedMethodAccessor5]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16]
        [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor2]
        [Unloading class sun.reflect.GeneratedConstructorAccessor1]
    The program runs with the following params: -Xmx160M -XX:MaxPermSize=96M -XX:PermSize=96M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+PrintGCTaskTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:/logs/gc.log There's plenty of space in the heap and in PermGen.
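
    For what it's worth, and this is hedged general HotSpot behavior rather than an analysis of this specific run: the sun.reflect.Generated*Accessor classes are generated on the fly by the reflection and serialization machinery, and the [Unloading class ...] lines are the collector reporting that those throwaway classes were unloaded during a GC cycle, which is normal and usually harmless. The messages tend to appear because class-unloading tracing is switched on alongside the detailed GC logging; the flag name below is assumed from standard HotSpot options:

        # Print class-unloading events (the behavior observed above)
        java -XX:+TraceClassUnloading ...
        # Or keep the detailed GC logging while leaving class-unloading tracing off
        java -XX:-TraceClassUnloading -verbose:gc -Xloggc:/logs/gc.log ...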

    Read the article

  • How do I install the latest Sun Java JRE on Ubuntu Server 9.10?

    - by blackrobot
    Unfortunately, if I try to install sun-java via apt-get, it's not found in the repositories.
        # apt-get install sun-java6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package sun-java6-jre is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package sun-java6-jre has no installation candidate
    If I try to install it using the bin from Sun's website, here's the issue:
        # ./jre-6u18-linux-i586.bin
        (license agreement...)
        Do you agree to the above license terms? [yes or no]
        yes
        Unpacking...
        Checksumming...
        Extracting...
        ./jre-6u18-linux-i586.bin: 366: ./install.sfx.10648: not found
        Failed to extract the files. Please refer to the Troubleshooting section of the Installation Instructions on the download page for more information.
    Thanks for the help.
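
    On Ubuntu 9.10 the sun-java6-* packages live in the multiverse component, which is typically not enabled on a server install, so enabling it and refreshing the package lists is the usual route. A sketch, to be run as root or with sudo; the mirror URL is an assumption and should match your existing sources.list entries:

        # Enable the multiverse component for karmic and refresh the package lists
        echo "deb http://archive.ubuntu.com/ubuntu karmic multiverse" >> /etc/apt/sources.list
        echo "deb http://archive.ubuntu.com/ubuntu karmic-updates multiverse" >> /etc/apt/sources.list
        apt-get update
        apt-get install sun-java6-jre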

    Read the article

  • javax.servlet.ServletException: WriteText method cannot write null text

    - by Learner
    I have created a web application using JSF + ICEfaces + RichFaces + PrimeFaces. It works great when I run it from Eclipse as a project, but when I created its WAR file and deployed it to GlassFish Server, rendering a page throws this exception: javax.servlet.ServletException: WriteText method cannot write null text. I searched but didn't find any good solution. Quick help is highly appreciated.
    Edit 1: I think this is the relevant part:
        <li class="page_item" id="liMasterSearch"> <!-- this is for hide (<li class="page_item hide" id="liMasterSearch"> applied to every class) -->
            <h:commandLink value="Search" action="#{masterRenderBean.showSimpleSearch}"></h:commandLink>
        </li>
        <li class="page_item" id="liAdvanceSearch">
            <h:commandLink value="Advance Search" action="#{masterRenderBean.showADVS}"></h:commandLink>
        </li>
    Here you can see two links, (1) Search and (2) Advance Search. When I click on Search, it shows the search page (by rendering; actually I have included all pages in the master page and render them from the command-link actions):
        <h:panelGroup rendered="#{not masterRenderBean.simpleSearch}">
            <ui:include src="../../WebPages/SearchPages/MasterSearch.xhtml"></ui:include>
        </h:panelGroup>
    But when I click on the Advance Search link (on which this part should render):
        <h:panelGroup rendered="#{not masterRenderBean.advs}">
            <ui:include src="../../WebPages/SearchPages/PersonalAdvanceSearch.xhtml"/>
        </h:panelGroup>
    the browser shows the above exception. NOTE: this problem occurs only in the deployed application; it does not occur when I run the application from Eclipse.
    Edit 2: I found in the server logs that this exception is coming from ICEfaces, and that this portion of code:
        <ace:autoCompleteEntry id="txtplaceofbirth" rows="10" autocomplete="false" minChars="2" width="150" value="#{inputPersonal.selectedplcofBirth}" filterMatchMode="none" valueChangeListener="#{inputPersonal.valueChangeEventCity}">
            <f:selectItems value="#{inputPersonal.cities}"/>
        </ace:autoCompleteEntry></h:outputFormat>
    is messing things up. Any idea why this is happening?
Edit #3: Here is the full tack trace of exception [#|2012-11-19T09:55:48.026+0500|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=53;_ThreadName=Thread-2;|java.lang.NullPointerException: WriteText method cannot write null text at org.icefaces.impl.context.DOMResponseWriter.writeText(DOMResponseWriter.java:314) at org.icefaces.impl.context.DOMResponseWriter.writeText(DOMResponseWriter.java:340) at com.sun.faces.renderkit.html_basic.OutputMessageRenderer.encodeEnd(OutputMessageRenderer.java:163) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:875) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1764) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at org.icefaces.impl.renderkit.RendererWrapper.encodeChildren(RendererWrapper.java:49) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeRecursive(HtmlBasicRenderer.java:304) at com.sun.faces.renderkit.html_basic.GroupRenderer.encodeChildren(GroupRenderer.java:105) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at org.icefaces.impl.renderkit.RendererWrapper.encodeChildren(RendererWrapper.java:49) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:845) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1760) at org.icefaces.impl.context.DOMPartialViewContext.processPartial(DOMPartialViewContext.java:142) at javax.faces.component.UIViewRoot.encodeChildren(UIViewRoot.java:981) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1757) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:391) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:131) at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:288) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:121) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:594) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1542) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:281) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045) at 
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:722) |#]
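
    A guess at the immediate cause, hedged since the full page source is not shown: the stack trace ends in OutputMessageRenderer, the renderer behind h:outputFormat, and the snippet above closes an h:outputFormat around the autocomplete component; if that tag's value resolves to null only in the deployed WAR (for example a resource-bundle key that is missing from the packaged application), ICEfaces' DOMResponseWriter refuses to write the null text. A minimal sketch of guarding the value, with hypothetical names:

        <!-- Ensure the formatted value can never be null in the deployed app -->
        <h:outputFormat value="#{empty msg.placeOfBirthLabel ? '' : msg.placeOfBirthLabel}"/>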

    Read the article

  • What ever happened to Java and Sun?

    - by leeand00
    What happened to Java and Sun? The community surrounding them had some of my favorite tools and software to develop with. The Java platform anyway, still looked like it had some promise to it: Groovy and Grails. Why does all of this seem to be going the way of the dodo lately? (Yes, I know their stock price is dropping badly.) Is it just the economy? Or did the lack of cohesion (i.e., not settling on a framework) among the community finally lead to its demise?

    Read the article

  • Java Logger API

    - by Koppar
    This is more of a tip than a technical write-up, and serves as a quick intro for newbies. The Logger API helps diagnose application-level or JDK-level issues at runtime. There are seven levels which decide the amount of detail in logging (SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST). It's best to start with the highest level and, as we narrow down, use more detailed logging for a specific area. SEVERE is the highest and FINEST is the lowest. This may not make sense until we understand some jargon. The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is: I can create a logger with this simple invocation and use it to add debug messages in my class:
        import java.util.logging.*;
        private static final Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager");
        if (focusLog.isLoggable(Level.FINEST)) {
            focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner");
        }
    LogManager acts like a bookkeeper, and all getLogger calls are forwarded to it. The LogManager itself is a singleton class object which gets statically initialized on JVM start-up. More on this later. If there is no existing logger with the given name, a new one is created. If there is one (and it has not yet been GC'ed), then the existing Logger object is returned. By default, a root logger is created on JVM start-up. All anonymous loggers are made children of the root logger. Named loggers form a hierarchy according to their names, e.g. java.awt.focus is the parent logger of java.awt.focus.KeyboardFocusManager. Before logging any message, the logger checks the log level specified. If null is specified, the log level of the parent logger is used. However, if the log level is OFF, no log messages are written, irrespective of the parent's log level. All messages posted to the Logger are handled as LogRecord objects (i.e. focusLog.log creates a new LogRecord object with the log level and message as its data members). The level of logging and the thread number are also tracked. The LogRecord is passed on to all registered Handlers. A Handler is basically a means to output the messages. The output may be redirected to a log file, the console, or a network logging service. The Handler classes use the LogManager properties to set filters and formatters. During initialization or JVM start-up, LogManager looks for the logging.properties file in jre/lib and sets the properties if the file is provided. An alternate location for the properties file can also be specified by setting the java.util.logging.config.file system property. This can be set in Java Control Panel > Java > Runtime parameters as -Djava.util.logging.config.file=<mylogfile> or passed as a command line parameter: java -Djava.util.logging.config.file=C:/Sunita/myLog. The redirection of logging depends on what is specified, or rather registered, as a handler with the JVM in the properties file: java.util.logging.ConsoleHandler sends the output to System.err, and java.util.logging.FileHandler sends the output to a file. The file name of the log file can also be specified.
    If you prefer XML format output, in the configuration file set java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter, and if you prefer simple text, set java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter. Below is the default logging configuration file:
        ############################################################
        # Default Logging Configuration File
        # You can use a different file by specifying a filename
        # with the java.util.logging.config.file system property.
        # For example java -Djava.util.logging.config.file=myfile
        ############################################################
        ############################################################
        # Global properties
        ############################################################
        # "handlers" specifies a comma separated list of log Handler
        # classes. These handlers will be installed during VM startup.
        # Note that these classes must be on the system classpath.
        # By default we only configure a ConsoleHandler, which will only
        # show messages at the INFO and above levels.
        handlers= java.util.logging.ConsoleHandler
        # To also add the FileHandler, use the following line instead.
        #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler
        # Default global logging level.
        # This specifies which kinds of events are logged across
        # all loggers. For any given facility this global level
        # can be overriden by a facility specific level
        # Note that the ConsoleHandler also has a separate level
        # setting to limit messages printed to the console.
        .level= INFO
        ############################################################
        # Handler specific properties.
        # Describes specific configuration info for Handlers.
        ############################################################
        # default file output is in user's home directory.
        java.util.logging.FileHandler.pattern = %h/java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
        # Limit the message that are printed on the console to INFO and above.
        java.util.logging.ConsoleHandler.level = INFO
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
        ############################################################
        # Facility specific properties.
        # Provides extra control for each logger.
        ############################################################
        # For example, set the com.xyz.foo logger to only log SEVERE
        # messages:
        com.xyz.foo.level = SEVERE
    Since I primarily use this method to track focus issues, here is how I get detailed AWT focus related logging: just set the logger name to java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in the messages going to the log file.
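
    As a complement to the properties file (not part of the original post, just a sketch), the same handler and level choices can also be made programmatically, which is handy when you cannot control the JVM's startup flags; the file name is hypothetical:

        import java.util.logging.*;

        public class LogSetup {
            public static void main(String[] args) throws Exception {
                Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager");
                // Programmatic equivalent of the FileHandler.* properties:
                // write FINEST and above to a file in simple text format.
                FileHandler fh = new FileHandler("focus.log");
                fh.setFormatter(new SimpleFormatter());
                fh.setLevel(Level.FINEST);
                focusLog.setLevel(Level.FINEST);
                focusLog.addHandler(fh);
                focusLog.finest("programmatic logging configured");
            }
        }
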
    Example
    -------- KeyboardReader.java --------
        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input");

            public static void main (String[] args) throws java.io.IOException {
                String s1;
                String s2;
                double num1, num2, product;
                // set up the buffered reader to read from the keyboard
                BufferedReader br = new BufferedReader (new InputStreamReader (System.in));
                System.out.println ("Enter a line of input");
                s1 = br.readLine();
                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log (Level.SEVERE, "The line entered is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log (Level.FINE, "Breaking the line into tokens we get:");
                }
                int numTokens = 0;
                StringTokenizer st = new StringTokenizer (s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log (Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                }
            }
        }
    -------- MyFileReader.java --------
        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class MyFileReader extends KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input.file");

            public static void main (String[] args) throws java.io.IOException {
                String s1;
                String s2;
                // set up the buffered reader to read from the keyboard
                BufferedReader br = new BufferedReader (new FileReader ("MyFileReader.txt"));
                s1 = br.readLine();
                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log (Level.SEVERE, "ATTN The line is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log (Level.FINE, "Breaking the line into tokens we get:");
                }
                int numTokens = 0;
                StringTokenizer st = new StringTokenizer (s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log (Level.FINEST, "Breaking the line into tokens we get:");
                        mylog.log (Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                } //end of while
            } // end of main
        } // end of class
    -------- MyFileReader.txt --------
        My first logging example
    -------- logging.properties --------
        handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler
        .level= FINEST
        java.util.logging.FileHandler.pattern = java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
        java.util.logging.ConsoleHandler.level = FINEST
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
        java.awt.focus.level=ALL
    -------- Output log --------
        May 21, 2012 11:44:55 AM MyFileReader main
        SEVERE: ATTN The line is My first logging example
        May 21, 2012 11:44:55 AM MyFileReader main
        INFO: The line has 24 characters
        May 21, 2012 11:44:55 AM MyFileReader main
        FINE: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 1 is: My
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 2 is: first
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 3 is: logging
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 4 is: example
    Invocation command:
        "C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader
    References
    Further technical details are available here:
    http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0
    http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html
    http://www2.cs.uic.edu/~sloan/CLASSES/java/

    Read the article

  • Week 24: Karate Kid Chops, The A-Team Runs, and the OPN Team Delivers

    - by sandra.haan
    The 80's called and they want their movies back. With the summer line-up of movies reminding us to wax on and wax off one can start to wonder if there is anything new to look forward to this summer. The OPN Team is happy to report that - yes - there is. As Hannibal would say "I love it when a plan comes together"! And a plan we have; for the past 2 months we've been working to pull together the FY11 Oracle PartnerNetwork Kickoff. Listen in as Judson tells you more. While we can't offer you Bradley Cooper or Jackie Chan we can promise you an exciting line-up of guests including Safra Catz and Charles Phillips. With no lines to wait in or the annoyingly tall guy sitting in front of you this might just be the best thing you see all summer. Register now & Happy New Year, The OPN Communications Team

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual.
    ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.
    There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits.
    We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:
    • Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise.)
    • The code is complex and finely tuned, and the original authors may no longer be available.
    • The code is critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.
    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.
    Design
    The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.
    Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary one.
    If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
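To make the preceding description concrete, here is a small sketch in C of how a debugger or other tool might walk a .SUNW_ancillary section and detect absent sections using libelf. It is only a sketch, not the Solaris implementation: the SHT_SUNW_ANCILLARY, SHF_SUNW_ABSENT and ANC_SUNW_* constants and the Elf32_Ancillary layout are copied from the description above (on a Solaris 11.1 system they come from sys/elf.h), only the 32-bit ancillary array is handled, and most error handling is omitted. Compile with something like cc anc.c -lelf.

    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <libelf.h>
    #include <gelf.h>

    /* Values as described above; on Solaris 11.1 these live in sys/elf.h. */
    #define SHT_SUNW_ANCILLARY  0x6fffffee
    #define SHF_SUNW_ABSENT     0x00200000
    #define ANC_SUNW_NULL       0
    #define ANC_SUNW_CHECKSUM   1
    #define ANC_SUNW_MEMBER     2

    /* Layout from the manual excerpt above (32-bit case only). */
    typedef struct {
            Elf32_Word a_tag;
            union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un;
    } my_Elf32_Ancillary;

    int
    main(int argc, char **argv)
    {
            int fd;
            Elf *elf;
            Elf_Scn *scn = NULL;
            GElf_Shdr shdr;

            if (argc != 2)
                    errx(1, "usage: %s object", argv[0]);
            if (elf_version(EV_CURRENT) == EV_NONE)
                    errx(1, "libelf out of date");
            if ((fd = open(argv[1], O_RDONLY)) < 0)
                    err(1, "%s", argv[1]);
            if ((elf = elf_begin(fd, ELF_C_READ, NULL)) == NULL)
                    errx(1, "elf_begin: %s", elf_errmsg(-1));

            while ((scn = elf_nextscn(elf, scn)) != NULL) {
                    if (gelf_getshdr(scn, &shdr) == NULL)
                            continue;

                    /* Placeholder headers: the data lives in the companion file. */
                    if (shdr.sh_flags & SHF_SUNW_ABSENT)
                            printf("section %zu: data absent from this file\n",
                                elf_ndxscn(scn));

                    if (shdr.sh_type != SHT_SUNW_ANCILLARY)
                            continue;

                    /* Walk the tag/value array; sh_link names the string table. */
                    Elf_Data *data = elf_getdata(scn, NULL);
                    if (data == NULL || data->d_buf == NULL)
                            continue;
                    my_Elf32_Ancillary *anc = data->d_buf;
                    for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                            if (anc->a_tag == ANC_SUNW_CHECKSUM)
                                    printf("checksum 0x%x\n", anc->a_un.a_val);
                            else if (anc->a_tag == ANC_SUNW_MEMBER)
                                    printf("member   %s\n", elf_strptr(elf,
                                        shdr.sh_link, anc->a_un.a_ptr));
                    }
            }
            (void) elf_end(elf);
            (void) close(fd);
            return (0);
    }

Run against either a.out or a.out.anc from the earlier example, this should print both member names and their checksums, which is essentially the information a debugger needs to locate and validate the companion file.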

    Read the article

  • Content Catalog for Oracle OpenWorld is Ready

    - by Rick Ramsey
American Major League Baseball umpire Jim Joyce made one of the worst calls in baseball history when he ruled Jason Donald safe at first in Wednesday's game between the Detroit Tigers and the Cleveland Indians. The New York Times tells the story well. It was the 9th inning. There were two outs. And Detroit Tigers pitcher Armando Galarraga had pitched a perfect game. Instead of becoming the 21st pitcher in Major League Baseball history to pitch a perfect game, Galarraga became the 10th pitcher in Major League Baseball history to ever lose a perfect game with two outs in the ninth inning. More insight from the New York Times here. You can avoid a similar mistake and its attendant death threats, hate mail, and self-loathing by studying the Content Catalog just released for the Oracle OpenWorld, JavaOne, and Oracle Develop conferences being held in San Francisco September 19-23. The Content Catalog displays all the available content related to the event, the venue, and the stream or track you're interested in. Additional filters are available to narrow down your results even more. It's simple to use and a big help. Give it a try. It'll spare you the fate of Jim Joyce. - Rick

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different from Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 only runs on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone clusters in the Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global cluster. Check if the zone cluster can be created # cluster show-netprops To change the number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default, values can be customized! Create a config file (zc1config) for the zone cluster setup, e.g. as in the sketch at the end of this article. Configure the zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file, the configuration can also be done manually # clzc configure zc1 Check the zone configuration # clzc export zc1 Verify the zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceeds! (The output is different on the nodes) It's very important that all global cluster nodes have installed the same set of ha-cluster packages! Boot the zone cluster # clzc boot zc1 Log in to the non-global zones of zone cluster zc1 on all nodes and finish the Solaris installation. # zlogin -C zc1 Check the status of the zone cluster # clzc status zc1 Log in to the non-global zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of the non-global zones B. Add resource groups and resources to the zone cluster Create a resource group in the zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for a zone cluster resource group overview. Note2: You can also run all commands for the zone cluster in the global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for the zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that the logical host is in the /etc/hosts file In the zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up the storage resource for the zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure the zpool, e.g.: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check the setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure the SVM diskset and SVM devices. 
and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure global filesystem and it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: (Create scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details of adding storage available in the Installation Guide for zone cluster Switch resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch of the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservice to zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If clzc command is not successful to uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1
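For reference, the zc1config file mentioned in step A above is not reproduced in this post. The following is only a rough sketch of what such a command file, passed to clzc configure -f, might contain; the zone path, physical hosts, hostnames and addresses are placeholders, the add node block follows the Solaris Cluster 4.0 documentation rather than anything shown in this article, and the exact resource syntax should be verified against clzc(1CL) and the SC4.0 Software Installation Guide.
    create
    set zonepath=/zones/zc1
    set autoboot=true
    add node
    set physical-host=phys-host-1
    set hostname=zc1-host-1
    add net
    set address=192.168.10.11
    set physical=net0
    end
    end
    add node
    set physical-host=phys-host-2
    set hostname=zc1-host-2
    add net
    set address=192.168.10.12
    set physical=net0
    end
    end
    commit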

    Read the article

  • Querying Networking Statistics: dlstat(1M)

    - by user12612042
Oracle Solaris 11 took another big leap forward in networking technologies, providing a reliable, secure and scalable infrastructure to meet the growing needs of today's datacenter implementations. Oracle Solaris 11 introduced a new and powerful network stack architecture, also known as Project Crossbow. From Solaris 11 onwards, we introduced a command line tool, dlstat(1M), to query network statistics. dlstat (for datalink statistics) is a statistics-querying counterpart for dladm(1M) - the datalink administration tool. The tool is very easy to get started with. Just type dlstat at a shell prompt on Solaris 11 (or later). For example: # dlstat LINK IPKTS RBYTES OPKTS OBYTES net0 834.11K 145.91M 575.19K 104.24M net1 7.87K 2.04M 0 0 In this example, the system has two datalinks, net0 and net1. The output columns denote input packets/bytes as well as output packets/bytes. The numbers are abbreviated in xxx.xxUnit format. However, one could get the actual counts by simply running dlstat -u R (R for raw): # dlstat -u R LINK IPKTS RBYTES OPKTS OBYTES net0 834271 145931244 575246 104242934 net1 7869 2036958 0 0 In addition, dlstat also supports various subcommands: dlstat help The following subcommands are supported: Stats : show-aggr show-ether show-link show-phys show-bridge For more info, run: dlstat help {default|} I will only describe a couple of interesting subcommands/options here. For a comprehensive description of all the dlstat subcommands, refer to dlstat's official manual. For NICs that support multiple rings (e.g. ixgbe), dlstat show-phys -r allows us to query per-Rx-ring statistics. For example: dlstat show-phys -r net4 LINK TYPE INDEX IPKTS RBYTES net4 rx 0 0 0 net4 rx 1 0 0 net4 rx 2 0 0 net4 rx 3 0 0 net4 rx 4 0 0 net4 rx 5 0 0 net4 rx 6 0 0 net4 rx 7 0 0 In this case, net4 is just a vanity name for an ixgbe datalink. This view is especially useful if one wants to look at the network traffic spread across all the available rings. Furthermore, any of the dlstat commands could be run with the -i option to periodically query and display stats. For example, running dlstat show-phys -r net4 -i 5 will emit per-Rx-ring stats every 5 seconds. This is especially useful while analyzing a live system. Similarly, dlstat show-phys -t could be used to query per-Tx-ring stats. -r and -t could also be combined as dlstat show-phys -rt to query both Rx as well as Tx stats at the same time. Finally, there is also a quick way to dump ALL the stats. Just run dlstat -A. You probably want to redirect this output to a file because you are going to get a whole load of stats :-).
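dlstat is the administrative interface to these counters, but the same per-link statistics can also be read programmatically through the kstat facility. The following is a hedged C sketch using libkstat: the "link" kstat module and the ipackets64/rbytes64/opackets64/obytes64 statistic names are assumptions based on what GLDv3 datalinks typically export, so verify them with kstat(1M) on your system before relying on this. Compile with something like cc linkstat.c -lkstat.

    #include <stdio.h>
    #include <stdlib.h>
    #include <kstat.h>

    int
    main(int argc, char **argv)
    {
            kstat_ctl_t *kc;
            kstat_t *ksp;
            kstat_named_t *kn;
            const char *link = (argc > 1) ? argv[1] : "net0";
            /* Statistic names assumed from typical GLDv3 link kstats. */
            const char *stats[] = { "ipackets64", "rbytes64",
                "opackets64", "obytes64", NULL };

            if ((kc = kstat_open()) == NULL) {
                    perror("kstat_open");
                    return (1);
            }
            /* Per-datalink counters are assumed to live under the "link" module. */
            if ((ksp = kstat_lookup(kc, "link", 0, (char *)link)) == NULL ||
                kstat_read(kc, ksp, NULL) == -1) {
                    fprintf(stderr, "no link kstats found for %s\n", link);
                    (void) kstat_close(kc);
                    return (1);
            }
            for (int i = 0; stats[i] != NULL; i++) {
                    /* Counters are assumed to be 64-bit named kstats. */
                    if ((kn = kstat_data_lookup(ksp, (char *)stats[i])) != NULL)
                            printf("%-12s %llu\n", stats[i],
                                (unsigned long long)kn->value.ui64);
            }
            (void) kstat_close(kc);
            return (0);
    }

Running it with a link name (for example ./linkstat net0) should print the raw packet and byte counters, matching what dlstat -u R reports for that link.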

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction How to use the Oracle Solaris 11 Automated install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to setup the Automated Install server in order to provide hands off installation process for the Global Zone and two Non Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Setup the Automated install server (AI) using the following instructions “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files  The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1# cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information You should change the zonepath for example: set zonepath=/rpool/zones/zone2 Step2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration file # share –o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as client to the Install Service Use the installadm create-client command to associate client (Global Zone) with the install service To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e “0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list –c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19 and Architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup Associate the global zone client with the manifest and the profile that we create in the previous steps The following example adds the manifest and profile to the client (global zone), where: gzmanifest  is the name of the manifest. gz_profile  is the name of the configuration profile. mac="0:14:4f:2:a:19" is the client (global zone) mac address s11x86service is the install service name. # installadm set-criteria -m  gzmanifest  –p  gz_profile  -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command # installadm list -n s11x86service -p  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted.  Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu What follows is the continuation of a networked boot from the Automated Install server,. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list all the Solaris Zones status using the following command # zoneadm list -civ Once the Zones are in running state you can login into the Zone using the following command # zlogin –z zone1 Troubleshooting Automated Installations If an installation to a client system failed, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed     Required datasets are not configured in the global zone. For more troubleshooting information see “Installing Oracle Solaris 11 Systems” Conclusion This paper demonstrated the benefits of using the Automated Install server to simplify the Non Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Observations in Migrating from JavaFX Script to JavaFX 2.0 Introduction Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm. The Application The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) A user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code. The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, where certain capabilities of the JavaFX framework were as of yet unavailable. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced. In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capabilities were in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that having the opportunity to recreate this application anew, it would look a lot different from the current version. Comparing the Size of the Source Code An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files. JavaFX Script source file Number of Lines Number of Character JavaFX 2.0 Java source file Number of Lines Number of Characters ArrowKey.java 6 72 Board.fx 221 6831 Board.java 205 6508 BoardNode.fx 446 16054 BoardNode.java 723 29356 ChooseNumberNode.fx 168 5267 ChooseNumberNode.java 302 10235 CloseButtonNode.fx 115 3408 CloseButton.java 99 2883 ParentWithKeyTraversal.java 111 3276 FunctionPtr.java 6 80 Globals.java 20 554 Grouping.fx 8 140 HowToPlayNode.fx 121 3632 HowToPlayNode.java 136 4849 IconButtonNode.fx 196 5748 IconButtonNode.java 183 5865 Main.fx 98 3466 Main.java 64 2118 SliderNode.fx 288 10349 SliderNode.java 350 13048 Space.fx 78 1696 Space.java 106 2095 SpaceNode.fx 227 6703 SpaceNode.java 220 6861 TraversalHelper.fx 111 3095 Total 2,077 79,127 2531 87,800 A few notes about this table are in order: The number of lines in each file was determined by running the Unix ‘wc –l’ command over each file. The number of characters in each file was determined by running the Unix ‘ls –l’ command over each file. 
The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files. All comments, however, were deleted. There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters. In fact the preceding data shows that on average the Java source files contain fewer characters per line than the FX files. But that's not the whole story. Upon further examination, the FX Script source files had a disproportionate number of blank characters. Why? Because of the nature of how one develops JavaFX Script code. The object literal dominates FX Script code. It's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters. RAM consumption Though not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution. What About Binding? Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is via the bind expression (using the “bind” keyword), and perhaps to a lesser extent its “on replace” cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so “on replace” triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code. JavaFX Script source file Number of Binds JavaFX 2.0 Java source file Number of “Binds” ArrowKey.java 0 Board.fx 1 Board.java 0 BoardNode.fx 7 BoardNode.java 0 ChooseNumberNode.fx 11 ChooseNumberNode.java 0 CloseButtonNode.fx 6 CloseButton.java 0 CustomNodeWithKeyTraversal.java 0 FunctionPtr.java 0 Globals.java 0 Grouping.fx 0 HowToPlayNode.fx 7 HowToPlayNode.java 0 IconButtonNode.fx 9 IconButtonNode.java 0 Main.fx 1 Main.java 0 Main_Mobile.fx 1 SliderNode.fx 6 SliderNode.java 1 Space.fx 0 Space.java 0 SpaceNode.fx 9 SpaceNode.java 1 TraversalHelper.fx 0 Total 58 2 Conclusions As the JavaFX 2.0 technology is so new, and experience with the platform is just as limited, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications. 
That being said, this first experience indicates that the migrated Java code will likely be larger, though not extensively so, than the original JavaFX Script source. Furthermore, and importantly, it appears that the requirements for data synchronization via binding may be significantly less with the new platform.

    Read the article

  • Mark your calendar : Oracle Week, Nov 18-22, Herzliya

    - by Frederic Pariente
The local ISV Engineering team will be participating in Oracle Week Israel on Nov 18-22; come meet us there! MARK YOUR CALENDAR Oracle Week Israel Date : November 18-22, 2012 Time : 09:00-16:30 Location : Daniel Hotel, Herzliya, Israel Tracks : Database, Middleware, Development Infrastructure, Business Applications, Big Data Management, SOA & BPM, BI, Java, IT, Cloud Here is a sample list of the Solaris 11 sessions to date; make sure to register for these. Number Name Date Track 12224 Optimizing Enterprise Applications with Oracle Solaris 11 19/11/2012 Infrastructure 12327 Oracle Solaris 11: Engineered Cloud Security with Wire-Speed Encryption and Delegated Admin 20/11/2012 Infrastructure, Cloud 12425 Simplified Lifecycle Management in Oracle Solaris 11 with AI, IPS and Ops Center 21/11/2012 Infrastructure 12528 Oracle Solaris 11 Administration: Zone, Resource Management and System Security 22/11/2012 Infrastructure 12127 Built for Cloud: Virtualization Use Cases and Technologies in Oracle Solaris 11 18/11/2012 Infrastructure, Cloud See you there!

    Read the article

  • Oye! Help Build OTN America Latina!

    - by rickramsey
    Yes, tango is passion, but it is passion born of romance. Not passion born of lust. As it is so often portrayed today. Understand that, and you will begin to understand why life in Latin America is so rich. image courtesy of Continental Magazine. You don't often get a chance to shape the direction of a technical comunidad. Somebody else gets there first and pretty soon everyone is in a rathole about the relevance of rutabagas. Or rutabagels as my public-school-educated hijas prefer to call them. Well, OTN American Latina is just starting up. If you're a techie who speaks Spanish or Portuguese, or if you just like hanging out with techies latinoamericanos (and who doesn't?), here's how to get in on the fun: Why Portuguese Speaking Techies Should Join Why Spanish Speaking Techies Should Join And here are the sites themselves: OTN America Latina in Brazilian Portuguese OTN America Latina in Spanish If you're not sure which site to visit, just remember that Brazilian Portuguese is Spanish spoken with a little body English. Ricardo System Admin and Developer Community of the Oracle Technology Network

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >