Search Results

Search found 8770 results on 351 pages for 'sun zfs storage 7000 appl'.

Page 82/351 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • MXMLC Ant task results in java.lang.OutOfMemoryError

    - by Mims H. Wright
    I'm making a change to a Flex project that I didn't write; it was set up to compile using Ant tasks. I assume the codebase was stable at the last check-in, but I'm running into memory issues when trying to build the project using MXMLC and Ant (see the stack trace below). Before, I was just getting an out-of-memory error. I tried using a different machine and got this more verbose exception (including problems with the image fetcher). I've tried various versions of the SDK, and I've tried replacing the <mxmlc> tag with <exec executable="mxmlc">, with no luck. Here is my Java version in case that has anything to do with it:

    » java -version
    java version "1.6.0_20"
    Java(TM) SE Runtime Environment (build 1.6.0_20-b02-279-10M3065)
    Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01-279, mixed mode)

    Any help would be appreciated. Thanks!

    Buildfile: build.xml
    compileSWF:
    [echo] Compiling main.swf...
    [mxmlc] Loading configuration file /Applications/Adobe Flash Builder 4 Plug-in/sdks/4.0.0beta2/frameworks/flex-config.xml
    [mxmlc] Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space
    [mxmlc] at java.awt.image.PixelGrabber.setDimensions(PixelGrabber.java:360)
    [mxmlc] at sun.awt.image.ImageDecoder.setDimensions(ImageDecoder.java:62)
    [mxmlc] at sun.awt.image.JPEGImageDecoder.sendHeaderInfo(JPEGImageDecoder.java:71)
    [mxmlc] at sun.awt.image.JPEGImageDecoder.readImage(Native Method)
    [mxmlc] at sun.awt.image.JPEGImageDecoder.produceImage(JPEGImageDecoder.java:119)
    [mxmlc] at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246)
    [mxmlc] at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172)
    [mxmlc] at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136)
    [mxmlc] /src/com/amtrak/components/map/MapAsset.mxml: Error: exception during transcoding: Failed to grab pixels for image /src/assets/embed_assets/images/zoomed_map_wide.jpg
    [mxmlc]
    [mxmlc] /src/com/amtrak/components/map/MapAsset.mxml: Error: Unable to transcode /assets/embed_assets/images/zoomed_map_wide.jpg.
    [mxmlc]
    [mxmlc] Error: Java heap space
    [mxmlc]
    [mxmlc] java.lang.OutOfMemoryError: Java heap space
    [mxmlc] at java.util.ArrayList.<init>(ArrayList.java:112)
    [mxmlc] at macromedia.asc.util.ObjectList.<init>(ObjectList.java:30)
    [mxmlc] at macromedia.asc.parser.ArgumentListNode.<init>(ArgumentListNode.java:30)
    [mxmlc] at macromedia.asc.parser.NodeFactory.argumentList(NodeFactory.java:116)
    [mxmlc] at macromedia.asc.parser.NodeFactory.argumentList(NodeFactory.java:97)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBinding(ImplementationGenerator.java:563)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBindingsSetupFunction(ImplementationGenerator.java:864)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBindingsSetup(ImplementationGenerator.java:813)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateInitializerSupportDefs(ImplementationGenerator.java:1813)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateClassDefinition(ImplementationGenerator.java:1005)
    [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.<init>(ImplementationGenerator.java:201)
    [mxmlc] at flex2.compiler.mxml.ImplementationCompiler.generateImplementationAST(ImplementationCompiler.java:498)
    [mxmlc] at flex2.compiler.mxml.ImplementationCompiler.parse1(ImplementationCompiler.java:196)
    [mxmlc] at flex2.compiler.mxml.MxmlCompiler.parse1(MxmlCompiler.java:168)
    [mxmlc] at flex2.compiler.CompilerAPI.parse1(CompilerAPI.java:2851)
    [mxmlc] at flex2.compiler.CompilerAPI.parse1(CompilerAPI.java:2804)
    [mxmlc] at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:446)
    [mxmlc] at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274)
    [mxmlc] at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1488)
    [mxmlc] at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1375)
    [mxmlc] at flex2.tools.Mxmlc.mxmlc(Mxmlc.java:282)
    [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [mxmlc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [mxmlc] at java.lang.reflect.Method.invoke(Method.java:597)
    [mxmlc] at flex.ant.FlexTask.executeInProcess(FlexTask.java:280)
    [mxmlc] at flex.ant.FlexTask.execute(FlexTask.java:225)
    [mxmlc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
    [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [mxmlc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [mxmlc] at java.lang.reflect.Method.invoke(Method.java:597)

    BUILD FAILED
    /src/build.xml:49: mxmlc task failed
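    A fix often suggested for this failure mode is to run the compiler in its own JVM with a larger heap instead of inside Ant's JVM (the trace shows FlexTask.executeInProcess, i.e. in-process compilation). A minimal sketch; the fork/maxmemory attributes are taken from the Flex Ant task documentation, so double-check that your SDK version supports them, and the file paths are placeholders:

        <mxmlc file="src/main.mxml" output="bin/main.swf"
               fork="true" maxmemory="1024m">
            <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
        </mxmlc>

    If forking isn't available in your task version, the in-process equivalent is to enlarge Ant's own heap before the build, e.g. export ANT_OPTS=-Xmx1024m.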

    Read the article

  • OWSM custom security policy for JAX-WS, GenericFault

    - by sachin
    Hi, I tried creating a custom security assertion and policy as described here: http://download.oracle.com/docs/cd/E15523_01/relnotes.1111/e10132/owsm.htm#CIADFGGC When I run the service client, the custom assertion is executed and returns successfully.

    public IResult execute(IContext context) throws WSMException {
        try {
            System.out.println("public execute");
            IAssertionBindings bindings = ((SimpleAssertion) (this.assertion)).getBindings();
            IConfig config = bindings.getConfigs().get(0);
            IPropertySet propertyset = config.getPropertySets().get(0);
            String valid_ips = propertyset.getPropertyByName("valid_ips").getValue();
            String ipAddr = ((IMessageContext) context).getRemoteAddr();
            IResult result = new Result();
            System.out.println("valid_ips " + valid_ips);
            if (valid_ips != null && valid_ips.trim().length() > 0) {
                String[] valid_ips_array = valid_ips.split(",");
                boolean isPresent = false;
                for (String valid_ip : valid_ips_array) {
                    if (ipAddr.equals(valid_ip.trim())) {
                        isPresent = true;
                    }
                }
                System.out.println("isPresent " + isPresent);
                if (isPresent) {
                    result.setStatus(IResult.SUCCEEDED);
                } else {
                    result.setStatus(IResult.FAILED);
                    result.setFault(new WSMException(WSMException.FAULT_FAILED_CHECK));
                }
            } else {
                result.setStatus(IResult.SUCCEEDED);
            }
            System.out.println("result " + result);
            System.out.println("public execute complete");
            return result;
        } catch (Exception e) {
            System.out.println("Exception e");
            e.printStackTrace();
            throw new WSMException(WSMException.FAULT_FAILED_CHECK, e);
        }
    }

    Console output is:

    public execute
    valid_ips 127.0.0.1,192.168.1.1
    isPresent true
    result Succeeded
    public execute complete

    but the web service throws a GenericFault:

    Arguments: [void]
    Fault: GenericFault : generic error

    I have no clue what could be wrong; any ideas? Here is the full stack trace:

    Exception in thread "main" javax.xml.ws.soap.SOAPFaultException: GenericFault : generic error
    at com.sun.xml.internal.ws.fault.SOAP12Fault.getProtocolException(SOAP12Fault.java:210)
    at com.sun.xml.internal.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:119)
    at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:108)
    at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
    at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:107)
    at $Proxy30.sayHello(Unknown Source)
    at creditproxy.CreditRatingSoap12HttpPortClient.main(CreditRatingSoap12HttpPortClient.java:21)
    Caused by: javax.xml.ws.soap.SOAPFaultException: GenericFault : generic error
    at weblogic.wsee.jaxws.framework.jaxrpc.TubeFactory$JAXRPCTube.processRequest(TubeFactory.java:203)
    at weblogic.wsee.jaxws.tubeline.FlowControlTube.processRequest(FlowControlTube.java:99)
    at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:604)
    at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:563)
    at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:548)
    at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:445)
    at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:275)
    at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:454)
    at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:250)
    at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:140)
    at weblogic.wsee.jaxws.HttpServletAdapter$AuthorizedInvoke.run(HttpServletAdapter.java:319)
    at weblogic.wsee.jaxws.HttpServletAdapter.post(HttpServletAdapter.java:232)
    at weblogic.wsee.jaxws.JAXWSServlet.doPost(JAXWSServlet.java:310)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at weblogic.wsee.jaxws.JAXWSServlet.service(JAXWSServlet.java:87)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:326)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Process exited with exit code 1.
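    To see exactly what the server is putting in the fault, the standard JAX-WS API can dump the full fault detail on the client side. A minimal sketch, assuming the sayHello proxy from the trace above (nothing OWSM-specific):

        try {
            port.sayHello();
        } catch (javax.xml.ws.soap.SOAPFaultException e) {
            javax.xml.soap.SOAPFault fault = e.getFault();
            System.out.println("Fault code:   " + fault.getFaultCodeAsQName());
            System.out.println("Fault string: " + fault.getFaultString());
            // The detail element often carries the server-side cause
            if (fault.getDetail() != null) {
                java.util.Iterator<?> it = fault.getDetail().getDetailEntries();
                while (it.hasNext()) {
                    System.out.println("Detail: " + it.next());
                }
            }
        }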

    Read the article

  • Java & Tomcat: SQL JDBC/JNDI Exceptions

    - by user267581
    I am deploying a webapp from Eclipse to Tomcat. I am having an issue with my application and JNDI lookups. When the app tries to load the JNDI resource, I get this stack trace:

    javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
        java.net.ConnectException: Connection refused: connect]
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
    at javax.naming.InitialContext.lookup(InitialContext.java:396)
    at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:134)
    at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:144)
    at javax.naming.InitialContext.lookup(InitialContext.java:392)
    at org.objectweb.carol.jndi.spi.MultiContext.lookup(MultiContext.java:118)
    at javax.naming.InitialContext.lookup(InitialContext.java:392)
    at com.theriabook.daoflex.JDBCConnection.getDataSource(JDBCConnection.java:61)
    at com.theriabook.daoflex.JDBCConnection.getConnection(JDBCConnection.java:73)
    at com.aramark.data.UsersDAO.doLogin(UsersDAO.java:751)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at flex.messaging.services.remoting.adapters.JavaAdapter.invoke(JavaAdapter.java:406)
    at flex.messaging.services.RemotingService.serviceMessage(RemotingService.java:183)
    at flex.messaging.MessageBroker.routeMessageToService(MessageBroker.java:1417)
    at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:878)
    at com.farata.remoting.CustomAMFEndpoint.serviceMessage(CustomAMFEndpoint.java:23)
    at flex.messaging.endpoints.amf.MessageBrokerFilter.invoke(MessageBrokerFilter.java:121)
    at flex.messaging.endpoints.amf.LegacyFilter.invoke(LegacyFilter.java:158)
    at flex.messaging.endpoints.amf.SessionFilter.invoke(SessionFilter.java:49)
    at flex.messaging.endpoints.amf.BatchProcessFilter.invoke(BatchProcessFilter.java:67)
    at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:146)
    at flex.messaging.endpoints.BaseHTTPEndpoint.service(BaseHTTPEndpoint.java:274)
    at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:377)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at com.cti.compiler.env.web.CompilerInvocationInterceptor.doFilter(CompilerInvocationInterceptor.java:25)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
        java.net.ConnectException: Connection refused: connect
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
    at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
    at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
    at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
    at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
    ... 41 more
    Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:525)
    at java.net.Socket.connect(Socket.java:475)
    at java.net.Socket.<init>(Socket.java:372)
    at java.net.Socket.<init>(Socket.java:186)
    at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
    at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)

    I am really stumped by this error. I am running Windows XP in a virtual machine (VMware) on a Mac. I have uninstalled and reinstalled Tomcat multiple times. Any ideas?
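    The trace shows the lookup leaving the JVM: the carol JNDI provider used by daoflex is calling out to an RMI registry on localhost, and nothing is listening there, hence the Connection refused. If the datasource is meant to come from Tomcat itself, a container-managed DataSource needs no RMI at all. A minimal sketch (resource name, driver class and URL are placeholders to adapt), looked up in code as java:comp/env/jdbc/MyDS:

        <!-- META-INF/context.xml in the webapp -->
        <Context>
            <Resource name="jdbc/MyDS"
                      auth="Container"
                      type="javax.sql.DataSource"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/mydb"
                      username="dbuser"
                      password="dbpass"
                      maxActive="20"
                      maxIdle="5"/>
        </Context>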

    Read the article

  • SunTlsRsaPremasterSecret KeyGenerator not available

    - by Jill
    Hi, I encountered an error when my application tries to load an RSA algorithm provider class from Java. The exception stack is as follows:

    javax.jms.JMSException: RSA premaster secret error
    at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
    at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1255)
    at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1350)
    at org.apache.activemq.ActiveMQConnection.setClientID(ActiveMQConnection.java:388)
    at com.trendmicro.tmsm.TMSMAgent.open(TMSMAgent.java:63)
    Caused by: javax.net.ssl.SSLKeyException: RSA premaster secret error
    at com.sun.net.ssl.internal.ssl.RSAClientKeyExchange.<init>(RSAClientKeyExchange.java:97)
    at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverHelloDone(ClientHandshaker.java:634)
    at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:226)
    at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:516)
    at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:454)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:884)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1112)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:623)
    at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
    at org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(TcpBufferedOutputStream.java:115)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.activemq.transport.tcp.TcpTransport.oneway(TcpTransport.java:167)
    at org.apache.activemq.transport.InactivityMonitor.oneway(InactivityMonitor.java:237)
    at org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:168)
    at org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:84)
    at org.apache.activemq.transport.WireFormatNegotiator.start(WireFormatNegotiator.java:74)
    at org.apache.activemq.transport.failover.FailoverTransport.doReconnect(FailoverTransport.java:715)
    at org.apache.activemq.transport.failover.FailoverTransport$2.iterate(FailoverTransport.java:115)
    at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
    at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:637)
    Caused by: java.security.NoSuchAlgorithmException: SunTlsRsaPremasterSecret KeyGenerator not available
    at javax.crypto.KeyGenerator.<init>(DashoA13*..)
    at javax.crypto.KeyGenerator.getInstance(DashoA13*..)
    at com.sun.net.ssl.internal.ssl.JsseJce.getKeyGenerator(JsseJce.java:223)
    at com.sun.net.ssl.internal.ssl.RSAClientKeyExchange.<init>(RSAClientKeyExchange.java:89)
    ... 22 more

    I've googled the error message, and most posts say it's because the JVM cannot find sunjce_provider.jar. However, I can find the file in the /Library/Java/Home/lib/ext folder. The platform is Mac OS X 10.6 and the Java version is 1.6.0_17. My questions are:
    1. Why does the JVM not search /Library/Java/Home/lib/ext for JAR files?
    2. Can we change the CLASSPATH or the java.ext.dirs property by modifying a config file?
    3. Any suggestions to solve this problem?
    Thanks in advance.
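    To see whether the runtime that launches the app actually registers SunJCE (the provider that supplies the SunTlsRsaPremasterSecret KeyGenerator), a small diagnostic sketch using only standard java.security APIs:

        import java.security.Provider;
        import java.security.Security;

        public class ListProviders {
            public static void main(String[] args) {
                // SunJCE must appear here for the TLS key generators to resolve
                for (Provider p : Security.getProviders()) {
                    System.out.println(p.getName() + " " + p.getVersion());
                }
                // The extension directories this JVM actually scans
                System.out.println("java.ext.dirs = " + System.getProperty("java.ext.dirs"));
            }
        }

    As for changing the extension path, java.ext.dirs can be overridden per launch with -Djava.ext.dirs=... on the command line.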

    Read the article

  • App dies on startup but no crash report

    - by brettr
    I've given an ad hoc version of my app to some users. Two of them have the app die on startup, while one user has no issues. I can also install the ad hoc build without issue... but that is always the case for me. One user sent the info below from the Xcode Organizer Console; they didn't find any crash logs. I don't know what to make of it. The one thing that stands out is "Permission denied". I place the provisioning profile and myapp.app files in a Dropbox folder, and the user retrieves the files from the same location. I've run codesign against the .app file in the Dropbox folder and get valid output:

    codesign -vvvv myapp.app
    myapp.app: valid on disk
    myapp.app: satisfies its Designated Requirement

    Does anyone have some ideas how I can figure out why the app doesn't work for this user? Here is the Console output from one user (they couldn't find any associated crash logs):

    Stats totalMLSITDBPostProcessing=5.31s commands=0.01 misc=0.45s icuSort=4.41s (MLS_icu_data=0.23s, MLS_icu_sec_data=0.13, dropIdx=0.04, normalize=0.13, update_orders=1.31, tStatsICUOther1=0.02, createIndex=2.50)
    Sun Dec 13 12:35:04 unknown com.apple.launchd[1] <Error>: (UIKitApplication:com.cygen.myapp[0x8cb6]) posix_spawn("/var/mobile/Applications/4B036396-3294-4E0A-BBCC-4118E72846D4/myapp.app/myapp", ...): Permission denied
    Sun Dec 13 12:35:04 unknown com.apple.launchd[1] <Warning>: (UIKitApplication:com.cygen.myapp[0x8cb6]) Exited with exit code: 1
    Sun Dec 13 12:35:04 unknown SpringBoard[24] <Warning>: Failed to spawn myapp. Unable to obtain a task name port right for pid 179: (os/kern) failure
    Sun Dec 13 12:35:04 unknown com.apple.launchd[1] <Warning>: (UIKitApplication:com.cygen.myapp[0x8cb6]) Throttling respawn: Will start in 2147483647 seconds
    Sun Dec 13 12:35:04 unknown SpringBoard[24] <Warning>: Application 'myapp' exited abnormally with exit status 1
    Sun Dec 13 12:35:10 unknown springboardservicesrelay[155] <Warning>: Unable to parse property list data of length: 0
    Sun Dec 13 12:35:13 unknown com.apple.launchd[1] <Error>: (UIKitApplication:com.cygen.myapp[0x3ce5]) posix_spawn("/var/mobile/Applications/4B036396-3294-4E0A-BBCC-4118E72846D4/myapp.app/myapp", ...): Permission denied
    Sun Dec 13 12:35:13 unknown com.apple.launchd[1] <Warning>: (UIKitApplication:com.cygen.myapp[0x3ce5]) Exited with exit code: 1
    Sun Dec 13 12:35:13 unknown SpringBoard[24] <Warning>: Failed to spawn myapp. Unable to obtain a task name port right for pid 182: (os/kern) failure
    Sun Dec 13 12:35:13 unknown com.apple.launchd[1] <Warning>: (UIKitApplication:com.cygen.myapp[0x3ce5]) Throttling respawn: Will start in 2147483647 seconds
    Sun Dec 13 12:35:13 unknown SpringBoard[24] <Warning>: Application 'myapp' exited abnormally with exit status 1
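    A posix_spawn "Permission denied" usually means the main binary lost its executable bit, which commonly happens when an .app travels through a transport that does not preserve Unix permissions; zipping the .app before putting it in Dropbox avoids that. Two commands that can confirm what the affected user actually received (paths assumed for illustration):

        # Check the executable bit on the binary inside the bundle
        ls -l myapp.app/myapp

        # Inspect the provisioning profile actually embedded in the app
        security cms -D -i myapp.app/embedded.mobileprovision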

    Read the article

  • Why do I get java.net.SocketException: Connection reset

    - by Jammy
    I need to send some requests to the server side and get a response. Sometimes, when I call a specific method that runs the following common code, I get an error at the line addToCookieJar(connection). Any idea how this happens?

    URL url = new URL(providerURL);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("POST");
    connection.setDoInput(true);
    connection.setDoOutput(true);
    connection.setUseCaches(false);
    connection.setRequestProperty("Content-Type", "application/octet-stream");

    // We understand gzip encoding
    connection.addRequestProperty("Accept-Encoding", "gzip");

    if (cookie != null && cookieHandler != null) {
        connection.setRequestProperty("Cookie", cookie);
    }
    if (cookieHandler == null) {
        addFromCookieJar(connection);
    }

    // Send the request
    ObjectOutputStream oos = new ObjectOutputStream(connection.getOutputStream());
    oos.writeObject(remote.getName());
    oos.writeObject(m.getName());           // method name
    oos.writeObject(m.getParameterTypes()); // formal parameters
    oos.writeObject(args);                  // actual parameters
    oos.flush();
    oos.close();

    if (cookieHandler == null) {
        cookieJar.put(new URI(providerURL), connection.getHeaderFields());
    }

    Exception:

    java.lang.reflect.UndeclaredThrowableException
    at $Proxy0.updateDocument(Unknown Source)
    at com.agst.ui.gantt.GanttPanel.doUpdateDocument(GanttPanel.java:1931)
    at com.agst.ui.gantt.GanttPanel.save(GanttPanel.java:1419)
    at com.agst.ui.gantt.GanttPanel$4.run(GanttPanel.java:1673)
    at java.lang.Thread.run(Unknown Source)
    Caused by: java.net.SocketException: Connection reset
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:196)
    at com.agst.rmi.RemoteCallHandler.invoke(RemoteCallHandler.java:142)
    ... 5 more
    Caused by: java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(Unknown Source)
    at java.io.BufferedInputStream.fill(Unknown Source)
    at java.io.BufferedInputStream.read1(Unknown Source)
    at java.io.BufferedInputStream.read(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getHeaderFields(Unknown Source)
    at com.agst.rmi.RemoteCallHandler.addToCookieJar(RemoteCallHandler.java:529)
    at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:192)
    ... 6 more
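    The second trace shows the reset surfacing while the response headers are being parsed (parseHTTPHeader, reached via getHeaderFields), i.e. while reading the server's reply rather than writing the request, so the server or something in between is dropping the connection. Separately, since the request advertises Accept-Encoding: gzip, whatever code reads the reply needs the matching decoding. A minimal sketch of the read side (response handling only, names as in the code above):

        // Read the reply, decompressing if the server honored Accept-Encoding: gzip
        InputStream in = connection.getInputStream();
        if ("gzip".equalsIgnoreCase(connection.getContentEncoding())) {
            in = new java.util.zip.GZIPInputStream(in);
        }
        ObjectInputStream ois = new ObjectInputStream(in);
        Object returnValue = ois.readObject();
        ois.close();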

    Read the article

  • Can't load IA 32-bit .dll on an AMD 64-bit platform

    - by user101425
    I have a Windows 2003 64-bit terminal server from which we run a Java application. The application has always worked, until 2 days ago. No new updates have been installed on the server in that time frame. I have tried reinstalling the 64-bit Java, but I still get the following error:

    Unexpected exception: java.lang.reflect.InvocationTargetException
    java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at com.sun.javaws.Launcher.executeApplication(Unknown Source)
    at com.sun.javaws.Launcher.executeMainClass(Unknown Source)
    at com.sun.javaws.Launcher.doLaunchApp(Unknown Source)
    at com.sun.javaws.Launcher.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.UnsatisfiedLinkError: C:\Documents and Settings\administrator\Application Data\Sun\Java\Deployment\cache\6.0\19\625835d3-5826d302-n\swt-win32-3116.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(Unknown Source)
    at java.lang.ClassLoader.loadLibrary(Unknown Source)
    at java.lang.Runtime.loadLibrary0(Unknown Source)
    at java.lang.System.loadLibrary(Unknown Source)
    at org.eclipse.swt.internal.Library.loadLibrary(Library.java:100)
    at org.eclipse.swt.internal.win32.OS.<clinit>(OS.java:18)
    at org.eclipse.swt.graphics.Device.init(Device.java:563)
    at org.eclipse.swt.widgets.Display.init(Display.java:1784)
    at org.eclipse.swt.graphics.Device.<init>(Device.java:99)
    at org.eclipse.swt.widgets.Display.<init>(Display.java:363)
    at org.eclipse.swt.widgets.Display.<init>(Display.java:359)
    at com.ko.StartKO.main(StartKO.java:57)
    ... 9 more
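    The UnsatisfiedLinkError means a 64-bit JVM is trying to load the 32-bit swt-win32 DLL from the Web Start cache, so the first thing to pin down is which JVM actually launches the app. A tiny sketch; note that sun.arch.data.model is a Sun-JVM-specific property:

        // Quick check of the bitness of the JVM that runs this code
        public class ArchCheck {
            public static void main(String[] args) {
                System.out.println("os.arch    = " + System.getProperty("os.arch"));
                System.out.println("data model = " + System.getProperty("sun.arch.data.model") + "-bit");
            }
        }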

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of ever-growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge passes to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications? Can this fit my business requirements? Well, yes. How you manage it is up to you; you just need a good plan. During your "storage brainstorm", keep in mind what you need: do you have to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date. Your plan can also have complex rules, such as per document type, per custom metadata field (department, for example), or a hybrid: per date, per DocType and a specific virtual folder. Extrapolating, you can have your repository distributed across different servers, different disks and different disk types (SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy and easily matching your compliance requirements. If you plan to partition by revision, the simple way is to consider the dId, the sequential unique id of every content item created with WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles.

    For the scenario of a partitioned repository with hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you can use the dId, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:
    - Previously existing data in the JDBC Storage, migrated to the new partitioned JDBC Storage
    - Partition by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (could present some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointing to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to say that this is just a test, and you should take a proper backup of your environment first. When you use the schema owner, you need some privileges; grant them with the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;
    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule you choose. Tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether the redefinition process can be executed:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table: you need a new table with the partition information to act as the interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_LEGACY
          RETENTION NONE
          DEDUPLICATE
          COMPRESS HIGH
        ),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY1
          RETENTION AUTO
          KEEP_DUPLICATES
          COMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY2
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        ),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE (
          TABLESPACE TESTS_OCS_PART_DAY3
          RETENTION AUTO
          KEEP_DUPLICATES
          NOCOMPRESS
        )
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions has been created yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,col_mapping => NULL
        ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
      );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
        ,copy_triggers => TRUE
        ,copy_constraints => TRUE
        ,copy_privileges => TRUE
        ,ignore_errors => TRUE
        ,num_errors => redefinition_errors
        ,copy_statistics => FALSE
        ,copy_mvlog => FALSE
      );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that it will show 2 lines related to the constraints; this is expected. Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the defined ranges, you should see the new partitions created automatically; no error is raised if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use this command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands to check the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file is created and opens; if so, everything is working. The redefinition process can be repeated many times; this allows you to test for the best layout over and over again.

    Now some interesting maintenance actions related to the partitions:

    - Make a tablespace read-only. There are no issues viewing content; WebCenter Content does not alter the revisions. When you try to delete content that lives in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions; if a document is in a "read-only" repository, it deletes the metadata and marks the document for deletion during the next DB maintenance, such as a new redefinition.
    - Take a tablespace offline for archiving purposes or any other reason. When you try to open a document in that tablespace, you receive an error that the content could not be retrieved, but the other online tablespaces are not affected. The same happens when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be brought online again.
    - Move some legacy content to an offline repository (table) using the Exchange option, which moves the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY (see the sketch after this section). Note that this option removes the records from FileStorage, so the stored content can no longer be opened. Always keep the indexes and constraints in mind.
    - A redefinition separating the original content (vault) from the renditions, and separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files on lower-performance storage. The process is the same; you just need to change the interim table script to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW
      NOCACHE LOGGING
      KEEP_DUPLICATES
      NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it somewhere else. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore provider pointed at a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or leave a comment if you have a use case not listed. Cross-posted on the blog.ContentrA.com
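    For the Exchange option mentioned in the list above, a minimal sketch using the names from this test scenario. The CTAS shape-copy and the INCLUDING INDEXES / WITHOUT VALIDATION clauses are assumptions to adapt: the standalone table must match the partition column-for-column, including the SecureFile LOB storage, and equivalent local indexes are needed for INCLUDING INDEXES to apply.

    -- Create an empty, non-partitioned table with the same shape as FileStorage
    CREATE TABLE FileStorage_LEGACY AS
      SELECT * FROM FileStorage WHERE 1 = 0;

    -- Swap the legacy partition's segment with the standalone table
    ALTER TABLE FileStorage
      EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FileStorage_LEGACY
      INCLUDING INDEXES
      WITHOUT VALIDATION;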

    Read the article

  • Using JBoss EL with WebSphere

    - by rat
    Hey, I'm doing a project which is going to run on WebSphere. I'm using JSF/Facelets/RichFaces for this project. I want to use the JBoss EL implementation, as it allows calling methods with parameters from EL, etc. Usually this is accomplished by getting the JBoss EL jar and then putting this in the web.xml:

    <context-param>
        <param-name>com.sun.faces.expressionFactory</param-name>
        <param-value>org.jboss.el.ExpressionFactoryImpl</param-value>
    </context-param>

    However, this isn't working... I don't know if it's a problem with WebSphere or something else. I get a stack trace when going to the page, saying it can't parse the EL where I have passed a method a parameter:

    <a4j:commandLink value="Delete" action="#{mcsaAdmin.deleteLanguage(1234)}" />

    Looking at the stack trace, it appears to still be using the standard Sun EL:

    Caused by: javax.el.ELException: Error Parsing: #{mcsaAdmin.deleteLanguage(1234)}
    at com.sun.el.lang.ExpressionBuilder.createNodeInternal(Unknown Source)
    at com.sun.el.lang.ExpressionBuilder.build(Unknown Source)
    at com.sun.el.lang.ExpressionBuilder.createMethodExpression(Unknown Source)
    at com.sun.el.ExpressionFactoryImpl.createMethodExpression(Unknown Source)
    at com.sun.facelets.tag.TagAttribute.getMethodExpression(TagAttribute.java:141)

    Note the 'com.sun.el.ExpressionFactoryImpl' instead of the 'org.jboss.el.ExpressionFactoryImpl' specified above... Am I doing something obviously wrong? Anyone have any ideas? I'm using the standard JSF implementation (Mojarra, as provided on the Sun website), RichFaces 3.1.4 and Facelets 1.1.14.
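    The last frame is com.sun.facelets.tag.TagAttribute, so it is Facelets (not the JSF runtime) building this expression, and Facelets 1.x picks its ExpressionFactory from its own init parameter rather than from com.sun.faces.expressionFactory. A sketch, assuming the JBoss EL jar is on the webapp classpath; double-check the parameter name against your Facelets version's documentation:

    <context-param>
        <param-name>facelets.EXPRESSION_FACTORY</param-name>
        <param-value>org.jboss.el.ExpressionFactoryImpl</param-value>
    </context-param>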

    Read the article

  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure, but it looks socket-related to me: it's not set to LISTEN, and localhost seems wrong. I've never set up a VPN before.

    # netstat -tulpn | grep vpn
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    udp 0 0 127.0.0.1:1194 0.0.0.0:* 24059/openvpn

    I don't think this is set up correctly. Here's some detail on what I've done. I have a VPS from MediaTemple; these are my interfaces before starting openvpn:

    lo Link encap:Local Loopback
       inet addr:127.0.0.1 Mask:255.0.0.0
       inet6 addr: ::1/128 Scope:Host
       UP LOOPBACK RUNNING MTU:16436 Metric:1
       RX packets:39482 errors:0 dropped:0 overruns:0 frame:0
       TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:0
       RX bytes:3237452 (3.2 MB) TX bytes:3237452 (3.2 MB)

    venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
       inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
       UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
       RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0
       TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:0
       RX bytes:835278537 (835.2 MB) TX bytes:1989289617 (1.9 GB)

    venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
       inet addr:205.[redacted] P-t-P:205.186.148.82 Bcast:0.0.0.0 Mask:255.255.255.255
       UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

    I've followed this guide on setting up a basic server and getting a .p12 file; however, I was getting an error that said /dev/net/tun was missing, so I created it:

    mkdir -p /dev/net
    mknod /dev/net/tun c 10 200
    chmod 600 /dev/net/tun

    This resolved the error that was preventing the service from launching; however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate local 127.0.0.1 (I've also attempted the public IP address; perhaps I don't understand what they mean by local IP?). The server launches without error; this is what the log looks like when it starts:

    Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011
    Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
    Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
    Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
    Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened
    Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500
    Sun Apr 1 17:21:27 2012 GID set to openvpn
    Sun Apr 1 17:21:27 2012 UID set to openvpn
    Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194
    Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef]
    Sun Apr 1 17:21:27 2012 Initialization Sequence Completed

    This creates a tun0 interface that looks like this:

    tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
       inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255
       UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
       RX packets:0 errors:0 dropped:0 overruns:0 frame:0
       TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:100
       RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    And the netstat command still indicates the state is not set to LISTEN. On the client side I've installed the .p12 certs onto two devices (one an Android tablet, the other an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set in the file), but then they oddly ask me for a username and a password, and I don't know where I could possibly get those. I attempted all of my logins, and some wacky guesses that were frantically pulling at straws. If there's any more information I could provide, let me know.
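    Two notes on the output above: UDP sockets never show a LISTEN state in netstat, so that part is expected; but local 127.0.0.1 makes the daemon bind only the loopback interface, which matches the 127.0.0.1:1194 line and would make the server unreachable from outside. For reference, a minimal sketch of the relevant server.conf directives (the file names and the placeholder address are assumptions to adapt):

        # Bind the public interface; omitting 'local' entirely binds all interfaces
        local <your-public-ip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0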

    Read the article

  • Wine 1.6.2 - Trying to switch to a 32-bit Wineprefix from 64-bit Wine (Trusty 14.04). Can anyone help me out?

    - by AlternateSteve90
    Hello fellow Ubuntu users, I'm having a little trouble with Wine 1.6.1 and I was wondering if someone could help me out. I recently downloaded some 32-bit games that I'd wanted to try (BeamNG Drive and Bugbear's Next Car Game demo) and I ran into some trouble trying to get either of these games to run. So I came across a couple pieces of advice on the 'Net, one here on the Ubuntu community site and the other at BeamNG's forums, on how to create a 32-bit wineprefix on a 64-bit setup. I managed to create the wine32 folder, but now I'm having trouble making it my default Wine setup. Anybody have any idea how I can do that? I'll post the URLs for said advice, btw:

    http://www.beamng.com/threads/1788-Installing-DRIVE-under-Linux-via-Wine
    How do I create a 32-bit WINE prefix?

    Here's what I've tried so far in the Terminal:

    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
    wine: chdir to /home/user/wine32 : No such file or directory
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    wine: created the configuration directory '/home/steven/wine32'
    fixme:storage:create_storagefile Storage share mode not implemented.
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x10ee890, overlapped 0x10ee89c): stub
    wine: configuration in '/home/steven/wine32' has been updated.
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=$HOME/.wine32 wine dxsetup.exe
    wine: created the configuration directory '/home/steven/.wine32'
    fixme:storage:create_storagefile Storage share mode not implemented.
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x103e2b8, overlapped 0x103e2d0): stub
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x10fe890, overlapped 0x10fe89c): stub
    wine: configuration in '/home/steven/.wine32' has been updated.
    wine: cannot find L"C:\windows\system32\dxsetup.exe"
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win64 winecfg
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win32 winecfg
    wine: WINEARCH set to win32 but '/home/steven/.wine' is a 64-bit installation.
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
    wine: chdir to /home/user/wine32 : No such file or directory
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH=win32 wine wineboot
    steven@steven-HP-Pavilion-17-Notebook-PC:~$

    So, yeah. TBH, though, I'm far from an expert and perhaps I've been going about it all the wrong way. In the meantime I'll try to keep looking for solutions on my own, but if anybody can help me solve this dilemma, especially anyone who happens to own either of these two games, I'd appreciate it. :)
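    For reference, the usual recipe is to point WINEPREFIX at a directory that does not exist yet and set WINEARCH=win32 on the same command line, then reuse both variables for every later command in that prefix (the prefix path here is an example):

        # Create a brand-new 32-bit prefix; the directory must not exist beforehand
        WINEARCH=win32 WINEPREFIX="$HOME/.wine32" winecfg

        # Run programs inside that prefix with the same variable
        WINEPREFIX="$HOME/.wine32" wine ~/Downloads/setup.exe

    Note that WINEARCH only matters at creation time; after that, the prefix remembers its architecture, and there is no "default prefix switch" beyond exporting WINEPREFIX in your shell.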

    Read the article

  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the Storage Controller and the Cluster Controller on separate systems?

    http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 (the picture indicates that the CC and SC are two different machines)
    http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf (p. 10, first paragraph, uses the word "machine(s)")
    http://software.intel.com/file/31966 (p. 8 indicates the same separate architecture)
    BUT...
    https://help.ubuntu.com/community/UEC/PackageInstallSeparate (indicates that the SC and CC are to be on the same system)

    Read the article

  • Why does storage's performance change at various queue depths?

    - by Mxx
    I'm in the market for a storage upgrade for our servers. I'm looking at benchmarks of various PCIe SSD devices, and in comparisons I see that IOPS change at various queue depths. How can that be, and why is it happening? The way I understand things is: I have a device with a (theoretical) maximum of 100k IOPS. If my workload consistently produces 100,001 IOPS, I'll have a queue depth of 1; am I correct? However, from what I see in benchmarks, some devices run slower at low queue depths, then speed up at depths of 4-64, and then slow down again at even larger depths. Isn't queue depth a property of the OS (or perhaps the storage controller)? Why would it affect IOPS?

    Read the article

  • Download the ZFSSA Objection Handling document (PDF)

    - by swalker
    View and download the new ZFS Storage Appliance objection handling document from the Oracle HW Technical Resource Centre here. This document aims to address the most common objections encountered when positioning the ZFS Storage Appliance disk systems in production environments. It will help you to be more successful in establishing the undeniable benefits of the Oracle ZFS Storage Appliance in your customers' IT environments. If you do not already have an account to access the Oracle Hardware Technical Resource Centre, please click here and follow the instructions to register.

    Read the article

  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today to version 6.24-1build0.10.10.1 after being prompted by the update manager. However, this now causes some compilation failures, so I'd like to revert to the previous version I had installed. I've tried using Synaptic, but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version:

    sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10

    But I'm not sure that I have the correct version:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found

    I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog

    Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 changelog?
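    The pinning syntax is right; the catch is that the version string has to be one apt can actually see in its configured sources, which can be listed before installing. A sketch (the versions shown will vary with your enabled repositories):

        # List every version of the package visible to apt, with its source repository
        apt-cache madison sun-java6-jdk

        # apt-cache policy sun-java6-jdk shows similar detail

        # Install the exact string printed in madison's second column
        sudo apt-get install sun-java6-jdk=<version-from-madison>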

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows- and NFSv4-compatible ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.
    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.
    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs, and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because a server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.
    With a single, integrated sharing and access control model:
    If you connect from Windows or another SMB/CIFS client:
    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.
    If you connect via NFS (typically from a UNIX client):
    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.
    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Welcome to our Friday tips series!

    - by Chris Kawalek
    Today we're starting a brand new blog series. For your Friday afternoon reading, we'll be posting a technical tip or a question and answer on a technical topic. We'll start by introducing ideas on our own, but we'd really like it if you were involved and asked us questions via Twitter! Tag your tweet with #AskOracleVirtualization and we'll consider your question for the blog. Today's tip is on Storage and Oracle Virtual Desktop Infrastructure. Question: I run Oracle Virtual Desktop 3.4.1 on Solaris and use a local ZFS storage pool. How should I configure my ZFS ARC cache? Answer by John Renko, Consulting Developer, Oracle: Oracle recommends about 5 GB of ARC cache per template in use to achieve up to a 90% disk read offload. Set your ARC min equal to max to reserve the maximum amount of your remaining memory for your running VMs. In /etc/system:
        set zfs:zfs_arc_min = 5368709120
        set zfs:zfs_arc_max = 5368709120
    The amount you need to reserve will depend on your template, but this has proven to be a great start for a typical Windows 7 VM running productivity applications.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. However, you can easily prove (with zdb -uu and zdb -r) that the rootblock pointer in the ZFS uberblock, for example, points to blocks with absolutely equal content in all three locations. It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. To its inner mechanisms, a metadata block is no different from a normal data block, because there is no way to tell the device that this block is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this with regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is just that most implementations require you to activate it explicitly, whereas certain devices do it by default or by design and you don't know about it. I'm not perfectly sure about that, though: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've gained nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought before putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid storage pool implementations is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and whose LUNs you use for your pool. However, as mentioned before, on those devices enabling dedup is a user-made decision, so it's less probable that you are deduplicating your redundancies without knowing it. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end of the day, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies and dispersing it across several disks (by mirror or parity RAID) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • Oracle Solaris roundtable: Solaris 11 special [Japanese title partially lost to a character-encoding error]

    - by kazun
    [The body of this entry was a Japanese-language roundtable in which six Oracle Solaris engineers look back on roughly 20 years of the operating system. Almost all of the Japanese text was lost to a character-encoding error, so only fragments survive. The recoverable points: Sun and AT&T's System V Release 4.0 collaboration, dating to 1988, produced the SVR4-based SunOS 5.0, released in 1992 as Solaris 2.0, which added SMP and multithreading support and new APIs on both x86 and SPARC. The version history table runs from Solaris 2.0-2.1, through Solaris 2.2-2.4 (with a PowerPC port around 2.5.1), Solaris 2.6 and Solaris 7 (renamed from 2.7, adding 64-bit support), Solaris 8-9, to Solaris 10-11. The discussion also touches on DTrace, ZFS, SMF (Service Management Facility) and FMA (Fault Management Architecture) introduced with Solaris 10, Legacy Containers for running Solaris 8/9 environments, Sun's "The Network Is The Computer" era, and the early NeWS window system versus X Windows.]

    Read the article

  • Hibernate Exception, what's wrong? [Exception in thread "main" org.hibernate.InvalidMappingException]

    - by user195970
    I'm using NetBeans 6.7.1 to write a Hibernate "hello world", but I get some errors. Please help me, thank you very much. My exception:
        init:
        deps-module-jar:
        deps-ear-jar:
        deps-jar:
        Copying 1 file to F:\Documents and Settings\My Dropbox\DropboxNetBeanProjects\loginspring\build\web\WEB-INF\classes
        compile-single:
        run-main:
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
        INFO: Hibernate 3.2.5
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
        INFO: hibernate.properties not found
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment buildBytecodeProvider
        INFO: Bytecode provider name : cglib
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
        INFO: using JDK 1.4 java.sql.Timestamp handling
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Configuration configure
        INFO: configuring from resource: /hibernate.cfg.xml
        Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Configuration getConfigurationInputStream
        INFO: Configuration resource: /hibernate.cfg.xml
        Oct 25, 2009 2:44:06 AM org.hibernate.cfg.Configuration addResource
        INFO: Reading mappings from resource : hibernate/Tbluser.hbm.xml
        Oct 25, 2009 2:44:06 AM org.hibernate.util.XMLHelper$ErrorLogger error
        SEVERE: Error parsing XML: XML InputStream(1) Document is invalid: no grammar found.
        Oct 25, 2009 2:44:06 AM org.hibernate.util.XMLHelper$ErrorLogger error
        SEVERE: Error parsing XML: XML InputStream(1) Document root element "hibernate-mapping", must match DOCTYPE root "null".
        Exception in thread "main" org.hibernate.InvalidMappingException: Could not parse mapping document from resource hibernate/Tbluser.hbm.xml
            at org.hibernate.cfg.Configuration.addResource(Configuration.java:569)
            at org.hibernate.cfg.Configuration.parseMappingElement(Configuration.java:1587)
            at org.hibernate.cfg.Configuration.parseSessionFactory(Configuration.java:1555)
            at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1534)
            at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1508)
            at org.hibernate.cfg.Configuration.configure(Configuration.java:1428)
            at org.hibernate.cfg.Configuration.configure(Configuration.java:1414)
            at hibernate.CreateTest.main(CreateTest.java:22)
        Caused by: org.hibernate.InvalidMappingException: Could not parse mapping document from invalid mapping
            at org.hibernate.cfg.Configuration.addInputStream(Configuration.java:502)
            at org.hibernate.cfg.Configuration.addResource(Configuration.java:566)
            ... 7 more
        Caused by: org.xml.sax.SAXParseException: Document is invalid: no grammar found.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:250)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:626)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3095)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:921)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
            at org.dom4j.io.SAXReader.read(SAXReader.java:465)
            at org.hibernate.cfg.Configuration.addInputStream(Configuration.java:499)
            ... 8 more
        Java Result: 1
        BUILD SUCCESSFUL (total time: 1 second)
    hibernate.cfg.xml:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-configuration PUBLIC
            "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
        <hibernate-configuration>
          <session-factory>
            <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
            <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/hibernate</property>
            <property name="hibernate.connection.username">root</property>
          </session-factory>
        </hibernate-configuration>
    Tbluser.hbm.xml:
        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Oct 25, 2009 2:37:30 AM by Hibernate Tools 3.2.1.GA -->
        <hibernate-mapping>
          <class name="hibernate.Tbluser" table="tbluser" catalog="hibernate">
            <id name="userId" type="java.lang.Integer">
              <column name="userID" />
              <generator class="identity" />
            </id>
            <property name="username" type="string">
              <column name="username" length="50" />
            </property>
            <property name="password" type="string">
              <column name="password" length="50" />
            </property>
            <property name="email" type="string">
              <column name="email" length="50" />
            </property>
            <property name="phone" type="string">
              <column name="phone" length="50" />
            </property>
            <property name="groupId" type="java.lang.Integer">
              <column name="groupID" />
            </property>
          </class>
        </hibernate-mapping>
    Tbluser.java:
        package hibernate;
        // Generated Oct 25, 2009 2:37:30 AM by Hibernate Tools 3.2.1.GA

        /** Tbluser generated by hbm2java */
        public class Tbluser implements java.io.Serializable {
            private Integer userId;
            private String username;
            private String password;
            private String email;
            private String phone;
            private Integer groupId;

            public Tbluser() { }

            public Tbluser(String username, String password, String email, String phone, Integer groupId) {
                this.username = username;
                this.password = password;
                this.email = email;
                this.phone = phone;
                this.groupId = groupId;
            }

            public Integer getUserId() { return this.userId; }
            public void setUserId(Integer userId) { this.userId = userId; }
            public String getUsername() { return this.username; }
            public void setUsername(String username) { this.username = username; }
            public String getPassword() { return this.password; }
            public void setPassword(String password) { this.password = password; }
            public String getEmail() { return this.email; }
            public void setEmail(String email) { this.email = email; }
            public String getPhone() { return this.phone; }
            public void setPhone(String phone) { this.phone = phone; }
            public Integer getGroupId() { return this.groupId; }
            public void setGroupId(Integer groupId) { this.groupId = groupId; }
        }

    Read the article

  • Which one gives you more advantages: Microsoft or SUN/Oracle certification?

    - by edwin.nathaniel
    Which one gives you more advantages: Microsoft or Sun/Oracle certification? If you don't mind, please relate your answer to your career (dev, sysadmin, DBA), industry (finance, general consulting, business apps, software house), and location (where you live; does it influence your decision?) as well. One reason I ask is that I heard having an MCP (in C#, MCSAD, or whatever the cert is called these days) can give you extra points from the employer's perspective if they're a Microsoft Partner. I don't quite understand why, and I'm also wondering whether Sun/Oracle certificate holders have a similar edge/benefit. I'm also interested to know whether certification plays a role where you live. For example, certification could be a big negative sign in Silicon Valley, but it might be helpful if you're in other US/Canada cities or outside the USA/Canada. If you are an independent software consultant, do these certifications help you win bids? I totally understand that this question might be subjective and could be "closed", but it doesn't hurt to ask, I suppose. I also understand that there are people who dislike or disagree (or have a negative perspective on technology certification). While there are people who hold the opinion that "these are not the kind of employers you would want to work with" (I respect you guys), sometimes a pure software product company might not exist in certain cities of the world. And if certification can help one's career somewhere else, I guess that means more opportunities, right? I've had experience on both platforms: (short) .NET and (not too long) Java (minus the Oracle DB stuff), but I think it's about time I pick one of them and get good at it. Where I live right now might also dictate my career, albeit insignificantly. Anyhow, this last paragraph is just a tiny blurb. Thanks!

    Read the article

  • java runtime 6 with socks v5 proxy - Possible?

    - by rwired
    I have written an application that (amongst other things) runs a local service in Windows that acts as a SOCKS v5 proxy for Firefox. I'm in the debugging phase right now and have found certain websites that don't work correctly. For example, the Java applet for picture uploading on Facebook.com fails because it is unable to look up domains. My app overrides a hidden FF config setting, network.proxy.socks_remote_dns, setting it to true. The whole purpose of the app is to allow access to websites when behind a firewall (e.g. if the user is in China), so this setting is essential to ensure that domains are also resolved remotely (and not just HTTP requests proxied). In the JRE 6 settings (documented here) there isn't an equivalent setting, and since remote DNS resolution is a feature of SOCKS v5 and not v4, as the documentation seems to imply, I'm worried that it's just not possible. How can I programmatically make sure the JRE uses a SOCKS v5 proxy for all requests (including DNS)?
    UPDATE: Steps to reproduce this problem:
    1. Make sure you are behind a firewall that blocks (or redirects) internet access, including DNS.
    2. Install PuTTY and add a dynamic SSH tunnel on some port number of your choice (e.g. 9870), then log in to a remote server that has full access to the internet.
    3. Launch Firefox and you will not be able to browse the web.
    4. In the FF network settings, set the SOCKS v5 proxy to localhost:9870.
    5. In FF, go to about:config and change network.proxy.socks_remote_dns to true.
    6. You will now be able to browse the web. Go to facebook.com, log in, go to your profile and attempt to use the picture-uploader Java applet to add some pictures.
    7. It will fail with a series of class-not-found errors looking similar to: load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found.
    I believe this is failing because the JRE is unable to resolve the domain that the class resides on. I'm basing this belief on the fact that the documentation (http://java.sun.com/javase/6/docs/technotes/guides/deployment/deployment-guide/properties.html) talks only about SOCKS v4 (which, as far as I know, does not support remote DNS). My deployment.properties file is located in %APPDATA%\Sun\Java\Deployment. I can confirm that modifications I make in the Java Control Panel get written into that file. If, instead of "Use browser settings", I override the network settings for Java and attempt to use the SOCKS proxy settings manually, I still have the issue. There does not seem to be an easy way to force the JRE to do DNS remotely through the proxy.
    UPDATE 2: Without the SOCKS proxy, from my local client, www.facebook.com resolves to 203.161.230.171 and upload.facebook.com resolves to 64.33.88.161. Neither host is reachable (because of the firewall). If I log in to the remote server, I get www.facebook.com = 69.63.187.17 and upload.facebook.com = 69.63.178.32. Both of these IPs change after a few minutes, as it seems Facebook uses round-robin DNS and other load balancing. With the proxy settings set in Firefox, I can navigate to www.facebook.com without any difficulty (since DNS is being resolved remotely on the proxy). When I go to the page with the Java applet, it fails with the stack-trace messages I've already reported. However, if I edit Windows\System32\drivers\etc\hosts, adding the correct IP for upload.facebook.com, I can get the applet to load and work correctly (a restart of FF is sometimes necessary). This evidence seems to support my theory that the Java Runtime is not resolving DNS on the proxy, but instead just routing traffic through it. My application is for mass deployment and needs to work with Java applets on other sites (not just Facebook). I really need a work-around for this problem.
    UPDATE 3: Stack-trace dump as requested by ZZ Coder:
        load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found.
        java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.net.SocketException: Connection reset
            at java.net.SocketInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            ... 7 more
        Exception: java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class
        Dumping class loader cache...
        Live entry: key=http://upload.facebook.com/controls/2008.10.10_v5.5.8/,FacebookPhotoUploader5.jar,FacebookPhotoUploader5.jar, refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://upload.facebook.com/controls/2008.10.10_v5.5.8/-threadGroup,maxpri=4]
        Done.
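    For anyone hitting the same problem, a minimal sketch of forcing the JVM itself (rather than Firefox) through a SOCKS proxy via the standard java.net networking properties follows. The host and port are placeholders, and for an applet these would have to be supplied as JVM runtime parameters (Java Control Panel or deployment.properties) rather than set in application code. Note that it is not guaranteed that hostname resolution is deferred to the proxy; that depends on the JRE's SOCKS implementation, which is exactly the behaviour in question.

        import java.net.Socket;

        public class SocksProxySketch {
            public static void main(String[] args) throws Exception {
                // Standard JVM proxy properties; socksProxyVersion selects v4 or v5.
                System.setProperty("socksProxyHost", "127.0.0.1"); // placeholder host
                System.setProperty("socksProxyPort", "9870");      // placeholder port
                System.setProperty("socksProxyVersion", "5");
                // Sockets opened after this point should be routed via the proxy.
                Socket socket = new Socket("www.example.com", 80);
                System.out.println("Connected: " + socket.getRemoteSocketAddress());
                socket.close();
            }
        }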

    Read the article

  • String to date (Invalid format)

    - by MrThys
    I am using the Joda-Time library to convert my String dates into real dates, because this seemed like the easiest way to do it. I am using the DateTime object: new DateTime(strValue); But when passing in some formats it throws exceptions:
        java.lang.IllegalArgumentException: Invalid format: "Mon, 30 Sep 2002 01:56:02 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 19:59:01 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Mon, 30 Sep 2002 01:52:02 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 17:05:20 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 19:09:28 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 15:01:02 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 23:48:33 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 17:24:20 GMT"
        java.lang.IllegalArgumentException: Invalid format: "Sun, 29 Sep 2002 11:13:10 GMT"
    Is there a way to solve this, or should I use something else instead of DateTime?
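    A minimal sketch of the usual fix, assuming these are RFC-1123-style timestamps: the DateTime(String) constructor only accepts ISO-8601 input, so build an explicit formatter instead. The literal 'GMT' in the pattern sidesteps the fact that older Joda-Time releases cannot parse time-zone names:

        import java.util.Locale;
        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;
        import org.joda.time.format.DateTimeFormat;
        import org.joda.time.format.DateTimeFormatter;

        public class Rfc1123Sketch {
            public static void main(String[] args) {
                // Matches e.g. "Mon, 30 Sep 2002 01:56:02 GMT"
                DateTimeFormatter fmt = DateTimeFormat
                        .forPattern("EEE, dd MMM yyyy HH:mm:ss 'GMT'")
                        .withLocale(Locale.ENGLISH) // day/month names are English
                        .withZone(DateTimeZone.UTC);
                DateTime dt = fmt.parseDateTime("Mon, 30 Sep 2002 01:56:02 GMT");
                System.out.println(dt); // 2002-09-30T01:56:02.000Z
            }
        }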

    Read the article

  • Can a List<> be cast to a DataModel?

    - by Ignacio
    I'm trying to do the following:
        public String createByMarcas() {
            items = (DataModel) ejbFacade.findByMarcas(current.getIdMarca().getId());
            updateCurrentItem();
            return "List";
        }
        public List<Modelos> findByMarcas(int idMarca) {
            return em.createQuery("SELECT id, descripcion FROM Modelos WHERE id_marca =" + idMarca + "").getResultList();
        }
    But I keep getting this exception:
        Caused by: javax.ejb.EJBException
            at com.sun.ejb.containers.BaseContainer.processSystemException(BaseContainer.java:5070)
            at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:4968)
            at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4756)
            at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1955)
            at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1906)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:198)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:84)
            at $Proxy347.findByMarcas(Unknown Source)
            at controladores.EJB31_Generated_ModelosFacade_Intf_Bean_.findByMarcas(Unknown Source)
    Can anyone give me a hand, please? Thank you very much.
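    For reference, a hedged sketch of the usual JSF approach (class and bean names taken from the question): a List cannot simply be cast to DataModel, since it doesn't implement that interface; wrap it in javax.faces.model.ListDataModel instead. Note also that the original query selects two scalar columns, so getResultList() would return a List of Object[] rather than Modelos entities; selecting the entity itself avoids that. The JPQL path m.idMarca.id below is a guess based on the getters shown, and binding the parameter instead of concatenating it avoids injection:

        import java.util.List;
        import javax.faces.model.ListDataModel;

        // Inside the managed bean from the question (a sketch, not a drop-in fix):
        public String createByMarcas() {
            List<Modelos> result = ejbFacade.findByMarcas(current.getIdMarca().getId());
            items = new ListDataModel(result); // adapt the List to the DataModel API
            updateCurrentItem();
            return "List";
        }

        // Facade query returning entities instead of scalar columns:
        public List<Modelos> findByMarcas(int idMarca) {
            return em.createQuery("SELECT m FROM Modelos m WHERE m.idMarca.id = :id")
                     .setParameter("id", idMarca)
                     .getResultList();
        }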

    Read the article

  • How do I change a child's parent in NHibernate when cascade is delete-all-orphan?

    - by Daniel T.
    I have two entities in a bi-directional one-to-many relationship:
        public class Storage {
            public IList<Box> Boxes { get; set; }
        }
        public class Box {
            public Storage CurrentStorage { get; set; }
        }
    And the mapping:
        <class name="Storage">
          <bag name="Boxes" cascade="all-delete-orphan" inverse="true">
            <key column="Storage_Id" />
            <one-to-many class="Box" />
          </bag>
        </class>
        <class name="Box">
          <many-to-one name="CurrentStorage" column="Storage_Id" />
        </class>
    A Storage can have many Boxes, but a Box can only belong to one Storage. I have them mapped so that the one-to-many has a cascade of all-delete-orphan. My problem arises when I try to change a Box's Storage. Assume I have already run this code:
        var storage1 = new Storage();
        var storage2 = new Storage();
        storage1.Boxes.Add(new Box());
        Session.Create(storage1);
        Session.Create(storage2);
    The following code will then give me an exception:
        // get the first and only box in the DB
        var existingBox = Database.GetBox().First();
        // remove the box from storage1
        existingBox.CurrentStorage.Boxes.Remove(existingBox);
        // add the box to storage2 after it's been removed from storage1
        var storage2 = Database.GetStorage().Second();
        storage2.Boxes.Add(existingBox);
        Session.Flush(); // commit changes to DB
    I get the following exception:
        NHibernate.ObjectDeletedException : deleted object would be re-saved by cascade (remove deleted object from associations)
    This exception occurs because I have the cascade set to all-delete-orphan. The first Storage detects that I removed the Box from its collection and marks it for deletion. However, when I add it to the second Storage (in the same session), NHibernate attempts to save the Box again and the ObjectDeletedException is thrown. My question is: how do I get the Box to change its parent Storage without encountering this exception? I know one possible solution is to change the cascade to just all, but then I lose the ability to have NHibernate automatically delete a Box by simply removing it from a Storage and not re-associating it with another one. Or is this the only way to do it, and do I have to manually call Session.Delete on the Box in order to remove it?

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >