Search Results

Search found 3323 results on 133 pages for 'winter sun'.


  • Sun App Server Deployment Error

    - by Nick Long
    Sun App Server deployment: when I choose to precompile JSPs, deployment throws this error: com.sun.enterprise.admin.common.exception.MBeanConfigException: Component not registered. I then have to run asadmin undeploy. Does anyone know the reason for this error?

    Read the article

  • java applet won't work

    - by scoobi_doobi
    hey guys, this is homework stuff, but the question is not much about coding. the task is to write a java applet to work on an m-grid server. i have the server running on apache. it has a few sample applets in .jar and .class form. the .class versions work; the .jar versions work on appletviewer, but they break if I submit them as a job to the server with this: load: class examples/pixelcount/PixelCount.class not found. java.lang.ClassNotFoundException: examples.pixelcount.PixelCount.class at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source) at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source) at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at sun.net.NetworkClient.doConnect(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.<init>(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source) at java.net.HttpURLConnection.getResponseCode(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) ... 7 more Exception: java.lang.ClassNotFoundException: examples.pixelcount.PixelCount.class I'm not really sure where exactly is the problem in here, given that they work on appletviewer. any help would be appreciated.. EDIT: don't know if I wrote it clearly. by ".class version" i refer to html file with this content: <applet height="300" width="450" code="examples/pixelcount/PixelCount.class"></applet> and ".jar" with this content: <applet height="300" width="450" archive="PixelCount.jar" code="examples.pixelcount.PixelCount.class"></applet>

    Read the article

  • Don’t miss rare Venus transit across Sun on June 5th. Once in a life time event.

    - by Gopinath
    Space lovers, here is a rare event you don't want to miss. On June 5th or 6th of 2012, depending on which part of the globe you live in, the planet Venus will pass across the Sun, and it will not happen again until 2117. During the six-hour-long spectacular transit you can see the silhouette of Venus cross the Sun. Transits of Venus occur in pairs eight years apart, with the previous one taking place in 2004; the next pair of transits occurs 105.5 and 121.5 years later. The best place to watch the event would be a nearby planetarium with a telescope facility. Failing that, you can watch it directly, but you must protect your eyes at all times with proper solar filters.

    Where can we see the transit? The transit of Venus will be clearly visible in Europe, Asia, the United States and parts of Australia. Americans will be able to see the transit in the evening of Tuesday, June 5, 2012. Eurasians and Africans can see the transit in the morning of June 6, 2012.

    At what time does the event occur? The principal events occurring during a transit are conveniently characterized by contacts, analogous to the contacts of an annular solar eclipse. The transit begins with contact I, the instant the planet's disk is externally tangent to the Sun. Shortly after contact I, the planet can be seen as a small notch along the solar limb. The entire disk of the planet is first seen at contact II, when the planet is internally tangent to the Sun. Over the course of several hours, the silhouetted planet slowly traverses the solar disk. At contact III, the planet reaches the opposite limb and once again is internally tangent to the Sun. Finally, the transit ends at contact IV, when the planet's limb is externally tangent to the Sun.

    Event          Universal Time
    Contact I      22:09:38
    Contact II     22:27:34
    Greatest       01:29:36
    Contact III    04:31:39
    Contact IV     04:49:35

    Transit of Venus animation: here is a nice video animation on the transit of Venus. Map courtesy of Steven van Roode; source: NASA.

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html

    The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    Where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader.

    Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map.

    It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class.
    Ultimately, when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded, the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading, and they both continue successfully.

    Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!)

    There are a number of obvious things we can try to solve this problem, and they basically take three forms:

    1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
    2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, use a single shared lockMap instead of a per-loader lockMap.
    3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically).

    There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
    Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard the loading of multiple classes, possibly across different class loaders?

    So it seems that changing the specification will be inevitable if we wish to do something here. In that case, let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading.

    Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

    Proposal: Fully Concurrent ClassLoaders

    The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method.

    All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders.

    Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

    Preliminary webrevs:
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

    Please direct all comments to the mailing list [email protected].
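    To make the idea concrete, here is a minimal, hypothetical sketch (not the actual OpenJDK change) of what a lock-free loadClass could look like, assuming the VM-level "parallel define class" mechanism lets two racing threads both reach defineClass and both receive the same Class object. It mirrors the JDK-internal loadClass shown above with the synchronized block and parallelLockMap entry simply removed:

        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            // No synchronized block and no per-name lock: concurrent callers may race.
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = (parent != null) ? parent.loadClass(name, false) : findBootstrapClassOrNull(name);
                } catch (ClassNotFoundException e) {
                    // fall through and try to find the class ourselves
                }
                if (c == null) {
                    // With a defineClassIfNotPresent-style primitive in the VM, a losing
                    // racer simply gets back the Class instance the winning thread defined.
                    c = findClass(name);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }

    In this sketch the interesting work all happens in the VM's find_or_define_class path; at the Java level the only visible change is the absence of the lock.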

    Read the article

  • Parallel Environment (PE) on Sun Grid Engine (6.2u5) won't run jobs: "only offers 0 slots"

    - by Peter Van Heusden
    I have Sun Grid Engine set up (version 6.2u5) on a Ubuntu 10.10 server with 8 cores. In order to be able to reserve multiple slots, I have a parallel environment (PE) set up like this:

        pe_name            serial
        slots              999
        user_lists         NONE
        xuser_lists        NONE
        start_proc_args    /bin/true
        stop_proc_args     /bin/true
        allocation_rule    $pe_slots
        control_slaves     FALSE
        job_is_first_task  TRUE
        urgency_slots      min
        accounting_summary FALSE

    This is associated with the all.q on the server in question (let's call the server A). However, when I submit a job that uses 4 threads with e.g. qsub -q all.q@A -pe serial 4 mycmd.sh, it never gets scheduled, and I get the following reasoning from qstat:

        cannot run in PE "serial" because it only offers 0 slots

    Why is SGE saying "serial" only offers 0 slots, since there are 8 slots available on the server I specified (server A)? The queue in question is configured thus (server names changed):

        qname                 all.q
        hostlist              @allhosts
        seq_no                0
        load_thresholds       np_load_avg=1.75
        suspend_thresholds    NONE
        nsuspend              1
        suspend_interval      00:05:00
        priority              0
        min_cpu_interval      00:05:00
        processors            UNDEFINED
        qtype                 BATCH INTERACTIVE
        ckpt_list             NONE
        pe_list               make orte serial
        rerun                 FALSE
        slots                 1,[D=32],[C=8],[B=30],[A=8]
        tmpdir                /tmp
        shell                 /bin/sh
        prolog                NONE
        epilog                NONE
        shell_start_mode      posix_compliant
        starter_method        NONE
        suspend_method        NONE
        resume_method         NONE
        terminate_method      NONE
        notify                00:00:60
        owner_list            NONE
        user_lists            NONE
        xuser_lists           NONE
        subordinate_list      NONE
        complex_values        NONE
        projects              NONE
        xprojects             NONE
        calendar              NONE
        initial_state         default
        s_rt                  INFINITY
        h_rt                  08:00:00
        s_cpu                 INFINITY
        h_cpu                 INFINITY
        s_fsize               INFINITY
        h_fsize               INFINITY
        s_data                INFINITY
        h_data                INFINITY
        s_stack               INFINITY
        h_stack               INFINITY
        s_core                INFINITY
        h_core                INFINITY
        s_rss                 INFINITY
        h_rss                 INFINITY
        s_vmem                INFINITY
        h_vmem                INFINITY,[A=30g],[B=5g]

    Read the article

  • Sun Grid Engine (SGE) / limiting simultaneous array job sub-tasks

    - by wfaulk
    I am installing a Sun Grid Engine environment and I have a scheduler limit that I can't quite figure out how to implement. My users will create array jobs that have hundreds of sub-tasks. I would like to be able to limit those jobs to only running a set number of tasks at the same time, independent of other jobs. Like I might have one array job that I want to run 20 tasks at a time, and another I want to run 50 tasks at a time, and yet another that I'm fine running without limit. It seems like this ought to be doable, but I can't figure it out. There's a max_aj_instances configuration option, but that appears to apply globally to all array jobs. I can't see any way to use consumable resources, as I'd need a "complex attribute" that is per-job, and that feature doesn't seem to exist. It didn't look like resource quotas would work, but now I'm not so sure of that. It says "A resource quota set defines a maximum resource quota for a particular job request", but it's unclear if an array job's sub-tasks' resource requests will be aggregated for the purposes of the resource quota. I'm going to play with this, but hopefully someone already knows outright.

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story?  Hard disk goes missing, USB thumb drive goes missing, laptop goes missing...Not a week goes by that we don't hear about our data going missing...  Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property...  When I have spoken at Security and IT conferences part of my message is "Why do you give your users data to lose in the first place?"  I don't suggest they can't have access to it...in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access.  Access isn't the issue.  'Keeping the data' is the issue.We are all human - we all make mistakes... I fault no one for having their car stolen or that they dropped a USB thumb drive. (well, except the thieves - I can certainly find some fault there)  Where I find fault is in policy (or lack thereof sometimes) that allows users to carry around private, and important, data with them.  Mr. Director of IT - It is your fault, not theirs.  Ms. CSO - Look in the mirror.It isn't like one can't find a network to access the data from.  You are on a network right now.  How many Wireless ones (wifi, mifi, cellular...) are there around you, right now?  Allowing employees to remove data from the confines of (wait for it... ) THE DATA CENTER is just plain indefensible when it isn't required.  The argument that the laptop had a password and the hard disk was encrypted is ridiculous.  An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, Identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota.  The full article is here, but the point was that a couple laptops went missing in a couple different cases, and.. well... no one knows where the data is, and yes - they were loaded with patient info.  What were you thinking?Obviously you can't steal data form a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices...  but in all cases, there is no keeping the data unless you explicitly allow for it in your policy.   Since you can get at the data securely from any network, why would you want to take personal responsibility for it?  Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough.We need to do a better job of getting the message out -  Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love.Thanks for putting up my blood pressure, Denny.

    Read the article

  • Getting no class def found error. Log4J -> Java

    - by Nitesh Panchal
    Hello, I created a simple web application and added log4J's jar in my lib folder and it seems to work fine. Then i created a Ejb module and did the same thing of adding jar file in my classpath, but i am getting this error :- Caused by: javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean: java.lang.NoClassDefFoundError: org/apache/log4j/Logger at com.sun.ejb.containers.BaseContainer.checkExceptionClientTx(BaseContainer.java:4929) at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4761) at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1955) ... 94 more Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Logger at common.Utils.getRssFeed(Utils.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.glassfish.ejb.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1052) at org.glassfish.ejb.security.application.EJBSecurityManager.invoke(EJBSecurityManager.java:1124) at com.sun.ejb.containers.BaseContainer.invokeBeanMethod(BaseContainer.java:5243) at com.sun.ejb.EjbInvocation.invokeBeanMethod(EjbInvocation.java:615) at com.sun.ejb.containers.interceptors.AroundInvokeChainImpl.invokeNext(InterceptorManager.java:797) at com.sun.ejb.EjbInvocation.proceed(EjbInvocation.java:567) at com.sun.ejb.containers.interceptors.SystemInterceptorProxy.doAround(SystemInterceptorProxy.java:157) at com.sun.ejb.containers.interceptors.SystemInterceptorProxy.aroundInvoke(SystemInterceptorProxy.java:139) at sun.reflect.GeneratedMethodAccessor102.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.ejb.containers.interceptors.AroundInvokeInterceptor.intercept(InterceptorManager.java:858) at com.sun.ejb.containers.interceptors.AroundInvokeChainImpl.invokeNext(InterceptorManager.java:797) at com.sun.ejb.containers.interceptors.InterceptorManager.intercept(InterceptorManager.java:367) at com.sun.ejb.containers.BaseContainer.__intercept(BaseContainer.java:5215) at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:5203) at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:190) ... 92 more I don't even have any other version of log4j in classpath. Any idea why is this error coming? Actually first i created a Enterprise application and everything seemed to work fine. But then, i copied files from my enterprise application's ejb module to a separate module and since then these errors are coming. This is in my build-impl.xml :- <target depends="compile" name="library-inclusion-in-archive"> <copyfiles files="${libs.Log4J.classpath}" todir="${build.classes.dir}"/> </target> <target depends="compile" name="library-inclusion-in-manifest"> <copyfiles files="${libs.Log4J.classpath}" todir="${dist.ear.dir}/lib"/> <manifest file="${build.ear.classes.dir}/META-INF/MANIFEST.MF" mode="update"/> </target> Don't know why is this pointing to ear instead of jar. Is there any problem with the above statement? How do i resolve this? I am using Netbeans 6.8 Thanks in advance :)
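    For context, the failure pattern here is the classic one: common.Utils compiled fine against log4j (the jar was on the compile-time classpath), but at runtime the EJB module's classloader cannot see org.apache.log4j.Logger, so the first use of the class fails with NoClassDefFoundError. A hypothetical sketch of the kind of code that trips this (the real Utils.getRssFeed is not shown in the question):

        package common;

        import org.apache.log4j.Logger;

        public class Utils {
            // Resolving this field's type is what fails at runtime when log4j.jar is not
            // visible to the EJB module's classloader, even though compilation succeeded.
            private static final Logger LOG = Logger.getLogger(Utils.class);

            public static String getRssFeed(String url) {
                LOG.debug("Fetching RSS feed from " + url); // corresponds to the Utils.java:128 frame in the trace
                // ... feed-fetching logic elided ...
                return "";
            }
        }

    The usual remedy is to make sure log4j.jar is actually visible to the EJB module at runtime - for example in the EAR's lib/ directory (which the library-inclusion-in-manifest target appears to attempt) or in the application server's shared library location - rather than only on the compile-time classpath.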

    Read the article

  • What's In Storage?

    - by [email protected]
    Oracle Flies South for Storage Networking Event Storage Networking World (now simply called SNW) is the place you'll find the most-comprehensive education on storage, infrastructure, and the datacenter in the spring of 2010. It's also the place where you'll see Oracle. During the April 12-15 event in Orlando, Florida, the industry's premiere presentations on storage trends and best practices are combined with hands-on labs covering storage management and IP storage. You'll also have the opportunity to learn about Oracle's Sun storage solutions, from Flash and open storage to enterprise disk and tape. Plus, if you stop by booth 207 in the expo hall, you might walk away with a bookish prize: an Amazon Kindle, courtesy of Oracle. Proving, once again, that education can be quite rewarding.

    Read the article

  • Oracle Desktop Virtualization Press Release

    - by [email protected]
    Even though Oracle has introduced new products (HW and SW), new pricing and part numbers, modified licensing, and an EVP and an SVP have discussed openly where Oracle is going with virtualization, you may still have heard from 'the other guys' that Oracle isn't going to be keeping the legacy Sun 'Desktop' portfolio. I think that has been soundly addressed by the press release this morning. Click here for the release. This is a great way to kick off Oracle's new (fiscal) year. As there are more announcements coming, I'll just say "Enjoy!" and "Stay tuned".

    Read the article

  • What's So Smart About Oracle Exadata Smart Flash Cache?

    - by kimberly.billings
    Want to know what's so "smart" about Oracle Exadata Smart Flash Cache? This three minute video explains how Oracle Exadata Smart Flash Cache helps solve the random I/O bottleneck challenge and delivers extreme performance for consolidated database applications. Exadata Smart Flash Cache is a feature of the Sun Oracle Database Machine. With it, you get ten times faster I/O response time and use ten times fewer disks for business applications from Oracle and third-party providers. Read the whitepaper for more information.

    Read the article

  • OPN SPECIALIZED Webcasts

    - by Claudia Costa
    OPN Specialized Webcast Series for Partners: for the EMEA region the webcasts start at 11:00 CET/10:00 GMT. Each training session will run for approximately one hour and include live Q&A.

    "How to become Specialized in the Applications products portfolio," 25th May 2010, 11:00 CET/10:00 GMT. Click here for more information & registration.

    "How to become an OPN Specialized Reseller of Oracle's Sun SPARC Servers, Storage, Software and Services," 1st June 2010, 11:00 CET/10:00 GMT. Click here for more information & registration.

    Read the article

  • October 2012 Security "Critical Patch Update" (CPU) information and downloads released

    - by user12244672
    The October 2012 security "Critical Patch Update" information and downloads are now available from My Oracle Support (MOS). See http://www.oracle.com/technetwork/topics/security/alerts-086861.html and in particular Document 1475188.1 on My Oracle Support (MOS), http://support.oracle.com, which includes security CVE mappings for Oracle Sun products. For Solaris 11, Doc 1475188.1 points to the relevant SRUs containing the fixes for each issue. SRU12.4 was released on the CPU date and contains the current cumulative security fixes for the Solaris 11 OS. For Solaris 10, we take a copy of the Recommended Solaris OS patchset containing the relevant security fixes and rename it as the October CPU patchset on MOS; see the link provided from Doc 1475188.1. Doc 1475188.1 also contains references for firmware, etc., and links to other useful security documentation, including information on Userland/FOSS vulnerabilities and fixes at https://blogs.oracle.com/sunsecurity/

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results.

    (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made:

    (1.1) Which virtualization option? There are several virtualization options and isolation levels available. Options include:
    - Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series Servers
    - Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers
    - OS virtualization using Oracle Solaris Containers
    - Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth.
    Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones."

    (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.

    (1.3) Cluster topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution.

    (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day.

    (2.0) Configuration tested.

    (2.1) I was using a SPARC T4-2 Server, which has 2 CPUs and 128 virtual processors.
    To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl:

        test# ./psrinfo.pl -pv
        The physical processor has 8 cores and 64 virtual processors (0-63)
          The core has 8 virtual processors (0-7)
          The core has 8 virtual processors (8-15)
          The core has 8 virtual processors (16-23)
          The core has 8 virtual processors (24-31)
          The core has 8 virtual processors (32-39)
          The core has 8 virtual processors (40-47)
          The core has 8 virtual processors (48-55)
          The core has 8 virtual processors (56-63)
            SPARC-T4 (chipid 0, clock 2848 MHz)
        The physical processor has 8 cores and 64 virtual processors (64-127)
          The core has 8 virtual processors (64-71)
          The core has 8 virtual processors (72-79)
          The core has 8 virtual processors (80-87)
          The core has 8 virtual processors (88-95)
          The core has 8 virtual processors (96-103)
          The core has 8 virtual processors (104-111)
          The core has 8 virtual processors (112-119)
          The core has 8 virtual processors (120-127)
            SPARC-T4 (chipid 1, clock 2848 MHz)

    (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic.

    (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ).

    (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers.

    (3.1) Stop AppServers from the BUI.

    (3.2) Stop the NGZ.
        test# ssh test-z2 init 5

    (3.3) Enable resource pools:
        test# svcadm enable pools

    (3.4) Create the resource pool:
        test# poolcfg -dc 'create pool pool-test-z2'

    (3.5) Create the processor set:
        test# poolcfg -dc 'create pset pset-test-z2'

    (3.6) Specify the maximum number of CPUs that may be added to the processor set:
        test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'

    (3.7) bash syntax to add virtual CPUs to the processor set:
        test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done

    (3.8) Associate the resource pool with the processor set:
        test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'

    (3.9) Tell the zone to use the resource pool that has been created:
        test# zonecfg -z test-z1 set pool=pool-test-z2

    (3.10) Boot the Oracle Solaris Container:
        test# zoneadm -z test-z2 boot

    (3.11) Save the configuration to /etc/pooladm.conf:
        test# pooladm -s

    (4.0) Results. Using the resource pools improves both throughput and response time.

    (5.0) References:
    - System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
    - Capitalizing on large numbers of processors with WebSphere Portal on Solaris
    - WebSphere Application Server and T5440 (Dileep Kumar's Weblog)
    - http://www.brendangregg.com/zones.html
    - Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.

    Read the article

  • Hibernate Query Language Problem

    - by Sarang
    Well, I have implemented a distinct query in Hibernate. It returns results, but while casting, the fields get interchanged, so it generates a casting error. What should the solution be? As an example, I have a table "ProjectAssignment" that has three fields: aid, pid and userName. I want all distinct userName data from this table. I have applied this query:

        select distinct userName, aid, pid from ProjectAssignment

    whereas the ProjectAssignment.java file has the fields in the sequence aid, pid and userName. Now, here userName is the first field in the output, so casting is not possible. Also, the query

        select aid, pid, distinct userName from ProjectAssignment

    is not working. What is the proper query for this? Or what other solution is there? The code is as below.

    System Utilization Service bean method where I have to retrieve data:

        public List<ProjectAssignment> getProjectAssignments() {
            projectAssignments = ProjectAssignmentHelper.getAllResources(); // Here comes the error
            return projectAssignments;
        }

    ProjectAssignmentHelper, from where I fetch data:

        package com.hibernate;

        import java.util.List;
        import org.hibernate.Query;
        import org.hibernate.Session;

        public class ProjectAssignmentHelper {
            public static List<ProjectAssignment> getAllResources() {
                List<ProjectAssignment> projectMasters;
                Session session = HibernateUtil.getSessionFactory().openSession();
                Query query = session.createQuery("select distinct aid, pid, userName from ProjectAssignment");
                projectMasters = (List<ProjectAssignment>) query.list();
                session.close();
                return projectMasters;
            }
        }

    Hibernate data bean:

        package com.hibernate;

        public class ProjectAssignment implements java.io.Serializable {
            private short aid;
            private String pid;
            private String userName;

            public ProjectAssignment() { }

            public ProjectAssignment(short aid) { this.aid = aid; }

            public ProjectAssignment(short aid, String pid, String userName) {
                this.aid = aid;
                this.pid = pid;
                this.userName = userName;
            }

            public short getAid() { return this.aid; }
            public void setAid(short aid) { this.aid = aid; }
            public String getPid() { return this.pid; }
            public void setPid(String pid) { this.pid = pid; }
            public String getUserName() { return this.userName; }
            public void setUserName(String userName) { this.userName = userName; }
        }

    Error:

    For input string: "userName" java.lang.NumberFormatException: For input string: "userName" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:447) at java.lang.Integer.parseInt(Integer.java:497) at javax.el.ArrayELResolver.toInteger(ArrayELResolver.java:375) at javax.el.ArrayELResolver.getValue(ArrayELResolver.java:195) at javax.el.CompositeELResolver.getValue(CompositeELResolver.java:175) at com.sun.faces.el.FacesCompositeELResolver.getValue(FacesCompositeELResolver.java:72) at com.sun.el.parser.AstValue.getValue(AstValue.java:116) at com.sun.el.parser.AstValue.getValue(AstValue.java:163) at com.sun.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:219) at com.sun.faces.facelets.el.TagValueExpression.getValue(TagValueExpression.java:102) at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:190) at javax.faces.component.ComponentStateHelper.eval(ComponentStateHelper.java:178) at javax.faces.component.UICommand.getValue(UICommand.java:218) at org.primefaces.component.commandlink.CommandLinkRenderer.encodeMarkup(CommandLinkRenderer.java:113) at org.primefaces.component.commandlink.CommandLinkRenderer.encodeEnd(CommandLinkRenderer.java:54) at
javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:878) at org.primefaces.renderkit.CoreRenderer.renderChild(CoreRenderer.java:70) at org.primefaces.renderkit.CoreRenderer.renderChildren(CoreRenderer.java:54) at org.primefaces.component.datatable.DataTableRenderer.encodeTable(DataTableRenderer.java:525) at org.primefaces.component.datatable.DataTableRenderer.encodeMarkup(DataTableRenderer.java:407) at org.primefaces.component.datatable.DataTableRenderer.encodeEnd(DataTableRenderer.java:193) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:878) at org.primefaces.renderkit.CoreRenderer.renderChild(CoreRenderer.java:70) at org.primefaces.renderkit.CoreRenderer.renderChildren(CoreRenderer.java:54) at org.primefaces.component.tabview.TabViewRenderer.encodeContents(TabViewRenderer.java:198) at org.primefaces.component.tabview.TabViewRenderer.encodeMarkup(TabViewRenderer.java:130) at org.primefaces.component.tabview.TabViewRenderer.encodeEnd(TabViewRenderer.java:48) at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:878) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1620) at javax.faces.render.Renderer.encodeChildren(Renderer.java:168) at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:848) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1613) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1616) at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1616) at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:380) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:126) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:127) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:313) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:802) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:664) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:497) at org.apache.catalina.core.ApplicationDispatcher.doDispatch(ApplicationDispatcher.java:468) at org.apache.catalina.core.ApplicationDispatcher.dispatch(ApplicationDispatcher.java:364) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:314) at org.apache.jasper.runtime.PageContextImpl.forward(PageContextImpl.java:783) at org.apache.jsp.welcome_jsp._jspService(welcome_jsp.java from :59) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:109) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:406) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:483) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:373) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at 
com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619)
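    For what it's worth, the underlying issue is that an HQL projection such as select distinct userName, aid, pid returns a list of Object[] rows rather than ProjectAssignment entities, so the unchecked cast in getAllResources only blows up later, when the page indexes into the supposed entity. A hedged sketch of two common ways to get typed results, written as a drop-in for the body of getAllResources and assuming the mapping shown above:

        // Option 1: ask only for the distinct property you actually need.
        Query nameQuery = session.createQuery("select distinct p.userName from ProjectAssignment p");
        @SuppressWarnings("unchecked")
        List<String> userNames = (List<String>) nameQuery.list();

        // Option 2: keep whole entities by using an HQL constructor expression,
        // so the result really is a List<ProjectAssignment>.
        Query ctorQuery = session.createQuery(
            "select distinct new com.hibernate.ProjectAssignment(p.aid, p.pid, p.userName) from ProjectAssignment p");
        @SuppressWarnings("unchecked")
        List<ProjectAssignment> assignments = (List<ProjectAssignment>) ctorQuery.list();

    Either way the column order in the select list stops mattering, because nothing is cast from a raw Object[] row.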

    Read the article

  • Avoid generating empty STDOUT and STDERR files with Sun Grid Engine (SGE) and array jobs

    - by vy32
    I am running array jobs with Sun Grid Engine (SGE). My carefully scripted array job workers generate no stdout and no stderr when they function properly. Unfortunately, SGE insists on creating an empty stdout and stderr file for each run. Sun's manual states:

        STDOUT and STDERR of array job tasks will be written into different files with the default location .['e'|'o']'.' In order to change this default, the -e and -o options (see above) can be used together with the pseudo-environment-variables $HOME, $USER, $JOB_ID, $JOB_NAME, $HOSTNAME, and $SGE_TASK_ID. Note that you can use the output redirection to divert the output of all tasks into the same file, but the result of this is undefined.

    I would like to have the output files suppressed if they are empty. Is there any way to do this?

    Read the article

  • .apk signing fails even with Sun JDK (java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyP

    - by ianweller
    I'm having an interesting problem signing my Android application, whether or not I'm using a debug key. Regardless of the JDK I have installed to /usr/bin/{java,keytool,jarsigner} (OpenJDK or Sun's JDK) it will always give the following output after compiling successfully: -package-debug-sign: [apkbuilder] Creating RemoteNotify-debug-unaligned.apk and signing it with a debug key... BUILD FAILED /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:281: The following error occurred while executing this line: /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:152: java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyProvider The application was built and signed just fine by Eclipse with the ADT plugin (even without Sun's JDK installed). I'm on Fedora 12. I'm wanting to get my code out of Eclipse and move it into a git repository, but being unable to build it from ant will not allow this to happen.

    Read the article

  • do not use com.sun.xml.internal.*?

    - by sarah xia
    Hi all, is this statement true: "The com.sun.xml.internal package is an internal package, as the name suggests. Users should not write code that depends on internal JDK implementation classes. Such classes are internal implementation details of the JDK and subject to change without notice." One of my colleagues used one of the classes in his code, which caused the javac task in Ant to fail to compile our project, as the compiler couldn't find the class. The answer from Sun/Oracle says that this is expected behavior of the compiler, as users shouldn't use the package. The question is: why were the classes in the package made public in the first place? Thanks, Sarah
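    As a hedged illustration of the usual remedy (the exact internal class the colleague used isn't named in the question): stick to the supported javax.* entry points and let the JDK resolve its own internal implementation, instead of importing com.sun.xml.internal.* directly, which javac deliberately hides from the compile-time classpath. For example, with JAXB:

        import java.io.StringWriter;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;

        public class JaxbExample {
            // bean's class is assumed to be JAXB-annotated (e.g. @XmlRootElement).
            public static String toXml(Object bean) throws Exception {
                // Public, supported API: the factory picks an internal implementation
                // (under com.sun.xml.internal.bind in the Oracle JDK) without our code
                // ever referring to that package.
                JAXBContext ctx = JAXBContext.newInstance(bean.getClass());
                Marshaller marshaller = ctx.createMarshaller();
                marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                StringWriter out = new StringWriter();
                marshaller.marshal(bean, out);
                return out.toString();
            }
        }

    The internal classes are public only because the JDK's own packages need to call them across package boundaries; visibility to the compiler is restricted separately, which is why javac refuses them even though reflection or an IDE may appear to accept them.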

    Read the article

  • THE FUTURE OF THE CLOUD UP FOR DEBATE AT THE XX NATIONAL ORACLE USERS CONGRESS

    - by comunicacion-es_es(at)oracle.com
    Back to a mini Oracle OpenWorld! The Oracle user community will hold its XX National Congress in Madrid on the 16th and 17th of March, where ALL of Oracle's areas will be represented (applications, technology, hardware and channel). Under the motto "Agility, innovation and business optimization", we will have prestigious international speakers such as Massimo Pezzini, vice president at Gartner; Rex Wang, cloud computing expert and vice president of product marketing at Oracle; and Janny Ekelson, director of applications and architecture at FedEx Express Europe. Besides the more than 15 success stories among the more than 40 scheduled presentations, cloud computing will be one of the star topics, together with Oracle's hardware strategy following the acquisition of Sun. We look forward to seeing you there!

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction

    This document describes the process of creating and installing a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features: optimizing performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, plus improved operating system visibility with Solaris DTrace. The most common use for this tool is when performing consolidation of existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on the Disaster Recovery (DR) site; another option can be building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

    Oracle Solaris Zones

    Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

    Oracle Solaris Zones Physical-to-Virtual (P2V)

    A new feature for Solaris 10 9/10, this provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps to using this tool:
    1. Image creation on the source system; this image includes the operating system and optionally the software we want to include within the image.
    2. Preparing the target system by configuring a new zone that will host the new image.
    3. Image installation on the target system using the image created in step 1.
    The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

    Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

    Here are some benefits of this new feature:
    - Simple: easy build process using Oracle Solaris 10 built-in commands.
    - Robust: based on Oracle Solaris Zones, a robust and well-known virtualization technology.
    - Flexible: supports migration from V-series servers onto T- or M-series systems.
    For the latest server information, refer to the Sun Servers web page.

    Prerequisites

    The target Oracle Solaris system should be running the latest version of the patch cluster, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris. NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).

    Creating the image used to distribute the software

    We will create an image on the source machine.
We can create the image on the local file system and then transfer it to the target machine, or build it into a NFS shared storage andmount the NFS file system from the target machine.Optional  before creating the image we need to complete the software installation that we want to include with the Solaris 10 image.An image is created by using the flarcreate command:Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flarThe command does the following:  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).  -n specifies the image name.  -L specifies the archive format (i.e cpio). Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flarYou can see example of the archive identification section in Appendix A: archive identification section.We can compress the flar image using the gzip command or adding the -c option to the flarcreate commandSource # gzip /var/tmp/solaris_10_up9.flarAn md5 checksum can be created for the image in order to ensure no data tamperingSource # digest -v -a md5 /var/tmp/solaris_10_up9.flar Moving the image into the target system.If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmpConfiguring the Zone on the target systemAfter copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone.To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html  )ZFS integrationA flash archive can be created on a system that is running a UFS or a ZFS root file system.NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then bydefault, the flar will actually be a ZFS send stream, which can be used to recreate the root pool.This image cannot be used to install a zone. You must create the flar with an explicit cpio or paxarchive when the system has a ZFS root.Use the flarcreate command with the -L archiver option, specifying cpio or pax as themethod to archive the files. 
(For example, see Step 1 in the previous section).Optionally, on the target system you can create the zone root folder on a ZFS file system inorder to benefit from the ZFS features (clones, snapshots, etc...).Target # zpool create zones c2t2d0 Create the zone root folder:Target # chmod 700 /zones Target # zonecfg -z solaris10-up9-zonesolaris10-up9-zone: No such zone configuredUse 'create' to begin configuring a new zone.zonecfg:solaris10-up9-zone> createzonecfg:solaris10-up9-zone> set zonepath=/zoneszonecfg:solaris10-up9-zone> set autoboot=truezonecfg:solaris10-up9-zone> add netzonecfg:solaris10-up9-zone:net> set address=192.168.0.1zonecfg:solaris10-up9-zone:net> set physical=nxge0zonecfg:solaris10-up9-zone:net> endzonecfg:solaris10-up9-zone> verifyzonecfg:solaris10-up9-zone> commitzonecfg:solaris10-up9-zone> exit Installing the Zone on the target system using the imageInstall the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive.The following example shows how to create an Image and sys-unconfig the zone.Target # zoneadm -z solaris10-up9-zone install -u -a/var/tmp/solaris_10_up9.flarLog File: /var/tmp/solaris10-up9-zone.install_log.AJaGveInstalling: This may take several minutes...The following example shows how we can preserve system identity.Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar Resource management Some applications are sensitive to the number of CPUs on the target Zone. You need tomatch the number of CPUs on the Zone using the zonecfg command:zonecfg:solaris10-up9-zone>add dedicated-cpuzonecfg:solaris10-up9-zone> set ncpus=16DTrace integrationSome applications might need to be analyzing using DTrace on the target zone, you canadd DTrace support on the zone using the zonecfg command:zonecfg:solaris10-up9-zone>setlimitpriv="default,dtrace_proc,dtrace_user" Exclusive IP stack An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following example zonecfg:solaris10-up9-zone set ip-type=exclusivezonecfg:solaris10-up9-zone> add netzonecfg:solaris10-up9-zone> set physical=nxge0 When the installation completes, use the zoneadm list -i -v options to list the installedzones and verify the status.Target # zoneadm list -i -vSee that the new Zone status is installedID NAME STATUS PATH BRAND IP0 global running / native shared- solaris10-up9-zone installed /zones native sharedNow boot the ZoneTarget # zoneadm -z solaris10-up9-zone bootWe need to login into the Zone order to complete the zone set up or insert a sysidcfg file beforebooting the zone for the first time see example for sysidcfg file in Appendix B: sysidcfg filesectionTarget # zlogin -C solaris10-up9-zoneTroubleshootingIf an installation fails, review the log file. On success, the log file is in /var/log inside the zone. Onfailure, the log file is in /var/tmp in the global zone.If a zone installation is interrupted or fails, the zone is left in the incomplete state. 
Use uninstall -F to reset the zone to the configured state:

Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F

Conclusion
The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

Appendix A: archive identification section
We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:

Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z

Appendix B: sysidcfg file section
Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=primary {
hostname=solaris10-up9-zone
netmask=255.255.255.0
protocol_ipv6=no
default_route=192.168.0.1
}
name_service=NONE
nfs4_domain=dynamic

We need to copy this file before booting the zone:

Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/
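As a closing note, the interactive zonecfg session shown earlier can also be driven non-interactively from a command file, which is convenient when the same pre-configured image is rolled out to several target machines. The sketch below is an illustration rather than part of the original procedure; the command file name is arbitrary and its contents simply restate the configuration used above.

Target # cat /var/tmp/solaris10-up9-zone.cfg
create
set zonepath=/zones
set autoboot=true
add net
set address=192.168.0.1
set physical=nxge0
end
commit

Target # zonecfg -z solaris10-up9-zone -f /var/tmp/solaris10-up9-zone.cfg
Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar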

    Read the article

  • The Oracle EMEA Partner Event of the Year- FREE, LIVE & ONLINE!

    - by Claudia Costa
New products. New specializations. New opportunities. Find out how you can use them to build your Oracle business even faster and more effectively in 2010/11. The date for your diary is the 29th of June 2010, at 11:00 GMT. And this summer's event is bigger and better than ever. You will learn:
- What Oracle's acquisition of Sun Microsystems means for your business and your customers
- How Oracle Specialization can help you grow faster and smarter, and how Oracle partners from across the region are already benefitting
- Why Oracle's latest technology, applications, middleware and hardware products and solutions offer you unbeatable new business opportunities
- How Oracle's partner program is evolving to help partners succeed, with a live link to the Oracle FY11 Global Partner Kickoff
- How specialization has helped a former Microsoft executive become one of the world's most successful social entrepreneurs
You'll also have the chance to network with Oracle experts and other partners, and download valuable collateral from specially constructed virtual information booths. Plus, at the end of the event, submit your feedback form for the chance to win two passes to Oracle OpenWorld in San Francisco this September! Don't miss out! REGISTER TODAY! for this exciting, exclusive online event. Visit here for more information and to view the complete agenda. We look forward to welcoming you on the 29th of June! Yours sincerely, Stein Surlien, Senior Vice President, Alliances & Channels, Oracle EMEA. PS. The Oracle PartnerNetwork Days Virtual Event will be followed by "Oracle PartnerNetwork Days Executive Forums", and "Oracle PartnerNetwork Days Satellite Events" in various countries. Please look out for further communications from your local Oracle team.

    Read the article

  • Installing Java 6 on Ubuntu 10.04 fails on missing Java 6 JRE package

    - by David S
I'm trying to install Java 6 on Ubuntu 10.04 and it's been harder than it should be. In another question about installing Java on Ubuntu/Linux it said that I needed to do the following:

sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"

However, that failed and I kept getting:

sudo: add-apt-repository: command not found

The solution to this was to run:

sudo apt-get install python-software-properties

So, that seemed to work and the command above to "add-apt-repository" seems to complete with no errors. And I have run the following to confirm it got added:

sudo vi /etc/apt/sources.list

But now when I run the following:

sudo apt-get install sun-java6-jre

I get:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package sun-java6-jre is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package sun-java6-jre has no installation candidate

Where do I go from here?
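One step that is easy to miss, and a guess based on the output shown rather than anything confirmed in the question: apt only sees packages from a newly added repository after the package lists have been refreshed, so an apt-get update is normally needed between adding the partner repository and installing from it.

sudo apt-get update          # refresh package lists so apt consults the newly added partner repository
sudo apt-get install sun-java6-jre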

    Read the article

  • Actionscript - Dropping Multiple Objects Using an Array?

    - by Eratosthenes
I'm trying to get these fireBalls to drop more often, and I'm not sure if I'm using Math.random correctly. Also, for some reason I'm getting a null reference; I think the fireBalls array waits for one to leave the stage before dropping another one. This is the relevant code:

var sun:Sun = new Sun;
var fireBalls:Array = new Array();
var left:Boolean;

function onEnterFrame(event:Event){
    if (left) {
        sun.x = sun.x - 15;
    } else {
        sun.x = sun.x + 15;
    }
    if (fireBalls.length > 0 && fireBalls[0].y > stage.stageHeight){ // Fireballs exit stage
        removeChild(fireBalls[0]);
        fireBalls.shift();
    }
    for (var j:int = 0; j < fireBalls.length; j++){
        fireBalls[j].y = fireBalls[j].y + 15;
        if (fireBalls[j].y > stage.stageHeight - fireBall.width/2){
        }
    }
    if (Math.random() < .2){ // Fireballs shooting from Sun
        var fireBall:FireBall = new FireBall;
        fireBall.x = sun.x;
        addChild(fireBall);
        fireBalls.push(fireBall);
    }
}
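For reference, here is a sketch of one way this loop is often restructured; it is an illustration under the question's assumptions (FireBall and Sun are library symbols, and the function is registered as an ENTER_FRAME listener elsewhere), not the asker's code. Iterating backwards lets every off-stage fireball be removed in the same frame, not just fireBalls[0], which avoids dangling references, and since Math.random() returns a value in [0, 1), the comparison threshold directly sets the fraction of frames that spawn a fireball.

// assumes: addEventListener(Event.ENTER_FRAME, onEnterFrame); sun is already on the stage
function onEnterFrame(event:Event):void {
    // move the sun left or right
    sun.x += left ? -15 : 15;

    // walk the array backwards so splice() does not skip elements
    for (var j:int = fireBalls.length - 1; j >= 0; j--) {
        fireBalls[j].y += 15;
        if (fireBalls[j].y > stage.stageHeight) {
            removeChild(fireBalls[j]); // remove every fireball that has left the stage
            fireBalls.splice(j, 1);
        }
    }

    // spawn on roughly 40% of frames; raise or lower 0.4 to taste
    if (Math.random() < 0.4) {
        var fireBall:FireBall = new FireBall();
        fireBall.x = sun.x;
        fireBall.y = sun.y;
        addChild(fireBall);
        fireBalls.push(fireBall);
    }
}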

    Read the article

  • Apache Tomcat Server Error

    - by Sam....
I'm trying to install Tomcat but I get this error every time, whether it is the binary or the exe install:

SEVERE: Begin event threw exception
java.lang.ClassNotFoundException: org.apache.catalina.core.AprLifecycleListener
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at org.apache.commons.digester.ObjectCreateRule.begin(ObjectCreateRule.java:204)
    at org.apache.commons.digester.Rule.begin(Rule.java:152)
    at org.apache.commons.digester.Digester.startElement(Digester.java:1286)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
    at org.apache.commons.digester.Digester.parse(Digester.java:1572)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:451)
    at org.apache.catalina.startup.Catalina.execute(Catalina.java:402)
    at org.apache.catalina.startup.Catalina.process(Catalina.java:180)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:202)

Can anyone please solve this... need an urgent reply.
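This is only a guess, since the question does not say which Tomcat version or installation layout is in use: the class that fails to load is normally packaged inside catalina.jar, so a first sanity check is to confirm that the jar is present in the installation the startup script actually points at. On a Unix-like shell that check might look like the sketch below (on Windows it amounts to verifying the same files exist under the install directory); the paths cover both the older server/lib layout and the newer lib layout.

echo $CATALINA_HOME
ls "$CATALINA_HOME/lib/catalina.jar" "$CATALINA_HOME/server/lib/catalina.jar" 2>/dev/null
# the listener that fails to load should be packaged inside catalina.jar
unzip -l "$CATALINA_HOME/lib/catalina.jar" 2>/dev/null | grep AprLifecycleListener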

    Read the article

  • Iphone Receiving Errors in Organizer Console

    - by user192124
When attempting to use the app I have developed I am receiving the following errors:

Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p3b): unknown register number 59 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p3c): unknown register number 60 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p3d): unknown register number 61 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p3e): unknown register number 62 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p3f): unknown register number 63 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p40): unknown register number 64 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p41): unknown register number 65 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p42): unknown register number 66 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p43): unknown register number 67 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p44): unknown register number 68 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p45): unknown register number 69 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p46): unknown register number 70 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p47): unknown register number 71 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p48): unknown register number 72 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p49): unknown register number 73 requested
Sun Oct 18 17:49:38 unknown com.apple.debugserver-43[316] <Error>: error: RNBRemote::HandlePacket_p(p4a): unknown register number 74 requested

Unfortunately I am not finding anything on google about RNBRemote or HandlePacket_p messages. Has anyone received anything like this before and what could be causing it? It crashes the app. Thank You

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >