
  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware and learning a few JDK-related tricks in the process, visit John's site for the full article, available here. All the best, Mark

    Read the article

  • What's new in Servlet 3.1? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease of use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets are written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below:
    - Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted, which can restrict the scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1 (a minimal sketch follows this list).
    - HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces, javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation of the Java API for WebSocket); a short sketch of a custom handler appears after the reference list below.
    - Security enhancements - Applying run-as security roles to #init and #destroy methods; protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener (you can listen for any session id changes using these); default security semantics for non-specified HTTP methods in <security-constraint>; clarified semantics when a parameter is specified in both the URI and the payload.
    - Miscellaneous - ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 also clears the state of calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect, which allows a URL to be specified without a scheme: instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. Clarification of HttpServletRequest.getPart and .getParts without multipart configuration. Clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application.
    A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback to [email protected].
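    To make the non-blocking I/O item above more concrete, here is a minimal sketch of a Servlet 3.1 ReadListener; the servlet name, URL pattern, and buffer handling are illustrative only, not taken from the spec or from TOTD #188:

        import java.io.IOException;
        import javax.servlet.AsyncContext;
        import javax.servlet.ReadListener;
        import javax.servlet.ServletInputStream;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet(urlPatterns = "/nonblocking", asyncSupported = true)
        public class NonBlockingReadServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, final HttpServletResponse resp)
                    throws IOException {
                // Non-blocking reads are only allowed once the request is in async mode.
                final AsyncContext ac = req.startAsync();
                final ServletInputStream in = req.getInputStream();
                in.setReadListener(new ReadListener() {
                    @Override
                    public void onDataAvailable() throws IOException {
                        byte[] buf = new byte[1024];
                        // Read only while data can be consumed without blocking.
                        while (in.isReady() && !in.isFinished()) {
                            int len = in.read(buf);
                            // ... process len bytes ...
                        }
                    }
                    @Override
                    public void onAllDataRead() throws IOException {
                        resp.getWriter().println("done");
                        ac.complete();
                    }
                    @Override
                    public void onError(Throwable t) {
                        ac.complete();
                    }
                });
            }
        }
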
    Here are some more references for you:
    - Servlet 3.1 Public Review Candidate Downloads
    - Servlet 3.1 PR Candidate Spec
    - Servlet 3.1 PR Candidate Javadocs
    - Servlet Specification Project
    - JSR Expert Group Discussion Archive
    - Java EE 7 Specification Status
    Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them? Here are some other Java EE 7 primers published so far:
    - Concurrency Utilities for Java EE (JSR 236)
    - Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189)
    - Non-blocking I/O using Servlet 3.1 (TOTD #188)
    - What's New in EJB 3.2?
    - JPA 2.1 Schema Generation (TOTD #187)
    - WebSocket Applications using Java (JSR 356)
    - Jersey 2 in GlassFish 4 (TOTD #182)
    - WebSocket and Java EE 7 (TOTD #181)
    - Java API for JSON Processing (JSR 353)
    - JMS 2.0 Early Draft (JSR 343)
    And of course, more on their way! Do you want to see any particular one first?
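    And as a sketch of the protocol upgrade mechanism described earlier, here is a minimal HttpUpgradeHandler plus the servlet that triggers the switch. The "echo" protocol token, class names, and the explicit 101 headers are illustrative assumptions; which side writes the 101 response can vary by container, and a production handler would use a ReadListener rather than doing work in init:

        // EchoUpgradeHandler.java -- owns the connection after the protocol switch
        import java.io.IOException;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.http.HttpUpgradeHandler;
        import javax.servlet.http.WebConnection;

        public class EchoUpgradeHandler implements HttpUpgradeHandler {
            public void init(WebConnection wc) {
                try {
                    ServletOutputStream out = wc.getOutputStream();
                    out.println("echo protocol ready");   // first bytes spoken in the new protocol
                    out.flush();
                    // A real handler would now register a ReadListener on wc.getInputStream()
                    // and drive the new protocol asynchronously (see TyrusHttpUpgradeHandler).
                } catch (IOException e) {
                    // connection failed during setup
                }
            }
            public void destroy() { }
        }

        // EchoUpgradeServlet.java -- decides to upgrade inside the service method
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.*;

        @WebServlet("/echo")
        public class EchoUpgradeServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                if ("echo".equals(req.getHeader("Upgrade"))) {
                    resp.setStatus(101);                      // Switching Protocols
                    resp.setHeader("Upgrade", "echo");
                    resp.setHeader("Connection", "Upgrade");
                    req.upgrade(EchoUpgradeHandler.class);    // container hands the connection to the handler
                } else {
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Upgrade: echo expected");
                }
            }
        }
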

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different from the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that such anomalies are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third-party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
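    As a small illustration of the first option, querying the NamedCache directly with an extractor-based filter; the cache name "orders" and the getStatus accessor are assumptions for the example, not taken from the post:

        import java.util.Set;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.filter.EqualsFilter;

        public class OpenOrderQuery {
            public static void main(String[] args) {
                // Query the logical entity set in the cache itself, so results are not
                // subject to the write-behind lag or read skew seen at the database.
                NamedCache orders = CacheFactory.getCache("orders");
                Set openOrders = orders.entrySet(new EqualsFilter("getStatus", "OPEN"));
                System.out.println("open orders: " + openOrders.size());
                CacheFactory.shutdown();
            }
        }
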

    Read the article

  • JCP Open EC Meeting on 30 September 2012

    - by heathervc
    The JCP program office and Executive Committee invite all Java Community members to attend the OPEN EC Meeting on Sunday, 30 September at the Clift Hotel in San Francisco. The meeting is adjacent to The Zone at JavaOne, but no JavaOne (or any other kind of) pass is required to attend. It is OPEN to all! Agenda topics include: JCP.Next status/overview of JSRs 355 and 358, improving communications between the EC and the community, Open Q&A, and reminders of JCP events at JavaOne and the annual awards. Any other suggestions? This meeting is for you. Let us know your questions at pmo at jcp.org, or bring them with you. Details below.
    JCP Public Executive Committee Face-to-Face Meeting
    Open to Executive Committee Members and the Java Developer Community
    Location: Clift Hotel, 495 Geary Street, San Francisco - Rita Room (downstairs from Lobby)
    Date and Time: 9/30/12, 2:00 PM - 3:30 PM
    See you there. Check out all of the JCP @ JavaOne events.

    Read the article

  • Nokia at JavaOne

    - by Tori Wieldt
    Nokia has long been a key partner for Java Mobile, and they continue investing significantly in Java technologies. Developers can learn more about Nokia's popular Asha phone and developer platform at JavaOne. In addition to interesting technical material, all Nokia sessions will include giveaways (hint: be engaged and ask questions!). Don't miss these great sessions:
    - CON4925 The Right Platform with the Right Technology for Huge Markets with Many Opportunities
    - CON11253 In-App Purchasing for Java ME Apps
    - BOF4747 Look Again: Java ME's New Horizons of User Experience, Service Model, and Internet Innovation
    - BOF12804 Reach the Next Billion with Engaging Apps: Nokia Asha Full Touch for Java ME Developers
    - CON6664 on Mobile Java, Asha, Full Touch, Maps APIs, LWUIT, new UI, new APIs and more
    - CON6494 Extreme Mobile Java Performance Tuning, User Experience, and Architecture
    - BOF6556 Mobile Java App Innovation in Nigeria

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • Yet another GlassFish 3.1.1 promoted build

    - by alexismp
    Promoted build #9 for GlassFish 3.1.1 is available from the usual location. This is the "soft code freeze" build, with only the "hard code freeze" build left before the release candidate. So if you have bugs you'd like to see fixed, voice your opinion *now*. As a quick reminder, GlassFish offers Web Profile or Full Platform distributions in ZIP or installer flavors (some more details in this blog post from last year, but still relevant). If you've installed previous promoted builds or simply have the "dev" repository defined, then the Update Center will simply update the existing installed bits. In addition to the earlier update on 3.1.1, it's probably safe to say that this version was carefully designed to be highly compatible with the previous 3.x versions, thus leaving you with little reason not to upgrade as soon as it comes out this summer.

    Read the article

  • Deduping your redundancies

    - by Joerg Moellenkamp
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hit with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. To its inner mechanism, a metadata block is no different from a normal data block, because there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. The point is just that for most implementations you have to activate it explicitly, whereas certain devices do it by default or by design and you don't know about it. However, I'm not perfectly sure about that last point: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, so rotating rust is used for the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group anyway. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. However, in the end Robin is correct: it's yet another point why protecting your data by creating redundancies, by dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • List all BPM Processes for a user

    - by kasriniv
    Hello, happy to start contributing to this blog. The title of the blog is probably deceptively simple and warrants an elaboration. Customized BPM workspaces/user interfaces are a fairly common requirement. One of our marquee customers in the online stock trading business envisioned this user interaction for their BPM application:
    1. User logs in to the internal portal
    2. User will have a list of roles which he is granted, as a drop-down list
    3. Once the user selects a role, a list of processes which the user is part of appears. The logged-in user can be part of any swimlane role of the process
    This can be a fairly common/reasonable user-UI interaction pattern. Items 1 and 2 are easily achievable, and hence the subject matter of this blog is the requirement in 3. Objective: given a username and a role, list all the BPM processes that the user is part of, in any swimlane of any process. Here is a quick overview of the major steps/logic in the code:
    - Initialize the workflow/BPM context as usual
    - Get a handle on InstanceQueryService (getInstanceQueryService), InstanceManagementService, ProcessMetadataService and ProcessModelService
    - List all processes for that BPM context (listProcessMetadataSummary) and get the roles granted to that user
    - For each of the processes [method getAccessibleProcesss(ProcessMetadataSummary, Set)], for each of the lanes in the process, check if the role granted to the user matches the roleName for that swimlane. If so, add it to the output.
    Notes: the usual caveats apply, including that the BPM APIs are subject to change. JDeveloper method introspection is a better friend than the API documentation :-)... (I am going to try to upload the source code, and if that doesn't work, will follow this blog up with the corresponding source code.) Hope this helps. Ack: Yogesh K, BPM Dev team.

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment the TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.
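    For contrast with the proposed TXPAUSE, here is the conventional software form of "wait until" mentioned above, written with a lock and condition variable; it is shown in Java rather than Pthreads, and the class and flag names are only illustrative:

        import java.util.concurrent.locks.Condition;
        import java.util.concurrent.locks.ReentrantLock;

        // Classic "wait until B has run" using a lock plus a condition variable,
        // the software pattern that a TXPAUSE-style instruction would replace
        // inside a hardware transaction.
        public class WaitUntil {
            private final ReentrantLock lock = new ReentrantLock();
            private final Condition done = lock.newCondition();
            private boolean bHasRun;

            // Thread running A blocks here until B has completed.
            public void awaitB() throws InterruptedException {
                lock.lock();
                try {
                    while (!bHasRun) {
                        done.await();          // releases the lock while waiting
                    }
                } finally {
                    lock.unlock();
                }
            }

            // Thread running B publishes its completion.
            public void signalBDone() {
                lock.lock();
                try {
                    bHasRun = true;
                    done.signalAll();
                } finally {
                    lock.unlock();
                }
            }
        }
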

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 2)

    - by Geertjan
    Now let's create the start of a CRUD application on the NetBeans Platform, using Hibernate and PostgreSQL to do so. Here's what I see in NetBeans IDE after setting things up as outlined yesterday: The NetBeans Platform CRUD Tutorial should get you up and started creating the NetBeans Platform application. Open the generated "persistence.xml" in Design mode and then switch the persistence library to Hibernate. Here's the application structure: The Hibernate module that you see above has this content: Here's the result: And here's the source code: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/NBPostgreSQL
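    For readers following along without the screenshots, this is roughly what the persistence unit can look like once the provider is switched to Hibernate and pointed at PostgreSQL; the unit name, database name, and credentials below are placeholders, not taken from the tutorial:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
          <persistence-unit name="SamplePU" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <properties>
              <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
              <property name="javax.persistence.jdbc.url" value="jdbc:postgresql://localhost:5432/sampledb"/>
              <property name="javax.persistence.jdbc.user" value="postgres"/>
              <property name="javax.persistence.jdbc.password" value="postgres"/>
              <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
              <property name="hibernate.hbm2ddl.auto" value="update"/>
            </properties>
          </persistence-unit>
        </persistence>
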

    Read the article

  • Hospital fined $1m for Patient Data Breach

    - by martin.abrahams
    As an illustration of the potential cost of accidental breaches, the US Dept of Health and Human Services recently fined a hospital $1m for losing documents relating to some of its patients. Allegedly, the documents were left on the subway by a hospital employee. For incidents in the UK, several local government bodies have been fined between £60k and £100k. Evidently, the watchdogs are taking an increasingly firm position.

    Read the article

  • Improved Maven Embedded GlassFish - deploy multiple apps

    - by alexismp
    Bhavani has some news over at java.net about the Maven Plugin for GlassFish and how it now supports the ability to deploy multiple applications. He also has a Tips, Tricks and Troubleshooting entry. Multiple deployments are done during the Maven pre-integration-test phase but with a goal-specific configuration for app, contextRoot, etc... The :run (all-in-one) execution also now supports admin and deploy goals. Note that these improvements will require a recent work-in-progress 4.0 version of GlassFish.
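    A rough sketch of what a multi-app setup can look like in a pom.xml, based only on the phase, goal, and parameter names mentioned above; the plugin coordinates and version are assumptions and should be checked against Bhavani's post and the plugin documentation:

        <!-- Plugin coordinates below are illustrative assumptions; verify against the plugin docs. -->
        <plugin>
            <groupId>org.glassfish.embedded</groupId>
            <artifactId>maven-embedded-glassfish-plugin</artifactId>
            <version>4.0</version>
            <executions>
                <execution>
                    <id>deploy-app-one</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>deploy</goal>
                    </goals>
                    <configuration>
                        <app>target/app-one.war</app>
                        <contextRoot>one</contextRoot>
                    </configuration>
                </execution>
                <execution>
                    <id>deploy-app-two</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>deploy</goal>
                    </goals>
                    <configuration>
                        <app>target/app-two.war</app>
                        <contextRoot>two</contextRoot>
                    </configuration>
                </execution>
            </executions>
        </plugin>
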

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used if the feature isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, Java's Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JAR's. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
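    A minimal Java sketch of the pattern described above; the class and package names are hypothetical. The optional feature is only class-loaded on first use, so omitting its JAR from a small embedded image costs nothing at runtime:

        public final class Features {

            // Returns the optional diagnostics feature if its classes are present,
            // or a no-op stand-in if the JAR was left out of the deployment.
            public static Runnable loadDiagnostics() {
                try {
                    // Class.forName defers loading until this call, so the feature's
                    // code never touches volatile memory unless it is actually requested.
                    Class<?> impl = Class.forName("com.example.diag.DiagnosticsFeature");
                    return (Runnable) impl.newInstance();
                } catch (ClassNotFoundException absent) {
                    return new Runnable() {
                        public void run() { /* feature not shipped: do nothing */ }
                    };
                } catch (InstantiationException | IllegalAccessException e) {
                    throw new IllegalStateException("Optional feature present but unusable", e);
                }
            }

            private Features() { }
        }
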

    Read the article

  • Making the Move to PeopleSoft Projects (ESA) 9.1

    The PeopleSoft Projects (ESA) 9.1 release is built to enable the successful selection and execution of projects. With distinct attention to both user and IT productivity, PeopleSoft Projects 9.1 unveils a new Web 2.0 user interface, facilities to enable rapid business process re-configuration, and tools to strengthen project governance. Join us to hear more about these topics and the industry-specific functionality that is compelling customers to make the move to PeopleSoft Projects 9.1.

    Read the article

  • Simple HTML5 Friendly Markup Sample

    - by Geertjan
    From a demo done by David Heffelfinger (who has a great Java EE 7 screencast series here), on HTML5 friendly markup.

    index.xhtml:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:jsf="http://xmlns.jcp.org/jsf">
            <title>Data Entry Page</title>
            <body>
                <form method="POST" jsf:id='form'>
                    <table>
                        <tr>
                            <td>Name:</td>
                            <td><input jsf:id='name' type="text" jsf:value="${person.name}" /></td>
                        </tr>
                        <tr>
                            <td>City</td>
                            <th><input jsf:id='city' type="text" jsf:value="${person.city}"/></th>
                        </tr>
                        <tr>
                            <td><input type="submit" value="Submit" jsf:action="confirmation" /></td>
                        </tr>
                    </table>
                </form>
            </body>
        </html>

    confirmation.xhtml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <title>Data Confirmation Page</title>
            </head>
            <body>
                <h1>#{person.name}</h1> from <h2>#{person.city}</h2>
            </body>
        </html>

    Person.java:

        package org.demo;

        import javax.enterprise.inject.Model;

        @Model
        public class Person {

            String name;
            String city;

            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }

            public String getCity() {
                return city;
            }

            public void setCity(String city) {
                this.city = city;
            }
        }

    Read the article

  • Adding an existing Control Domain to a Server Pool

    - by Owen Allen
    I got a question about LDoms: "Is it possible to move a Control Domain built through Ops Center with pre-existing LDoms into a server pool? If so, do I need to delete and recreate anything?" Yes, you can do this. You have to stop the LDom guests, and then you can add the CDom to a Server Pool. If the guests are using shared storage, you should be able to bring them up in the Server Pool. If the guests are not on shared storage, you can use the Migrate Storage option to bring their storage in.

    Read the article

  • Vattenfall Accelerates Projects and Cuts Costs with AutoVue Document Visualization

    Ringhals, a Swedish nuclear power plant, part of the Vattenfall Group, produces 20 percent of the country's electricity and is the largest power station in the Nordic region. Ringhals has standardized on AutoVue for most of their engineering and asset document visualization requirements throughout their plant maintenance, design and engineering operations. As a result, they have cut IT maintenance costs, increased productivity, and improved maintenance operations.

    Read the article

  • Petstore using Java EE 6 ? Almost!

    - by arungupta
    Antonio Goncalves, a Java Champion, JUG leader, and a well-known author, has started building a Petstore-like application using Java EE 6. The complete end-to-end sample application will build an eCommerce website and follows the Java EE 6 design principles of being simple and easy to use to its core. It uses several technologies from the platform such as JPA 2.0, CDI 1.0, Bean Validation 1.0, EJB Lite 3.1, JSF 2.0, and JAX-RS 1.1. The two goals of the project are:
    • use Java EE 6 and just Java EE 6: no external framework or dependency
    • make it simple: no complex business algorithm
    The application works with GlassFish and JBoss today and there are plans to add support for TomEE. Download the source code from github.com/agoncal/agoncal-application-petstore-ee6. And feel free to fork if you want to use a fancy toolkit as the front-end or show some nicer back-end integration. Some other sources of similar end-to-end applications are:
    • Java EE 6 Tutorial
    • Java EE 6 Galleria
    • Java EE 6 Hands-on Lab

    Read the article

  • JSR Updates and Inactive JSR ballots

    - by heathervc
    The following JSRs have posted updates in the last week:
    - JSR 331, Constraint Programming API, has posted a Maintenance Draft Review; this review closes 29 September.
    - JSR 352, Batch Applications for the Java Platform, has posted an Early Draft Review; this review closes 29 September.
    - JSR 353, Java API for JSON Processing, has posted an Early Draft Review; this review closes 7 October.
    Inactive JSRs: The following JSR proposals have been Inactive for at least two years and are currently on the EC ballot to be declared Dormant, following a period where the community was given an opportunity to express interest in their continued development:
    - JSR 50, Distributed Real-Time Specification
    - JSR 282, Real-Time Specification for Java (RTSJ) 1.1
    - JSR 307, Network Mobility and Mobile Data API
    - JSR 327, Dynamic Contents Delivery Service API for Java ME
    - JSR 328, Change Management API

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work. JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows:

        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();

    You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL));

    When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method:

        List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions();

    Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY));

    By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status:

        prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSPResponse has already been obtained, for example in a protocol that uses OCSP stapling.
    After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows:

        PKIXParameters params = new PKIXParameters(keystore);
        params.addCertPathChecker(prc);
        CertPathValidatorResult result = cpv.validate(path, params);

    Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html

    Read the article

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):
    - Specified that the SessionContext object in a singleton session bean is thread-safe
    - Clarified that the EJB timers distribution and failover rules apply only to persistent timers
    - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
    - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
    - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
    - Fixes for EJB_SPEC-61 (Ambiguity in EJB lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
    - JMS/MDB changes: added new standard activation properties and the unique identifier, and rearranged sections for easier navigation
    - Fixed unresolved cross-refs
    - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
    - Synchronized permissions in the Table with the permissions listed for the EJB Components in the Java EE Platform Specification Table EE.6-2
    - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
    - Updated most of the referenced documents to their latest versions
    Happy reading!

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html
    MySQL Workbench 5.2 GA:
    • Data Modeling
    • Query (replaces the old MySQL Query Browser)
    • Administration (replaces the old MySQL Administrator)
    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux. http://dev.mysql.com/downloads/workbench/
    Workbench Documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities Documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html
    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially in Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it in http://wb.mysql.com/workbench/doc/
    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html
    If you need any additional info or help please get in touch with us. Post in our forums or leave comments on our blog pages.
    - The MySQL Workbench Team

    Read the article

  • Silicon Valley Code Camp 2012 - Submit Your Talks

    - by arungupta
    Silicon Valley Code Camp follows three rules:
    - Given by/for the community
    - Always free
    - Never occurs during work hours
    I've spoken there in 2011, 2010, 2009, 2008, and 2007, have again submitted a talk this year, and will submit more! It's one of the best organically grown code camps, with the attendance constantly growing over the past 6 years. Here is a chart that shows the number of attendees who registered and attended, and the number of sessions delivered, over the past 6 years. If you wonder why there is such a big gap between "registered" and "attended", that's because this event is FREE! Yes, 100% free. If you are in and around Silicon Valley then you have no reason not to participate/speak at SVCC. You have the opportunity to meet all the local JUG leaders and the community "rockstars" :-)
    Date: Oct 6/7, 2012
    Venue: Foothill College, 12345 El Monte Road, Los Altos Hills, CA
    Submit today or register!

    Read the article
