Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) places significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers: in a cluster with 500 members, each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
    Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly: even with a constant request rate, the underlying workload increases at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
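    To make the arithmetic above concrete, here is a tiny back-of-envelope calculation in Java (the 70,000 packets/second figure is the illustrative number from above, not a measured constant):

        final class WkaBudget {
            // Upper bound on cluster-wide messages per second for one sender:
            // the NIC's packet budget, divided across (members - 1) peers,
            // shared by all JVMs on the same physical machine.
            static double clusterWideMsgsPerSec(int nicPktsPerSec, int members, int jvmsPerMachine) {
                return (double) nicPktsPerSec / (members - 1) / jvmsPerMachine;
            }

            public static void main(String[] args) {
                System.out.println(clusterWideMsgsPerSec(70_000, 500, 1));  // ~140
                System.out.println(clusterWideMsgsPerSec(70_000, 500, 10)); // ~14
            }
        }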

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up, they realized that they needed to create a web application per managed server. They did not want this management burden, but would like to have one web application deployed to multiple nodes. The reason a unique application per node is needed is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that properties defined through the PropertyPlaceholderConfigurer can be externalized.

    To enable this externalization, change host.properties so it is generic and thus can be reused for all the managed servers:

        host.name=${cluster.node.id}

    Next, change the startup scripts for the managed servers and add a system property per node:

        -Dcluster.node.id=<something unique and stable>

    Voilà, the postfix is externalized and the web application can be shared amongst the cluster nodes.

    Read the article

  • Adding an existing Control Domain to a Server Pool

    - by Owen Allen
    I got a question about LDoms: "Is it possible to move a Control Domain built through Ops Center with pre-existing LDoms into a server pool? If so, do I need to delete and recreate anything?" Yes, you can do this. You have to stop the LDom guests, and then you can add the CDom to a Server Pool. If the guests are using shared storage, you should be able to bring them up in the Server Pool. If the guests are not on shared storage, you can use the Migrate Storage option to bring their storage in.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely identical content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and the device does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies: three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block; to its inner mechanism, a metadata block is no different from a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is that for most implementations you have to activate it explicitly by command, whereas certain devices do it by default or by design and you don't know about it. And I'm not perfectly sure even about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before it hits the non-volatile storage: you've won nothing. You've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than one. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, this isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool; but as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. In the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancy, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • HTML, JavaScript, and CSS in a NetBeans Platform Application

    - by Geertjan
    I broke down the code I used yesterday, to its absolute bare minimum, and then realized I'm not using HTML 5 at all:

        <html>
          <head>
            <link rel="stylesheet" href="style.css" type="text/css" media="all" />
            <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js"></script>
            <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>
            <script type="text/javascript" src="script.js"></script>
          </head>
          <body>
            <div id="logo"> </div>
            <div id="infobox">
              <h2 id="statustext"/>
            </div>
          </body>
        </html>

    Here's the script.js file referred to above:

        $(function(){
            var banana = $("#logo");
            var statustext = $("#statustext");
            var defaulttxt = "Drag the banana!";
            var dragtxt = "Dragging the banana!";
            statustext.text(defaulttxt);
            banana.draggable({
                drag: function(event, ui){
                    statustext.text(dragtxt);
                },
                stop: function(event, ui){
                    statustext.text(defaulttxt);
                }
            });
        });

    And here's the stylesheet:

        body {
            background:#3B4D61 repeat 0 0;
            margin:0;
            padding:0;
        }
        h2 {
            color:#D1D8DF;
            display:block;
            font:bold 15px/10px Tahoma, Helvetica, Arial, Sans-Serif;
            text-align:center;
        }
        #infobox {
            position:absolute;
            width:300px;
            bottom:20px;
            left:50%;
            margin-left:-150px;
            padding:0 20px;
            background:rgba(0,0,0,0.5);
            -webkit-border-radius:15px;
            -moz-border-radius:15px;
            border-radius:15px;
            z-index:999;
        }
        #logo {
            position:absolute;
            width:450px;
            height:150px;
            top:40%;
            left: 30%;
            background:url(bananas.png) no-repeat 0 0;
            cursor:move;
            z-index:700;
        }

    However, I've replaced the content of the HTML file with a few of the samples from here, without any problem; in other words, if the HTML 5 canvas were to be needed, it could seamlessly be incorporated into my NetBeans Platform application: https://developer.mozilla.org/en/Canvas_tutorial/Basic_usage

    Read the article

  • MultiSelectChoice: How to get underlying values selected

    - by Vijay Mohan
    Let's say you include a multiselectchoice component in your jspx/jsff page, with <f:selectItem> or <af:forEach> bound to a VO iterator to populate the multiselectchoice, and its value property bound to a List attribute binding. When the user selects some items in that choice list, you want the actual values to be posted. You can set the value-passthrough flag to true, but many times it doesn't help and you end up getting the indexes of the multiselect values. Here is a way to get the actual values. Associate a valueChangeListener with the multiselectchoice and, in the bean, add a utility method, as follows:

        public void onValueChangeOfLOV(ValueChangeEvent valueChangeEvent) {
            // get the array of indexes of the selected items in the master list
            List valueIndexes = (List) valueChangeEvent.getNewValue();
            String concatCodes = returnSelectmanyChoiceValues(valueIndexes,
                    "YourIterator", "YourAttribute");
        }

        public String returnSelectmanyChoiceValues(List valueIndexes,
                String iterName, String idAttrName) {
            DCBindingContainer dc = (DCBindingContainer)
                    BindingContext.getCurrent().getCurrentBindingsEntry();
            DCIteratorBinding iter = dc.findIteratorBinding(iterName);
            ViewObject vo = iter.getViewObject();
            String codes = "";
            for (Object index : valueIndexes) {
                String iIndex = (String) index;
                // look up the selected row by its range index and read the id attribute
                Row row = vo.getRowAtRangeIndex(Integer.parseInt(iIndex));
                codes = codes + (String) row.getAttribute(idAttrName) + ",";
            }
            // remove the trailing ","
            if (codes.endsWith(","))
                codes = codes.substring(0, codes.lastIndexOf(","));
            return codes;
        }

    This will return a comma-separated string of the values of the selected items. If you want, you can then store it in a List.

    Read the article

  • Yet another GlassFish 3.1.1 promoted build

    - by alexismp
    Promoted build #9 for GlassFish 3.1.1 is available from the usual location. This is the "soft code freeze" build, with only the "hard code freeze" build left before the release candidate. So if you have bugs you'd like to see fixed, voice your opinion *now*. As a quick reminder, GlassFish offers Web Profile or Full Platform distributions in ZIP or installer flavors (some more details in this blog post from last year, but still relevant). If you've installed previous promoted builds or simply have the "dev" repository defined, then the Update Center will simply update the existing installed bits. In addition to the earlier update on 3.1.1, it's probably safe to say that this version was carefully designed to be highly compatible with the previous 3.x versions, thus leaving you with little reason not to upgrade as soon as it comes out this summer.

    Read the article

  • Inactive JSRs looking for Spec Leads

    - by heathervc
    You may have noticed that some JSRs have a classification of "Inactive" on their JSR page. The introduction of this term in 2009 was part of an effort to enable and encourage more transparency into the development of JSRs. You can read more about Inactive JSRs here and also in the JCP FAQ.

    The following JSR proposals have been Inactive since at least 2009. If you are a JCP Member and are interested in taking over the Specification Lead role for one of these JSRs, please contact the PMO at [email protected] on or before 23 April 2012. With that message, please include the following:

    - the subject line "Spec Lead for JSR ###," where '###' is the JSR number
    - which JCP Member you represent
    - why you wish to take over the Specification Lead role

    Here is the current list of Inactive JSRs for which Members can request to become Specification Leads:

    - JSR 122, JAIN JCAT
    - JSR 161, JAIN ENUM API Specification
    - JSR 182, JPay - Payment API for the Java Platform
    - JSR 210, OSS Service Quality Management API
    - JSR 241, The Groovy Programming Language
    - JSR 251, Pricing API
    - JSR 278, Resource Management API for Java ME
    - JSR 304, Mobile Telephony API v2
    - JSR 305, Annotations for Software Defect Detection
    - JSR 320, Services Framework

    Read the article

  • List all BPM Processes for a user

    - by kasriniv
    Hello, happy to start contributing to this blog. The title of the post is probably deceptively simple and warrants an elaboration. Customized BPM workspaces/user interfaces are a fairly common requirement. One of our marquee customers in the online stock trading business envisioned this user interaction for their BPM application:

    1. User logs in to the internal portal.
    2. User will have the list of roles which he is granted as a drop-down list.
    3. Once the user selects a role, a list of processes which the user is part of appears. The logged-in user can be part of any swimlane role of the process.

    This is a fairly common/reasonable user-UI interaction pattern. Items 1 and 2 are easily achievable, hence the subject matter of this blog is the requirement in 3.

    Objective: Given a username and a role, list all the BPM processes that the user is part of, in any swimlane of any process. Here is a quick overview of the major steps/logic in the code (a sketch follows below):

    - Initialize the workflow/BPM context as usual.
    - Get a handle on the InstanceQueryService (getInstanceQueryService), InstanceManagementService, ProcessMetadataService and ProcessModelService.
    - List all processes for that BPM context (listProcessMetadataSumary) and get the roles granted to that user.
    - For each of the processes [method getAccessibleProcesss(ProcessMetadataSummary, Set)], for each of the lanes in the process, check if the role granted to the user matches the roleName for that swimlane. If so, add it to the output.

    Notes: The usual caveats apply, including that the BPM APIs are subject to change. JDeveloper method introspection is a better friend than the API documentation :-)... (I am going to try to upload the source code, and if that doesn't work, will follow this blog up with the corresponding source code.) Hope this helps. Ack: Yogesh K, BPM dev team.
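    Pending the source upload, here is a hypothetical, self-contained sketch of the lane-matching logic in the last step; Lane and ProcessSummary are illustrative stand-ins rather than the actual BPM API types, and only the matching logic itself is the point:

        import java.util.*;

        final class LaneMatchSketch {
            static final class Lane {
                final String roleName;
                Lane(String roleName) { this.roleName = roleName; }
            }
            static final class ProcessSummary {
                final String name;
                final List<Lane> lanes;
                ProcessSummary(String name, List<Lane> lanes) { this.name = name; this.lanes = lanes; }
            }

            // Returns the names of all processes with at least one swimlane
            // whose role is among the roles granted to the user.
            static List<String> accessibleProcesses(List<ProcessSummary> processes, Set<String> grantedRoles) {
                List<String> result = new ArrayList<String>();
                for (ProcessSummary p : processes) {   // from the listProcessMetadataSumary call above
                    for (Lane lane : p.lanes) {        // swimlanes of the process model
                        if (grantedRoles.contains(lane.roleName)) {
                            result.add(p.name);        // one matching lane is enough
                            break;
                        }
                    }
                }
                return result;
            }
        }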

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but a feature's code isn't loaded unless the feature is actually used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, the Java Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer, and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs, as sketched below. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
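    In Java code, this idea often shows up as guarded, reflective loading. A minimal sketch, where the feature class name is illustrative and Runnable merely stands in for whatever interface the optional feature implements:

        // Resolve the optional feature reflectively, so its JAR can simply
        // be absent from the deployment without breaking anything.
        final class OptionalFeature {
            static Runnable loadMetricsFeature() {
                try {
                    Class<?> cls = Class.forName("com.example.metrics.MetricsFeature");
                    return (Runnable) cls.getDeclaredConstructor().newInstance();
                } catch (ClassNotFoundException absent) {
                    return null; // feature JAR not shipped/loaded: carry on without it
                } catch (ReflectiveOperationException broken) {
                    throw new IllegalStateException("feature present but unusable", broken);
                }
            }
        }

    The class is loaded, and its JAR parsed, only on the first call; callers that never ask for the feature pay nothing in volatile memory.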

    Read the article

  • JSR Updates and Inactive JSR ballots

    - by heathervc
    The following JSRs have posted updates in the last week:

    - JSR 331, Constraint Programming API, has posted a Maintenance Draft Review; this review closes 29 September.
    - JSR 352, Batch Applications for the Java Platform, has posted an Early Draft Review; this review closes 29 September.
    - JSR 353, Java API for JSON Processing, has posted an Early Draft Review; this review closes 7 October.

    Inactive JSRs: The following JSR proposals have been Inactive for at least two years and are currently on the EC ballot to be declared Dormant, following a period where the community was given an opportunity to express interest in their continued development:

    - JSR 50, Distributed Real-Time Specification
    - JSR 282, Real-Time Specification for Java (RTSJ) 1.1
    - JSR 307, Network Mobility and Mobile Data API
    - JSR 327, Dynamic Contents Delivery Service API for Java ME
    - JSR 328, Change Management API

    Read the article

  • Improved Maven Embedded GlassFish - deploy multiple apps

    - by alexismp
    Bhavani has some news over at java.net about the Maven Plugin for GlassFish and how it now supports deploying multiple applications. He also has a Tips, Tricks and Troubleshooting entry. Multiple deployments are done during the Maven pre-integration-test phase, but with a goal-specific configuration for app, contextRoot, etc. The :run (all-in-one) execution also now supports the admin and deploy goals. Note that these improvements will require a recent work-in-progress 4.0 version of GlassFish.

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation-checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work.

    JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows:

        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();

    You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows (using java.util.EnumSet to build the option set):

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL));

    When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method:

        List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions();

    Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example:

        prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY));

    By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status:

        prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK));

    There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSP response has already been obtained, for example in a protocol that uses OCSP stapling.
    After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows:

        PKIXParameters params = new PKIXParameters(keystore);
        params.addCertPathChecker(prc);
        CertPathValidatorResult result = cpv.validate(path, params);

    Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html
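    For reference, the fragments above assemble into the following compile-level sketch. The keystore name, password, and the empty certificate list are placeholders so the sketch compiles; a real caller supplies its own trust store and certificate chain:

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.cert.*;
        import java.util.Collections;
        import java.util.EnumSet;

        public class RevocationCheckSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder trust store; substitute your own path and password.
                KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                try (FileInputStream in = new FileInputStream("trusted.jks")) {
                    ks.load(in, "changeit".toCharArray());
                }
                // Placeholder (empty) chain; end-entity certificate comes first.
                CertPath path = CertificateFactory.getInstance("X.509")
                        .generateCertPath(Collections.<Certificate>emptyList());

                CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
                PKIXRevocationChecker prc = (PKIXRevocationChecker) cpv.getRevocationChecker();
                prc.setOptions(EnumSet.of(PKIXRevocationChecker.Option.SOFT_FAIL,
                                          PKIXRevocationChecker.Option.ONLY_END_ENTITY));

                PKIXParameters params = new PKIXParameters(ks);
                params.addCertPathChecker(prc);
                CertPathValidatorResult result = cpv.validate(path, params);
                System.out.println(result);
            }
        }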

    Read the article

  • Vattenfall Accelerates Projects and Cuts Costs with AutoVue Document Visualization

    Ringhals, a Swedish nuclear power plant, part of the Vattenfall Group, produces 20 percent of the country's electricity and is the largest power station in the Nordic region. Ringhals has standardized on AutoVue for most of their engineering and asset document visualization requirements throughout their plant maintenance, design and engineering operations. As a result, they have cut IT maintenance costs, increased productivity, and improved maintenance operations.

    Read the article

  • Java Embedded @ JavaOne Toolkit

    - by Tori Wieldt
    Java Embedded @ JavaOne provides business decision makers, technical leaders, and ecosystem partners information about Java Embedded technologies and new business opportunities. From the enterprise business world to the consumer arena, smart meters, automated buildings, and context-aware medical devices can provide information that drives value for businesses and consumers. Java Embedded @ JavaOne will be held Wednesday, Oct. 3rd and Thursday, Oct. 4th in San Francisco at the Hotel Nikko (during JavaOne). If you have already registered, you can use the Java Embedded @ JavaOne Toolkit to let people know you are attending, to enhance your blog, and to generate awareness, enthusiasm, and participation. There are banners and buttons, a list of High-Level Benefits of Attending Java Embedded @ JavaOne, Sample E-Mail Copy, and more. There is also a Toolkit for Partners, Sponsors and Exhibitors. Check out the Java Embedded @ JavaOne Toolkits!

    Read the article

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):

    - Specified that the SessionContext object in a singleton session bean is thread-safe
    - Clarified that the EJB timer distribution and failover rules apply only to persistent timers
    - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
    - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
    - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
    - Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
    - JMS/MDB changes: added the new standard activation properties and the unique identifier, and rearranged sections for easier navigation
    - Fixed unresolved cross-references
    - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
    - Synchronized permissions in the table with the permissions listed for EJB components in the Java EE Platform Specification, Table EE.6-2
    - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
    - Updated most of the referenced documents to their latest versions

    Happy reading!

    Read the article

  • Fosfor EPUB Reader

    - by Geertjan
    Instead of creating a full-blown NetBeans Platform application for doing WYSIWYG editing for EPUB, similar to Sigil, I decided to focus purely on the very narrow scope of EPUB reading. Since the scope is narrower, and the application will be a lot less ambitious and smaller, a pure JavaFX implementation makes sense. When you somehow get, e.g., buy, an EPUB file, you typically read it on a tablet or mobile device. However, some people in the world, e.g., me, still have laptops. Therefore, I'm creating a small JavaFX application that unzips EPUB files into a temp directory and then loads them into a JavaFX WebView, as sketched below. [Screenshot: Arabic support.]

    For an application like this, simplicity is the most important thing. Very few buttons, very few options, preferably no configuration of anything. Just let the user open the EPUB file and read it, that's it, nothing fancy. CSS stylesheets and images are correctly read. It's exactly what it looks like: a reader for EPUB files. The back and forward buttons are working, and you can also switch to the table of contents. When it is complete, which it pretty much is right now, publishers of EPUB files can make this small app available from their site, to simplify life for their readers, since it will run easily and well on all operating systems.
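    The core of that approach fits in a few lines. A minimal sketch, assuming the book's first page lives at OEBPS/chapter1.xhtml (a real reader would read the OPF spine to find it), to be called on the JavaFX Application Thread:

        import java.io.File;
        import java.nio.file.*;
        import java.util.Enumeration;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;
        import javafx.scene.web.WebView;

        public class EpubLoadSketch {
            static WebView open(File epub) throws Exception {
                // An EPUB is a ZIP: extract it into a temp directory.
                Path tmp = Files.createTempDirectory("epub");
                try (ZipFile zip = new ZipFile(epub)) {
                    Enumeration<? extends ZipEntry> entries = zip.entries();
                    while (entries.hasMoreElements()) {
                        ZipEntry e = entries.nextElement();
                        Path out = tmp.resolve(e.getName()).normalize();
                        if (!out.startsWith(tmp)) continue; // skip zip-slip paths
                        if (e.isDirectory()) {
                            Files.createDirectories(out);
                        } else {
                            Files.createDirectories(out.getParent());
                            Files.copy(zip.getInputStream(e), out);
                        }
                    }
                }
                // Point the WebView at a content document; relative CSS and
                // image references resolve against the extracted files.
                WebView view = new WebView();
                view.getEngine().load(tmp.resolve("OEBPS/chapter1.xhtml").toUri().toString());
                return view;
            }
        }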

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
    For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.

    Read the article

  • Essbase Analytics Link (EAL) - Performance of some operation of EAL could be improved by tuning of EAL Data Synchronization Server (DSS) parameters

    - by Ahmed Awan
    Generally, the performance of some EAL (Essbase Analytics Link) operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters.

    a. It is expected that the DSS machine will be a 64-bit machine with 4-8 cores and 5-8 GB of RAM dedicated to DSS.

    b. To change the DSS configuration, open the EAL Configuration Tool on the DSS machine and define:

    - "Job Units" as <number of cores dedicated to DSS> * 1.5 (e.g., 8 cores gives 12 job units).
    - "Max Memory Size" (if this is a 64-bit machine) as ~1 GB for each job unit (so 12 job units give ~12 GB). If the DSS machine is 32-bit, the max memory size is 2600 MB.
    - "Data Store Size" depends on the number of bridges and the volume of the HFM applications, but in most cases 50000 MB is enough. This volume should be available on the drive defined as "Data Store Dir".

    Continue with the configuration and finish it. After that, DSS should be restarted to pick up the new definitions.

    Read the article

  • JEditorPane on Steroids with Nashorn

    - by Geertjan
    Continuing from Embedded Nashorn in JEditorPane, here is the same JEditorPane on steroids with Nashorn, in the context of some kind of CMS backend system: Above, you see heavy reuse of the NetBeans IDE editor infrastructure. Parts of it are there with thanks to Steven Yi, who has done some great research in this area. Code completion, right-click popup menu, line numbering, editor toolbar, find/replace features, block selection, comment/uncomment features, etc.: all the rich editor features from NetBeans IDE are there, within a plain old JEditorPane. And everything is externally extensible, e.g., new actions can be registered by external modules into the right-click popup menu or the editor toolbar or the sidebar, etc. For example, here's code completion (Ctrl-Space): It even has the cool new feature where, if you select a closing brace and the opening brace isn't in the visible area, a rectangular popup appears at the top of the editor to show how the current piece of code begins. The only thing I am missing is code folding! I wish that would work too; still figuring it out. What's also cool is that this is a Maven project. The sources: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/CMSBackOffice2

    Read the article

  • Exiting a reboot loop

    - by user12617035
    If you're in a situation where the system is panic'ing during boot, you can use

        # boot net -s

    to regain control of your system. In my case, I'd added some diagnostic code to a (PCI) driver (that is used to boot the root filesystem). There was a bug in the driver, and each time during boot the bug occurred and so caused the system to panic:

        ...
        000000000180b950 genunix:vfs_mountroot+60 (800, 200, 0, 185d400, 1883000, 18aec00)
          %l0-3: 0000000000001770 0000000000000640 0000000001814000 00000000000008fc
          %l4-7: 0000000001833c00 00000000018b1000 0000000000000600 0000000000000200
        000000000180ba10 genunix:main+98 (18141a0, 1013800, 18362c0, 18ab800, 180e000, 1814000)
          %l0-3: 0000000070002000 0000000000000001 000000000180c000 000000000180e000
          %l4-7: 0000000000000001 0000000001074800 0000000000000060 0000000000000000
        skipping system dump - no dump device configured
        rebooting...

    If you're logged in via the console, you can send a BREAK sequence in order to gain control of the firmware's (OBP's) prompt. Enter Ctrl-Shift-[ in order to get the TELNET prompt. Once telnet has control, enter this:

        telnet> send brk

    You'll be presented with OBP's prompt:

        ok

    You then enter the following in order to boot into single-user mode via the network:

        ok boot net -s

    Note that booting from the network under Solaris will implicitly cause the system to be INSTALLED with whatever software had last been configured to be installed. However, we are using boot net -s as a "handle" with which to get at the Solaris prompt. Once at that prompt, we can perform actions as root that will let us back out our buggy driver (ok... MY buggy driver :-)) ...and replace it with the original, non-buggy driver. Entering the boot command caused the following output, as well as left us at the Solaris prompt (in single-user mode):

        Sun Blade 1500, No Keyboard
        Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
        OpenBoot 4.16.4, 1024 MB memory installed, Serial #53463393.
        Ethernet address 0:3:ba:2f:c9:61, Host ID: 832fc961.

        Rebooting with command: boot net -s
        Boot device: /pci@1f,700000/network@2  File and args: -s
        1000 Mbps FDX Link up
        Timeout waiting for ARP/RARP packet
        Timeout waiting for ARP/RARP packet
        4000 1000 Mbps FDX Link up
        Requesting Internet address for 0:3:ba:2f:c9:61
        SunOS Release 5.10 Version Generic_118833-17 64-bit
        Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
        Use is subject to license terms.
        Booting to milestone "milestone/single-user:default".
        Configuring devices.
        Using RPC Bootparams for network configuration information.
        Attempting to configure interface bge0...
        Configured interface bge0
        Requesting System Maintenance Mode
        SINGLE USER MODE
        #

    Our goal is to now move to the directory containing the buggy driver and replace it with the original driver (that we had saved away before ever loading our buggy driver! :-) However, since we booted from the network, the root filesystem ("/") is NOT mounted on one of our local disks. It is mounted on an NFS filesystem exported by our install server. To verify this, enter the following command:

        # mount | head -1
        / on my-server:/export/install/media/s10u2/solarisdvd.s10s_u2dvd/latest/Solaris_10/Tools/Boot remote/read/write/setuid/devices/dev=4ac0001 on Wed Dec 31 16:00:00 1969

    As a result, we have to create a temporary mount point and then mount the local disk onto that mount point:

        # mkdir /tmp/mnt
        # mount /dev/dsk/c0t0d0s0 /tmp/mnt

    Note that your system will not necessarily have had its root filesystem on "c0t0d0s0".
    This is something that you should also have recorded before you ever loaded your... er... "my" buggy driver! :-) One can find the local disk mounted under the root filesystem by entering:

        # df -k /
        Filesystem            kbytes    used   avail capacity  Mounted on
        /dev/dsk/c0t0d0s0    76703839 4035535 71901266     6%    /

    To continue with our example, we can now move to the directory of the buggy driver in order to replace it with the original driver. Note that /tmp/mnt is prefixed to the path of where we'd "normally" find the driver:

        # cd /tmp/mnt/platform/sun4u/kernel/drv/sparcv9
        # ls -l pci*
        -rw-r--r--   1 root     root      288504 Dec  6 15:38 pcisch
        -rw-r--r--   1 root     root      288504 Dec  6 15:38 pcisch.aar
        -rwxr-xr-x   1 root     sys       211616 Jun  8  2006 pcisch.orig
        # cp -p pcisch.orig pcisch

    We can now synchronize any in-memory filesystem data structures with those on disk... and then reboot. The system will then boot correctly... as expected:

        # sync;sync
        # reboot
        syncing file systems... done

        Sun Blade 1500, No Keyboard
        Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
        OpenBoot 4.16.4, 1024 MB memory installed, Serial #xxxxxxxx.
        Ethernet address 0:3:ba:2f:c9:61, Host ID: yyyyyyyy.

        Rebooting with command: boot
        Boot device: /pci@1e,600000/ide@d/disk@0,0:a  File and args:
        SunOS Release 5.10 Version Generic_118833-17 64-bit
        Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
        Use is subject to license terms.
        Hostname: my-host
        NIS domain name is my-campus.Central.Sun.COM

        my-host console login:

    ...so that's how it's done! Of course, the easier way is to never write a buggy driver... but... then... we all "have an eraser on the end of each of our pencils"... don't we? :-) "...thank you... and good night..."

    Read the article

  • Brain Teaser: How Did I Do This (Part 1: The Solution)

    - by Geertjan
    In Part 1: The Challenge, published this time last week, I introduced a "brain teaser". The brain teaser asks you to figure out how to allow images and other files to be meaningfully dropped onto a NetBeans Platform application, i.e., on the drop, something useful should happen with the dropped file: if the file is an image, the image should open in the IDE; if the file is a PDF document, the PDF viewer should open externally; if the file is a text file, it should open as text in the IDE, etc.

    Solution. And here is the solution: http://bits.netbeans.org/dev/javadoc/org-openide-windows/org/openide/windows/ExternalDropHandler.html

    When an implementation of the "ExternalDropHandler" class is available in the global Lookup, and an object is being dragged over some part of the main window, the window system may call the methods of this class to decide whether it can accept or reject the drag operation. And when the object is actually dropped, this class will be asked to handle the drop. OK, so go ahead and implement the above class and put it into the Lookup (a sketch follows below). Or... guess what? The NetBeans Platform has a default implementation of the above class, appropriately named "DefaultExternalDropHandler". Not only is this useful for learning how to implement the ExternalDropHandler class (i.e., by reading the source here): you can simply include the module that contains this class in your own NetBeans Platform application, and then your application will be able to receive external drag/drop events and do something meaningful with them, thanks to the DefaultExternalDropHandler. Do this:

    1. Open your NetBeans Platform application in NetBeans IDE.
    2. Right-click the application in the Projects window and choose Properties.
    3. In the Libraries tab, expand the "ide" cluster and select "User Utilities". (That's where "DefaultExternalDropHandler.java" is found and registered in the Lookup.)
    4. Click the "Resolve" button, if it appears, because some additional related modules need to be included, if they haven't been included yet.
    5. Again in the "ide" cluster in the Libraries tab, select "Image". That's the Image Editor.
    6. Click OK.
    7. Run the application.
    8. Drag an image or some other type of file into your application, from outside the application, and you'll see the application tries to handle the drop. If the file being dragged is an image, it will open in the Image Editor, which you included in the previous step of these instructions.

    Hurray, you're done. Without any programming at all, you've added a cool new feature to your application.
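    And if you ever outgrow the default implementation, your own handler is a small class registered in the global Lookup. A minimal sketch, with method signatures as in the Javadoc linked above and the actual file handling left as a stub:

        import java.awt.datatransfer.DataFlavor;
        import java.awt.dnd.DropTargetDragEvent;
        import java.awt.dnd.DropTargetDropEvent;
        import org.openide.util.lookup.ServiceProvider;
        import org.openide.windows.ExternalDropHandler;

        @ServiceProvider(service = ExternalDropHandler.class, position = 100)
        public class FileListDropHandler extends ExternalDropHandler {
            @Override public boolean canDrop(DropTargetDragEvent e) {
                return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
            }
            @Override public boolean canDrop(DropTargetDropEvent e) {
                return e.isDataFlavorSupported(DataFlavor.javaFileListFlavor);
            }
            @Override public boolean handleDrop(DropTargetDropEvent e) {
                // Open each dropped java.io.File here (e.g., find its
                // DataObject and call its OpenCookie). Returning false lets
                // the next handler, such as DefaultExternalDropHandler, try.
                return false;
            }
        }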

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    As you know, the Java EE 7 EG recently posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. Over 1,100 developers took time out of their busy lives to let their voices be heard! The results were just posted to the Java EE EG. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging, and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. In addition, many thanks to independent Java EE 7 expert group member Markus Eisele for blogging the results. You can read more details about the results here.

    Read the article

  • The Real Value Of Certification

    - by Brandye Barrington
    I read a quote recently by Rich Hein of CIO.com "Certifications are, like most things in life: The more you put into them, the more you will get out." This is what we tell candidates all the time. The real value in obtaining a certification is the time spent preparing for the exam. All the hours spent reading books, practicing in hands-on environments, asking questions and searching for answers is valuable. It's valuable preparation for the exam, but it's also valuable preparation for your future job role and for your career. If your goal is just to pass an exam, you've missed a very important part of the value of certification.We receive so many questions through different forms of social media on whether or not certification will help candidates get jobs or get better jobs. Surveys conducted by us and by independent entities all point to the job and salary benefits of certification. However, a key part of that equation is whether a candidate can actually perform successfully in a job role. If preparation time was used to practice and learn and master new skills rather than to memorize a brain dump, the candidate will probably perform successfully in their job role, and job opportunities and higher salary will likely follow. Candidates who do not show that initiative, will not likely reap the full benefits of certification.Keep this in mind as you approach your next certification exam. You are preparing for a career, not an exam. This may help you to be more appreciative of the long hours spent studying!

    Read the article
