Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • GeoToolkit Demo Embedded in an Application Framework via Maven

    - by Geertjan
    As a follow-on to yesterday's blog entry, here's the equivalent starter application for GeoToolkit (also known as Geotk) on the NetBeans Platform, which ends up looking like this: The above is a border.shp file I found online, while here's a USA states shape file rendered in the application: Note that the navigation bar is also included, though that could later be migrated into the menu bar of the NetBeans Platform. Download the Maven-based NetBeans Platform application with GeoToolkit integration here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem It was quite tricky getting this sample together: parts of it, especially the installer that creates the database, come from the Puzzle GIS project, the shape files come from online locations, and the JAI-related dependencies posed problems of their own. But it's definitely a starting point, and you now have the basic Maven structure needed for getting started with GeoToolkit in the context of all the services and components provided by the NetBeans Platform. Many thanks to Johann Sorel for his patience and help.

    Read the article

  • Coherence Webcast for Developers July 11

    - by jeckels
    Coming on July 11th, we look forward to having you join us for a special Coherence webcast - just for developers! Want to learn how you, the developer, can make applications Big Data and Fast Data ready? Want to be able to customize and manage your applications and services to provide real-time data and processing with ease? Then this webcast is for you. Coherence Live Webcast - Developers: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11, 10am Pacific Time >> Register now! << (of course, it's free) Join Brian Oliver of the Coherence team to see how you can create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences. We look forward to having you join us.

    Read the article

  • JavaOne 2012 Java Jungle Session!

    - by HecklerMark
    Well, it's official - the proposal I submitted to JavaOne 2012 was accepted! Pending management approval, I'll be leading the following session: Session ID: CON3519 Session Title: Building Hybrid Cloud Apps: Local Databases + The Cloud = Extreme Versatility If you've been struggling with ways to "move to the cloud" without losing the advantages you enjoy or require in your current environment, I hope you'll consider signing up for this session. Hope to see you there! Mark

    Read the article

  • NetBeans 7.2 RC1 is published

    - by Ondrej Brejla
    NetBeans 7.2 RC1 was published today. You can download it here. You can read about the PHP features added to the NetBeans 7.2 release here on the blog, but the main features added or improved are: Support for PHP 5.4; PHP editing: Fix Uses action, annotations support, editing of Neon and Apache Config files, and more; Support for Symfony2, Doctrine2, and ApiGen frameworks; FTP remote synchronization; Support for running PHP projects on Hudson. For more information, just look at the New and Noteworthy page for NetBeans 7.2. And, as always, you can help us test the build. Just try it, and if you find an issue or error, please report it. Thanks for your help.

    Read the article

  • About the K computer

    - by Joerg Moellenkamp
    Okay, after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side: Yes, the system is using a SPARC processor. And that is great news for a SPARC fan like me. It is using the SPARC VIIIfx processor from Fujitsu clocked at 2 GHz. No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K), but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000 is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way, this sounds really familiar to me; perhaps the people just took the open-sourced UltraSPARC-T2 design, because some of the parameters sound just too similar. However, it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes. No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of them; it simply doesn't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA machine to a shared-memory machine; they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster. Yes, it has a lot of oomph in Linpack; however, I assume a lot came from the extensions to the SPARCv9 standard. No, Linpack has no relevance for any commercial workload; Linpack is such a special load that even some HPC people are arguing that it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however, we are getting into spheres where SMP interconnects were a few years ago). Amdahl isn't hitting that hard when running Linpack. Yes, it's a good move to use SPARC. At some time in the last 10 years there was an interesting twist in perception: SPARC was considered the proprietary architecture and x86 the open architecture. However, it's vice versa: try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification and develop your own SPARC processor. Fujitsu has been doing this for a long time now. So they had their own processor, their own know-how. So why was SPARC a good choice? Well, essentially Fujitsu can do what they want with their core as it is their core, for example adding the extensions to the SPARCv9 instruction set; getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer. No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for doing this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel. No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me ... as my colleague Roland Rambau (he knows a lot about HPC) once said to me, it doesn't matter which OS it is, as long as it stays out of the way of the workload in HPC.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository. Symptoms: The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file and then using b2bimport to import it into the B2B repository. The commands are provided below:
      ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
      ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true
    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can go up to ~30 secs per operation. This can add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window. Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction. Results: Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import time for each entry is ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10. Summary: The following diagram summarizes the entire approach and process. Acknowledgements: The material posted here has been compiled with the help of the B2B Engineering and Product Management teams.
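    If the per-CPA import into the staging repository has to be repeated for a few hundred properties files, a small driver can loop over them and invoke the two documented ant targets. The sketch below is a hypothetical helper, not part of the note: the directory layout, file naming, and the assumption that ant and ant-b2b-util.xml are reachable from the working directory are mine.
      import java.io.File;
      import java.io.IOException;

      // Hypothetical helper: runs the two documented ant targets for every CPA
      // properties file in a directory against the (empty) staging repository.
      public class StagingCpaImporter {

          public static void main(String[] args) throws IOException, InterruptedException {
              File propDir = new File(args[0]);   // directory holding the *_cpa.properties files
              File[] propFiles = propDir.listFiles((dir, name) -> name.endsWith(".properties"));
              if (propFiles == null) {
                  return;
              }
              for (File prop : propFiles) {
                  // Step 1: create soa.zip from the CPA properties file
                  run("ant", "-f", "ant-b2b-util.xml", "b2bcpaimport",
                      "-Dpropfile=" + prop.getAbsolutePath(), "-Dstandard=true");
                  // Step 2: import the generated soa.zip into the staging repository
                  run("ant", "-f", "ant-b2b-util.xml", "b2bimport",
                      "-Dlocalfile=true", "-Dexportfile=" + new File("soa.zip").getAbsolutePath(),
                      "-Doverwrite=true");
              }
              // A full repository export taken now captures all partners in one file.
          }

          private static void run(String... command) throws IOException, InterruptedException {
              Process p = new ProcessBuilder(command).inheritIO().start();
              if (p.waitFor() != 0) {
                  throw new IllegalStateException("Command failed: " + String.join(" ", command));
              }
          }
      }
    Once the loop finishes, the single export file produced from the staging repository is the one imported into the loaded production repository.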

    Read the article

  • The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values

    - by user702295
    Do you know what these values are telling you?
      COUNT(*)  PREDICTION_STATUS  DO_FORE  DO_AGGRI  AGGRI_98  AGGRI_99  LEVEL_ID
      19854 99 1 1 1 1 3
      1077 99 0 1 1 1 0
      262691 99 1 1 -1
      56 99 0 1 1 1 2
      1 98 1 1 1 1 1
      99 0 1 1 1
      748796 1 1 1 4
      351633 1 1 1 1 1 2
      1877829 97 1 1 4
      840 99 1 1 1 1 27
      99 0 1 1 1 3
      1 97 1 1 -1
      66712 99 1 1 1 1 2
      53213 1 1 1 1 1 3
      2560 98 1 1 4
    Check out The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1). This customer is adding an additional processing burden, adding no value. The incoming data should be scrubbed to eliminate the overhead.

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow. Instead you can use page-level parameters defined for your page in the adfc-config.xml file. So below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view object level service method and how to invoke it from your page. P.S. You might wonder - why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute that you press the button to go to the next department, the param.amount value is gone. However, the viewScope value is still there as long as you stay on this page.
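    By way of illustration (my addition, not code from the post), the same copy into viewScope could also be done from a backing-bean method; the sketch below assumes the standard oracle.adf.share.ADFContext API and a request parameter named amount.
      import java.util.Map;
      import javax.faces.context.FacesContext;
      import oracle.adf.share.ADFContext;

      // Hypothetical backing-bean helper: copies the 'amount' request parameter
      // into viewScope so it survives subsequent submits of the same page.
      public class ParamToViewScope {

          public void rememberAmount() {
              Map<String, String> params = FacesContext.getCurrentInstance()
                      .getExternalContext().getRequestParameterMap();
              Map<String, Object> viewScope = ADFContext.getCurrent().getViewScope();
              String amount = params.get("amount");   // e.g. .../faces/page?amount=1000
              if (amount != null) {
                  viewScope.put("amount", amount);    // later referenced as #{viewScope.amount}
              }
          }
      }
    After this runs, #{viewScope.amount} keeps its value across partial submits on the same page, which is exactly the behaviour discussed above.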

    Read the article

  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input? To help with this I created a component that uses a Java filter to dump the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output:
      system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start --
      system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData ***
      system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults)
      system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8
      system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL
      system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0
      system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601
      system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC
      system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder
      system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document
      system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public
      system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2
      system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin
      system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  ***
      system/6    10.09 09:57:40.698    IdcServer-1    binder end --------------------------------------------
    See the readme included in the component for more details. You can download the component from here.
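    The component itself is not reproduced in this excerpt, but a minimal sketch of such a binder-dumping filter looks roughly like the following. It assumes the standard intradoc filter interface and the system trace section; the class name and registration details are placeholders, not the downloadable component.
      import intradoc.common.ExecutionContext;
      import intradoc.common.Report;
      import intradoc.common.ServiceException;
      import intradoc.data.DataBinder;
      import intradoc.data.DataException;
      import intradoc.data.Workspace;
      import intradoc.shared.FilterImplementor;

      // Minimal sketch of a filter that dumps the service DataBinder to the
      // 'system' trace section via toString(). Register it on a filter hook
      // (e.g. validateStandard) through the Filters result set of the component.
      public class DumpBinderFilter implements FilterImplementor {

          public int doFilter(Workspace ws, DataBinder binder, ExecutionContext ctx)
                  throws DataException, ServiceException {
              Report.trace("system", "binder start --\n" + binder.toString(), null);
              return CONTINUE;   // let the rest of the filter chain run
          }
      }
    Swapping the hook name in the component registration is what lets you watch the binder at different points in the service pipeline, as described above.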

    Read the article

  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these: http://platform.netbeans.org/tutorials/nbm-projecttype.html http://netbeans.dzone.com/how-create-maven-nb-project-type By combining the above two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional step of complexity is added when IntelliJ IDEA is included into the mix and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you James:  Note: Intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.

    Read the article

  • Java SE 8 (with JavaFX) Developer Preview Release for ARM

    - by Roger Brinkley
    In an effort to get ARM developers testing Java SE 8 before the scheduled release later this year, a Java SE 8 Developer Preview Release for ARM has been made available. This release has been tested on the Raspberry Pi but should work on other ARM platforms. In addition to the new Java SE features, this release provides specific support for the hard-float ABI on the Raspberry Pi, something that has been anticipated by a number of developers. Additionally, this release includes support for an optimized JavaFX. The specific configurations of JDK 8 on ARM are defined below: JavaFX is supported on ARM architecture v6/7 (hard float). Supported platforms without JavaFX: ARM architecture v6/7 (hard float), ARM architecture v7 (VFP, little endian), ARM architecture v5 (soft float, little endian), and Linux x86. The download page includes setup instructions for a Raspberry Pi device as well as demos and samples. Developers are also encouraged to try their own applications and to share their stories via the JavaFX or Project Feedback Forums. If you've got a Raspberry Pi or other ARM device, it's time to get started with the Java SE 8 Developer Preview release.

    Read the article

  • Amazon Kindle e-Ink based device programming: Java ME CDC old school

    - by hinkmond
    If you like doing Amazon Kindle development in the old-school way (Java ME CDC-based apps) on their e-Ink based readers, then here's how to download and use the Amazon Kindle Development Kit (KDK). See: Download Amazon KDK Here's a quote: We're excited to introduce the all-new Kindle family: Kindle, Kindle Touch, and [blah-blah]. The KDK has APIs, tools, and documentation to help you create active content for Kindle, Kindle Touch, and other E Ink Kindles. Kickin' it old school with Java ME CDC technology is the way to go. You can come up with the next Words with Friends this way. Hinkmond

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown:
      var url = 'http://' + document.location.host + '/glassfish-sse/simple';
      eventSource = new EventSource(url);
      eventSource.onmessage = function (event) {
          var theParagraph = document.createElement('p');
          theParagraph.innerHTML = event.data.toString();
          document.body.appendChild(theParagraph);
      }
    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'll parse JSON and do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:
      @ServerSentEvent("/simple")
      public class MySimpleHandler extends ServerSentEventHandler {
          public void sendMessage(String data) {
              try {
                  connection.sendMessage(data);
              } catch (IOException ex) {
                  . . .
              }
          }
      }
    And then events can be sent to this handler using an EJB session bean as shown:
      @Startup
      @Stateless
      public class SimpleEvent {
          @Inject @ServerSentEventContext("/simple")
          ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

          @Schedule(hour="*", minute="*", second="*/10")
          public void sendDate() {
              for(MySimpleHandler handler : simpleHandlers.getHandlers()) {
                  handler.sendMessage(new Date().toString());
              }
          }
      }
    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:
      Accept: text/event-stream
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Cache-Control: no-cache
      Connection: keep-alive
      Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
      Host: localhost:8080
      Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
      User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5
    And the response headers as:
      Content-Type: text/event-stream
      Date: Thu, 14 Jun 2012 21:16:10 GMT
      Server: GlassFish Server Open Source Edition 4.0
      Transfer-Encoding: chunked
      X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)
    Notice, the MIME type of the messages from server to client is text/event-stream, and that is defined by the specification.
    The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below:
      @Schedule(hour="*", minute="*", second="*/10")
      public void sendTweets() {
          for(MyTwitterHandler handler : twitterHandler.getHandlers()) {
              String result = twitter.search("glassfish", String.class);
              handler.sendMessage(result);
          }
      }
    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • NetBeans 7.3 Beta2 is Out!

    - by Ondrej Brejla
    NetBeans 7.3 Beta2 was published today. You can download it. You can read about the PHP features added to the NetBeans 7.3 release here on the blog, but the main features added or improved are: Parsers for Namespaced Annotations (Symfony 2, Doctrine 2, etc.), Basic Composer Integration (Dependency Manager for PHP), Twig Code Completion (with documentation), Smarty Braces Matching for Related Tags, and Smarty Parser Errors for Unmatched Tags. As always, you can help us test the build. Just try it, and if you find an issue or error, please report it. Thanks for your help.

    Read the article

  • Mixing JavaFX, HTML 5, and Bananas with the NetBeans Platform

    - by Geertjan
    The banana in the image below can be dragged. Whenever the banana is dropped, the current date is added to the viewer: What's interesting is that the banana, and the viewer that contains it, are defined in HTML 5, with the help of a JavaScript and CSS file. The HTML 5 file is embedded within the JavaFX browser, while the JavaFX browser is embedded within a NetBeans TopComponent class. The only really interesting thing is how drop events of the banana, which is defined within JavaScript, are communicated back into the Java class. Here's how: in the Java class, parse the HTML's DOM tree to locate the node of interest and then set a listener on it. (In this particular case, the event listener adds the current date to the InstanceContent which is in the Lookup.) Here's the crucial bit of code:
      WebView view = new WebView();
      view.setMinSize(widthDouble, heightDouble);
      view.setPrefSize(widthDouble, heightDouble);
      final WebEngine webengine = view.getEngine();
      URL url = getClass().getResource("home.html");
      webengine.load(url.toExternalForm());
      webengine.getLoadWorker().stateProperty().addListener(
          new ChangeListener() {
              @Override
              public void changed(ObservableValue ov, State oldState, State newState) {
                  if (newState == State.SUCCEEDED) {
                      Document document = (Document) webengine.executeScript("document");
                      EventTarget banana = (EventTarget) document.getElementById("banana");
                      banana.addEventListener("click", new MyEventListener(), true);
                  }
              }
          });
    It seems very weird to me that I need to specify "click" as a string. I actually wanted the drop event but couldn't figure out what the arbitrary string for that was. Which is exactly why strings suck in this context. Many thanks to Martin Kavuma from the Technical University of Eindhoven, whom I met today and who inspired me to go down this interesting trail.
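    The MyEventListener class isn't shown in the snippet above; a hedged sketch of what it could look like follows. Here it takes the InstanceContent through its constructor, whereas the snippet above constructs it with no arguments, so treat the wiring as an assumption of this sketch.
      import java.util.Date;
      import org.openide.util.lookup.InstanceContent;
      import org.w3c.dom.events.Event;
      import org.w3c.dom.events.EventListener;

      // Sketch of the DOM event listener: each event on the banana node pushes
      // the current date into the InstanceContent backing the TopComponent's Lookup.
      public class MyEventListener implements EventListener {

          private final InstanceContent content;

          public MyEventListener(InstanceContent content) {
              this.content = content;
          }

          @Override
          public void handleEvent(Event evt) {
              content.add(new Date());   // viewers listening on the Lookup pick this up
          }
      }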

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 1)

    - by Geertjan
    While setting up PostgreSQL from scratch, with the aim of using it in NetBeans IDE, I found the following resources helpful: http://railskey.wordpress.com/2012/05/19/postgresql-installation-in-ubuntu-12-04/ http://ohdevon.wordpress.com/2011/09/17/postgresql-to-netbeans-1/ http://ohdevon.wordpress.com/2011/09/19/postgresql-to-netbeans-2/ For quite a while I had problems relating to "/var/run/postgresql/.s.PGSQL.5432", which had something to do with "postmaster.pid", which I somehow solved via a link I can't find anymore, and which may not have been a problem to begin with. A key moment was this one, which was useful for setting the password of a new user I'd created: http://stackoverflow.com/questions/7695962/postgresql-password-authentication-failed-for-user-postgres This was useful for setting up a table in my database, which I did by pasting the below into NetBeans after I made the connection there: http://use-the-index-luke.com/sql/example-schema/postgresql/where-clause Now I have a database set up with all permissions correct everywhere (which turned out to be the hard part): The next step will be to create a NetBeans Platform application based on this database. I'm assuming it shouldn't be any different from what's described in the NetBeans Platform CRUD Tutorial.
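    As an aside of my own (not from the post), with the PostgreSQL JDBC driver on the classpath a few lines of plain JDBC make a quick sanity check that the new user and its permissions really work; the database name, user, and password below are placeholders.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      // Hypothetical connection test against the local PostgreSQL instance;
      // database name, user, and password stand in for your own setup.
      public class PgConnectionTest {

          public static void main(String[] args) throws Exception {
              String url = "jdbc:postgresql://localhost:5432/sample";
              try (Connection con = DriverManager.getConnection(url, "myuser", "mypassword");
                   Statement st = con.createStatement();
                   ResultSet rs = st.executeQuery("SELECT version()")) {
                  while (rs.next()) {
                      System.out.println(rs.getString(1));   // prints the server version string
                  }
              }
          }
      }
    If this prints the server version, the same URL and credentials should work in the IDE's database explorer and, later, in the CRUD application.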

    Read the article

  • Controlling the Sizing of the af:messages Dialog

    - by Duncan Mills
    Over the last day or so a small change in behaviour between 11.1.2.n releases of ADF and earlier versions has come to my attention. This concerns the default sizing of the dialog that the framework automatically generates to handle the display of JSF messages being handled by the <af:messages> component. Unlike a normal popup, you don't have a physical <af:dialog> or <af:window> to set the sizing on in your page definition, so you're at the mercy of what the framework provides. In this case the framework now defines a fixed 250x250 pixel content area dialog for these messages, which can look a bit weird if the message is either very short or very long. Unfortunately this is not something that you can control through the skin; instead you have to be a little more creative. Here's the solution I've come up with. Unfortunately, I've not found a supportable way to tell the dialog to just size itself based on its contents. It is actually possible to do this by tweaking the correct DOM objects, but I wanted to start with a mostly supportable solution that only uses the best practice of working through the ADF client-side APIs.
    The Technique: The basic approach I've taken is really very simple. The af:messages dialog is just a normal richDialog object; it just happens to be one that is pre-defined for you with a particular known name, "msgDlg" (which hopefully won't change). Knowing this, you can call the accepted APIs to control the content width and height of that dialog. As our meerkat friends would say, "simples"1.
    The JavaScript: For this example I've defined three JavaScript functions. The first does all the hard work and is designed to be called from server-side Java or from a page load event to set the default. The second is a utility function used by the first to validate the values you're about to use for height and width. The final function is one that can be called from the page load event to set an initial default sizing if that's all you need to do.
    Function resizeDefaultMessageDialog():
      /**
       * Function that actually resets the default message dialog sizing.
       * Note that the width and height supplied define the content area,
       * so the actual physical dialog size will be larger to account for
       * the chrome containing the header / footer etc.
       * @param docId Faces component id of the document
       * @param contentWidth - new content width you need
       * @param contentHeight - new content height
       */
      function resizeDefaultMessageDialog(docId, contentWidth, contentHeight) {
          // Warning: this value may change from release to release
          var defMDName = "::msgDlg";
          // Find the default messages dialog
          msgDialogComponent = AdfPage.PAGE.findComponentByAbsoluteId(docId + defMDName);
          // In your version add a check here to ensure we've found the right object!
          // Check the new width is supplied and is a positive number, if so apply it.
          if (dimensionIsValid(contentWidth)){
              msgDialogComponent.setContentWidth(contentWidth);
          }
          // Check the new height is supplied and is a positive number, if so apply it.
          if (dimensionIsValid(contentHeight)){
              msgDialogComponent.setContentHeight(contentHeight);
          }
      }
    Function dimensionIsValid():
      /**
       * Simple function to check that sensible numeric values are
       * being proposed for a dimension
       * @param sampleDimension
       * @return boolean
       */
      function dimensionIsValid(sampleDimension){
          return (!isNaN(sampleDimension) && sampleDimension > 0);
      }
    Function initializeDefaultMessageDialogSize():
      /**
       * This function will re-define the default sizing applied by the framework
       * in 11.1.2.n versions.
       * It is designed to be called with the document onLoad event.
       */
      function initializeDefaultMessageDialogSize(loadEvent){
          // get the configuration information
          var documentId = loadEvent.getSource().getProperty('documentId');
          var newWidth = loadEvent.getSource().getProperty('defaultMessageDialogContentWidth');
          var newHeight = loadEvent.getSource().getProperty('defaultMessageDialogContentHeight');
          resizeDefaultMessageDialog(documentId, newWidth, newHeight);
      }
    Wiring in the Functions: As usual, the first thing we need to do when using JavaScript with ADF is to define an af:resource in the document metaContainer facet:
      <af:document>
        ....
        <f:facet name="metaContainer">
          <af:resource type="javascript" source="/resources/js/hackMessagedDialog.js"/>
        </f:facet>
      </af:document>
    This makes the script functions available to call. Next, if you want to use the option of defining an initial default size for the dialog, you use a combination of <af:clientListener> and <af:clientAttribute> tags like this:
      <af:document title="MyApp" id="doc1">
        <af:clientListener method="initializeDefaultMessageDialogSize" type="load"/>
        <af:clientAttribute name="documentId" value="doc1"/>
        <af:clientAttribute name="defaultMessageDialogContentWidth" value="400"/>
        <af:clientAttribute name="defaultMessageDialogContentHeight" value="150"/>
        ...
    Just in Time Dialog Sizing: So what happens if you have a variety of messages that you might add, and in some cases you need a small dialog and in other cases a large one? Well, in that case you can re-size these dialogs just before you submit the message. Here's some example Java code:
      FacesContext ctx = FacesContext.getCurrentInstance();
      // reset the default dialog size for this message
      ExtendedRenderKitService service =
          Service.getRenderKitService(ctx, ExtendedRenderKitService.class);
      service.addScript(ctx, "resizeDefaultMessageDialog('doc1',100,50);");
      FacesMessage msg = new FacesMessage("Short message");
      msg.setSeverity(FacesMessage.SEVERITY_ERROR);
      ctx.addMessage(null, msg);
    So there you have it. This technique should, at least, allow you to control the dialog sizing just enough to stop really objectionable whitespace or scrollbars.
    1 Don't worry if you don't get the reference, let's just say my kids watch too many adverts.

    Read the article

  • WebFX: Running JavaFX as web page

    - by Bruno.Borges
    This weekend I wanted to learn JavaFX, so I decided to code an idea I had a few years ago when I first saw JavaFX Script. So I started coding a web browser that runs HTML with the awesome, HTML5-capable WebView. But this browser also offers one extra feature: it loads FXML files as if they were HTML. So instead of defining your web page with HTML and running it with WebKit, you can define a web page with FXML+CSS+JS and run it as a JavaFX application. The project is called WebFX and already has a prototype on GitHub. I also uploaded a video on YouTube demonstrating the idea. What do you think about using JavaFX in the future for web pages, instead of HTML?
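    WebFX's own sources are on GitHub; purely as an illustration of the underlying idea (my sketch, with a placeholder URL, not the project's actual code), loading a remote FXML "page" into a window boils down to a single FXMLLoader call:
      import java.net.URL;
      import javafx.application.Application;
      import javafx.fxml.FXMLLoader;
      import javafx.scene.Parent;
      import javafx.scene.Scene;
      import javafx.stage.Stage;

      // Minimal sketch of the core mechanism: fetch an FXML document from a URL
      // and render it as the scene graph of a window, the way a browser renders HTML.
      public class FxmlPageViewer extends Application {

          @Override
          public void start(Stage stage) throws Exception {
              // Placeholder URL; a browser like WebFX would take this from its address bar.
              URL page = new URL("http://example.com/index.fxml");
              Parent root = FXMLLoader.load(page);
              stage.setScene(new Scene(root, 800, 600));
              stage.setTitle("FXML as a web page");
              stage.show();
          }

          public static void main(String[] args) {
              launch(args);
          }
      }
    Everything else in a project like this is browser chrome around that call: an address bar, history, and deciding whether the fetched document is HTML (hand it to WebView) or FXML (hand it to FXMLLoader).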

    Read the article

  • JDK8 New Build Infrastructure

    - by kto
    I unintentionally posted this before I verified everything, so once I have verified that it all works, I'll update this post. But this is what should work... Most Interesting Builder in the World: "I don't always build the jdk, but when I do, I prefer The New JDK8 Build Infrastructure. Stay built, my friends." So the new Build Infrastructure changes have been integrated into the jdk8/build forest alongside the older Makefiles (newer ones in makefiles/ and older ones in make/). The default is still the older makefiles. Instructions can be found in the Build-Infra Project User Guide. The Build-Infra project's goal is to create the fastest build possible and correct many of the build issues we have been carrying around for years. I cannot take credit for much of this work, and wish to recognize the people who do so much work on this (and will probably still do more); see the New Build Infrastructure Changeset for a list of these talented and hard-working JDK engineers. A big "THANK YOU" from me. Of course, every OS and system is different, and the focus has been on Linux X64 to start, Ubuntu 11.10 X64 in particular. So there is at least a base set of system packages you need. On Ubuntu 11.10 X64, you should run the following after getting into a root permissions situation (e.g. having run "sudo bash"):
      apt-get install aptitude
      aptitude update
      aptitude install mercurial openjdk-7-jdk rpm ssh expect tcsh csh ksh gawk g++ build-essential lesstif2-dev
    Then get the jdk8/build sources:
      hg clone http://hg.openjdk.java.net/jdk8/build jdk8-build
      cd jdk8-build
      sh ./get_source.sh
    Then do your build:
      cd common/makefiles
      bash ../autoconf/configure
      make
    We still have lots to do, but this is a tremendous start. -kto

    Read the article

  • BeanInfo Editor in NetBeans Rocks

    - by Geertjan
    Impressed by a cool feature I didn't know about. If you have some JavaBean, like my Event class below, you can right-click it and choose "BeanInfo Editor": Now, as you can see above, I don't have a BeanInfo class. So I am now asked whether the IDE should create one for me. So I say OK and then I have a new BeanInfo class, generated from my Event class, as well as a multiview editor for visually editing the BeanInfo class: Thanks Eric and Nicklas from Artificial Solutions in Stockholm for pointing this out to me today. It comes in very handy in NetBeans Platform applications when you're working with a BeanNode and want to customize the display of your properties.
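    To tie this back to the BeanNode remark above: here is a hedged sketch (the Event bean and its single property are placeholders of mine) of wrapping such a bean in a BeanNode, which is where the generated BeanInfo drives what the property sheet displays:
      import java.beans.IntrospectionException;
      import org.openide.nodes.BeanNode;

      // Placeholder JavaBean; the IDE-generated EventBeanInfo (not shown) is what
      // customizes how its properties appear in the property sheet.
      public class Event {

          private String name;

          public String getName() {
              return name;
          }

          public void setName(String name) {
              this.name = name;
          }

          // Wrapping the bean in a BeanNode exposes those properties to Node-based views.
          public static BeanNode<Event> createNode() throws IntrospectionException {
              return new BeanNode<Event>(new Event());
          }
      }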

    Read the article

  • BI Publisher : Formatting Issues

    - by Manoj Madhusoodanan
    While creating BI Publisher reports, formatting issues are quite common. Here I am discussing some common issues related to BIP report development. 1) The first issue is related to column formatting. When you want to display data which has leading zeros, or trailing zeros after the '.', in EXCEL output, you will not get the desired output. But in PDF it will come out as you expect. This is not an issue with your data. It is due to the unique nature of the EXCEL cell format. When you try to put text data in a cell without making any change to the cell format, it will treat it as a number and truncate all leading zeros and all trailing zeros after the '.'. So what you have to do is convert that data into a format which EXCEL can treat as text. Eg: If you want to display 0020100, convert this data into ="0020100". Same way for 23789.02300 to ="23789.02300". Note: This is applicable to EXCEL output only. If you have multiple output types, apply it only for EXCEL. 2) The second issue is related to report size in the PDF output type. If the number of columns is large and you want to show most of the columns in one row, and it is a PDF output, you can choose the paper size as Legal (8.5 x 14''). You will get more space in the template to accommodate more columns. 3) If your XML data contains special characters like &, <, > etc., pass the data to the DBMS_XMLGEN.CONVERT function. It will replace special characters with corresponding XML notations. Eg: (a>b) & (c!=d) to (a&gt;b) &amp; (c!=d)

    Read the article
