Search Results

Search found 15212 results on 609 pages for 'andy oracle'.

  • How to Open Any Folder as a Project in the NetBeans Platform

    - by Geertjan
    Typically, as described in the NetBeans Project Type Tutorial, you define a project type based on the presence of a file (e.g., "project.xml" or "customer.txt") in a folder: if the file is there, then the folder that contains it is a project and should be opened in your application. In some scenarios, however (as with the HTML5 project type introduced in NetBeans IDE 7.3), the user should be able to open absolutely any folder at all into the application. How do you create a project type that is that liberal? Here you go. The only condition that needs to be true is that the selected item in the "Open Project" dialog is a folder, as defined in the "isProject" method below. Nothing else. That's it. If you select a folder, it will be opened in your application, displaying absolutely everything as-is (since no ProjectLogicalView is defined below):

    import java.beans.PropertyChangeListener;
    import java.io.IOException;
    import javax.swing.Icon;
    import org.netbeans.api.project.Project;
    import org.netbeans.api.project.ProjectInformation;
    import org.netbeans.spi.project.ProjectFactory;
    import org.netbeans.spi.project.ProjectState;
    import org.openide.filesystems.FileObject;
    import org.openide.loaders.DataFolder;
    import org.openide.loaders.DataObjectNotFoundException;
    import org.openide.nodes.FilterNode;
    import org.openide.util.Exceptions;
    import org.openide.util.ImageUtilities;
    import org.openide.util.Lookup;
    import org.openide.util.lookup.Lookups;
    import org.openide.util.lookup.ServiceProvider;

    @ServiceProvider(service = ProjectFactory.class)
    public class FolderProjectFactory implements ProjectFactory {

        @Override
        public boolean isProject(FileObject projectDirectory) {
            return DataFolder.findFolder(projectDirectory) != null;
        }

        @Override
        public Project loadProject(FileObject dir, ProjectState state) throws IOException {
            return isProject(dir) ? new FolderProject(dir) : null;
        }

        @Override
        public void saveProject(Project prjct) throws IOException, ClassCastException {
            // leave unimplemented for the moment
        }

        private class FolderProject implements Project {

            private final FileObject projectDir;
            private Lookup lkp;

            private FolderProject(FileObject dir) {
                this.projectDir = dir;
            }

            @Override
            public FileObject getProjectDirectory() {
                return projectDir;
            }

            @Override
            public Lookup getLookup() {
                if (lkp == null) {
                    lkp = Lookups.fixed(new Object[]{
                        new Info(),
                    });
                }
                return lkp;
            }

            private final class Info implements ProjectInformation {

                @Override
                public Icon getIcon() {
                    Icon icon = null;
                    try {
                        icon = ImageUtilities.image2Icon(
                                new FilterNode(DataFolder.find(
                                getProjectDirectory()).getNodeDelegate()).getIcon(1));
                    } catch (DataObjectNotFoundException ex) {
                        Exceptions.printStackTrace(ex);
                    }
                    return icon;
                }

                @Override
                public String getName() {
                    return getProjectDirectory().getName();
                }

                @Override
                public String getDisplayName() {
                    return getName();
                }

                @Override
                public void addPropertyChangeListener(PropertyChangeListener pcl) {
                    // do nothing, won't change
                }

                @Override
                public void removePropertyChangeListener(PropertyChangeListener pcl) {
                    // do nothing, won't change
                }

                @Override
                public Project getProject() {
                    return FolderProject.this;
                }
            }
        }
    }

    Even the ProjectInformation implementation isn't really needed at all, since it provides nothing more than the icon in the "Open Project" dialog; the rest (i.e., the display name in the "Open Project" dialog) is provided by default regardless of whether you have a ProjectInformation implementation or not.

    Read the article

  • The Middle of Every Project

    - by andrew.sparks
    I read a quote somewhere, “The middle of every successful project looks like a mess,” or something to that effect. I suppose the projects where the beginning, middle, and end are a mess are the ones you need to watch out for. Right now we are in ramp-up of the maintenance/support teams at a big project in the Nordics. We are facing a year of mixed-mode operations, where we have production operations and the phased rollout to new locations in parallel. The support team supports, and the deployment team deploys. As usual, the assumption right up to about a month or so before the initial go-live was that the deployment team would carry the support. Not! Consequently we had a last-minute scramble over Christmas/New Year to fire up a support/maintenance team. While it is a bit messy and not perfect, the quality of the mess (I mean scramble) is not so bad: weekly operational reviews with the operational delivery managers, written issue lists and assigned actions, candid discussions getting the problems on the table and documented, issues getting solved and moved off the table. So while the middle of a project might look like a mess (even the start), it is methodical use of project management tools, checklists, and scheduled communication points that is helping us navigate out of the mess and bring it all under control.

    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window.

    In the application directory "Paint Application", create a file named "settings.gradle", with this content:

    include 'ColorChooser', 'Paint'

    Create another file in the same location, named "build.gradle", with this content:

    subprojects {
        apply plugin: "announce"
        apply plugin: "java"
        sourceSets {
            main {
                java {
                    srcDir 'src'
                }
                resources {
                    srcDir 'src'
                }
            }
        }
    }

    In the module directory "Paint", create a file named "build.gradle", with this content:

    dependencies {
        compile fileTree("$rootDir/build/public-package-jars").matching {
            include '**/*.jar'
        }
    }

    task show << {
        configurations.compile.each { dep ->
            println "$dep ${dep.isFile()}"
        }
    }

    Note: The above is a temporary solution. As you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks:

    Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform:

    netbeansHome = '/home/geertjan/netbeans-dev-201111110600'

    dependencies {
        compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar")
        def projectXml = new XmlParser().parse("nbproject/project.xml")
        projectXml.configuration.data."module-dependencies".dependency."code-name-base".each {
            if (it.text().equals('org.openide.filesystems')) {
                def dep = "$netbeansHome/platform/core/" + it.text().replace('.', '-') + '.jar'
                compile files(dep)
            } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) {
                def dep = "$netbeansHome/platform/lib/" + it.text().replace('.', '-') + '.jar'
                compile files(dep)
            } else {
                def dep = "$netbeansHome/platform/modules/" + it.text().replace('.', '-') + '.jar'
                compile files(dep)
            }
        }
    }

    task show << {
        configurations.compile.each { dep ->
            println "$dep ${dep.isFile()}"
        }
    }

    However, when you run 'gradle classes' with the above, you get an error like this:

    geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes
    :Paint:compileJava
    [ant:javac] Note: Attempting to workaround javac bug #6512707
    [ant:javac]
    [ant:javac]
    [ant:javac] An annotation processor threw an uncaught exception.
    [ant:javac] Consult the following stack trace for details.
    [ant:javac] java.lang.NullPointerException
    [ant:javac] at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058)

    No idea why the above happens; still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today via Skype!

    Read the article

  • Efficient inline templates and C++

    - by Darryl Gove
    I've talked before about calling inline templates from C++, and I've also talked about calling inline templates efficiently. This time I want to talk about efficiently calling inline templates from C++. The obvious starting point is that I need to declare the inline templates as being extern "C":

    extern "C"
    {
      int mytemplate(int);
    }

    This enables us to call it, but the call may not be very efficient, because the compiler will treat it as a function call and may produce suboptimal code based on that premise. So we need to add the no_side_effect pragma:

    extern "C"
    {
      int mytemplate(int);
    #pragma no_side_effect(mytemplate)
    }

    However, this may still not produce optimal code. We've discussed how the no_side_effect pragma cannot be combined with exceptions; we know that the code cannot produce exceptions, but the compiler doesn't know that. If we tell the compiler that information, it may be able to produce even better code. We can do this by adding the "throw()" keyword to the template declaration:

    extern "C"
    {
      int mytemplate(int) throw();
    #pragma no_side_effect(mytemplate)
    }

    The following is an example of how these changes might improve performance. We can take our previous example code and migrate it to C++, adding the use of a try...catch construct:

    #include <iostream>

    extern "C"
    {
      int lzd(int);
    #pragma no_side_effect(lzd)
    }

    int a;
    int c = 0;

    class myclass
    {
      int routine();
    };

    int myclass::routine()
    {
      try
      {
        for (a = 0; a < 1000; a++)
        {
          c = lzd(c);
        }
      }
      catch (...)
      {
        std::cout << "Something happened" << std::endl;
      }
      return 0;
    }

    Compiling this produces a slightly suboptimal code sequence in the hot loop:

    $ CC -O -xtarget=T4 -S t.cpp t.il
    ...
    /* 0x0014  23 */   lzd     %o0,%o0
    /* 0x0018  21 */   add     %l6,1,%l6
    /* 0x001c     */   cmp     %l6,1000
    /* 0x0020     */   bl,pt   %icc,.L77000033
    /* 0x0024  23 */   st      %o0,[%l7]

    There's a store in the delay slot of the branch, so we're repeatedly storing data back to memory. If we change the function declaration to include "throw()", we get better code:

    $ CC -O -xtarget=T4 -S t.cpp t.il
    ...
    /* 0x0014  21 */   add     %i1,1,%i1
    /* 0x0018  23 */   lzd     %o0,%o0
    /* 0x001c  21 */   cmp     %i1,999
    /* 0x0020     */   ble,pt  %icc,.L77000019
    /* 0x0024     */   nop

    The store has gone, but the code is still suboptimal: there's a nop in the delay slot rather than useful work. However, it's good enough for this example. The point I'm making is that the compiler produces the better code with both the "throw()" keyword and the no_side_effect pragma.

    Read the article

  • OT: Fixing choppy video playback on OS X

    - by terrencebarr
    This is a bit off-topic, but I wanted to share because it seems a lot of people are running into issues with choppy video playback and stutter on Mac OS X. I am using a Mac Mini with Snow Leopard (10.6.8) as a home media center and it has worked great in the past, playing back music and videos from multiple sources (web, QuickTime, VLC, EyeTV). A few weeks ago the video playback from all my sources started to become choppy, to stutter, and often the picture would hang for seconds at a time. Totally unusable. Drove me nuts for two weeks. After much research and trial-and-error, it turns out the problem was an outdated Flash Player, which seems to have messed up the video pipeline for the entire system. The short of it is, I updated the Flash Player to version 11 directly from the Adobe web site, rebooted the Mac Mini, and all is well again! Judging from the various posts across the web, video playback appears to be a fairly widespread problem for Mac users, and I hope this helps some of you out there! And I can’t wait to get rid of Flash altogether – I can’t count the times it has crashed my browser, hung my system, and screwed things up. Thanks Adobe ;-( Cheers, – Terrence

    Read the article

  • Think Global, Act Regional with Identity Globe Trotters

    - by Tanu Sood
    This month we are introducing a new section on our blog. Titled “Identity Globe Trotters”, this will be a monthly series featuring a regional topic on the last Friday of every month. We will invite guest contributors from different regions to highlight a region-specific business issue or solution, a customer implementation, or a regional discussion of interest. If you have an Identity Management topic in mind that you’d like featured in this section, do let us know. We look forward to engaging in meaningful discussions with you on global perspectives and regional solutions.

    Read the article

  • Hot Off the Presses! Get Your Release of the October Procurement Newsletter!

    - by LuciaC
    Get all the recent news and featured topics for the Procurement modules, including Purchasing, iProcurement, Sourcing, and iSupplier. Find out what Procurement experts are recommending to prevent and resolve issues. Important links are also included. The October newsletter features articles on:

    - The new Procurement Enhancement Request Community
    - Procurement Community Development Corner
    - Updated version of the PO Approvals Analyzer
    - Uploading Files

    And there is much more… Access the newsletter now: Doc ID 111111.1

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that TomEE support has recently been added to the NetBeans IDE development builds. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:

    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    - let the IDE create a new data source for TomEE and deploy it to the server
    - let the IDE figure out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as the JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small
    - use "deploy on save" with TomEE, so that your development cycle is very fast

    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up the TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE: And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE: When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running, and you have a really powerful combination of dev tools, all for free.
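    As a rough illustration of the kind of JAX-RS class those wizards produce (this is a hand-written sketch, not actual wizard output; the class name, path, and placeholder body are made up), note that TomEE can run it without any JAX-RS implementation bundled in the WAR:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Minimal Java EE 6 JAX-RS resource; the generated classes are backed by
    // JPA entities and an EntityManager, which this sketch omits.
    @Path("customers")
    public class CustomerResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String findAll() {
            // Placeholder body; real generated code would query the persistence unit here
            return "[]";
        }
    }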

    Read the article

  • CON6714 - Mixed-Language Development: Leveraging Native Code from Java

    - by Darryl Gove
    Here's the abstract from my JavaOne talk: There are some situations in which it is necessary to call native code (C/C++ compiled code) from Java applications. This session describes how to do this efficiently and how to performance-tune the resulting applications. The objectives for the session are:

    - Explain reasons for using native code in Java applications
    - Describe pitfalls of calling native code from Java
    - Discuss performance tuning of Java apps that use native code

    I'll cover how to call native code from Java and how to debug native code, and then I'll dig into performance-tuning the code. The talk does not go too deep on performance tuning, focusing instead on the JNI-specific topics; I'll do a bit more about performance tuning in my OpenWorld talk later in the day.
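    As a taste of the mechanics the session covers, here is a minimal JNI sketch; the class name, library name, and native method are hypothetical, and the C/C++ side (built against the header generated by javac -h, or javah on older JDKs) is omitted:

    // Declare a native method, load the library that implements it,
    // and then call it like any other Java method.
    public class NativeLzd {

        static {
            // Resolves to libnativelzd.so (or nativelzd.dll) on java.library.path;
            // throws UnsatisfiedLinkError if the native library is missing.
            System.loadLibrary("nativelzd");
        }

        // Implemented in C/C++ against the generated JNI header
        public static native int lzd(int value);

        public static void main(String[] args) {
            System.out.println("lzd(256) = " + lzd(256));
        }
    }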

    Read the article

  • Changes to File Store Provider in UCM PS3

    - by Kevin Smith
    In the recent PS3 release of UCM (11.1.1.4.0) there are some significant changes to the File Store Provider (FSP) configuration. For new PS3 installs (not upgrades from PS2), the FSP default storage rule includes a dispersion rule that changes the web-layout and vault paths by adding dispersion directories to the paths, to limit the number of files in the vault and web-layout directories. What that means is that if you install a new PS3 UCM instance and migrate content in from a previous version of UCM, the web URLs will change. That is a critical problem for web sites and for general document management. See below for some details on the FSP configuration in PS3 and how you can change the default behavior. Use the link below to read the rest of this post, where I describe the issue in detail and provide instructions for how to modify a PS3 instance to use the old format for the web-layout path.

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD, and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g., iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g., DEBMAS06, directly to the pre-created OSB project. Note that the validation for schema has been turned off. As a result, this allows the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2

    Create two simple XML-based Business Services to write to a file, e.g., SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

    fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element; for a DEBMAS06 payload, for example, the expression evaluates to the string "DEBMAS06". With the routing function in place, build the routing table to add 2 branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.
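    To illustrate the routing idea outside OSB, the sketch below is plain Java (the routing table itself uses the XQuery expression above): taking the local name of the root element yields the IDOC type, which the routing table branches can then match against. The sample payloads are made up for the demonstration.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class IdocRoutingDemo {
        public static void main(String[] args) throws Exception {
            // Two sample payloads, each with the IDOC type as the top element
            for (String xml : new String[]{"<DEBMAS06><IDOC/></DEBMAS06>",
                                           "<MATMAS05><IDOC/></MATMAS05>"}) {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
                // Same idea as the OSB routing expression: local name of the root element
                String idocType = XPathFactory.newInstance().newXPath()
                        .evaluate("local-name(/*)", doc);
                System.out.println("Route on: " + idocType); // prints DEBMAS06, then MATMAS05
            }
        }
    }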

    Read the article

  • Learn more about SPARC by listening to our newly recorded podcasts

    - by Cinzia Mascanzoni
    Please listen to our newly recorded series of four podcasts focused on SPARC. The topics are: How SPARC T4 Servers Open New Opportunities SPARC Roadmap and SPARC T4 Architecture Highlights SPARC T4 For Installed Base Refresh and Consolidation SPARC T4 – How Does it Stack up Against the Competition? Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant, are your hosts. The intent is to continue to help you understand how to position and sell SPARC/T4 into your customer architecture.Details on how to access these podcasts can be found here.

    Read the article

  • A Virtual Dilemma

    - by antony.reynolds
    Solving a Gotcha with VirtualBox Guest Additions

    I was just building a new virtual machine based off an existing image that didn’t have the VirtualBox Guest Additions enabled.  The Guest Additions allow tight integration between the guest OS and the host environment, providing seamless mouse transfer and the ability to take advantage of the full video screen size.  The Guest Additions need to be linked with the kernel, which requires the kernel-devel package to be installed.  After installing this package and then trying to add the Guest Additions, the build failed, suggesting that I might not have the kernel-devel package that I had just installed.  After a little thought I finally realized what had happened.  When I grabbed the kernel-devel package I hadn’t checked the version of my kernel.  The kernel-devel I downloaded didn’t match the revision of the kernel I was running!  Hence my problems.  I upgraded the kernel to the same revision as my kernel-devel package and rebooted.  I had installed dkms, so I was pleased to see that my VBox Additions successfully built and the mouse and screen now worked as expected.  So now you know my embarrassing story for the day :-)

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes, made after Solaris 11 Express was released, brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality:

    /usr/bin/netcat     alternative program name (symlink)
    -b bufsize          I/O buffer size
    -E                  use exclusive bind for the listening socket
    -e program          program to execute
    -F                  no network close upon EOF on stdin
    -i timeout          extension of timeout specification
    -L timeout          linger on close timeout
    -l -p port addr     previously not allowed usage
    -m byte_count       quit after receiving byte_count bytes
    -N file             pattern for UDP scanning
    -I bufsize          size of input socket buffer
    -O bufsize          size of output socket buffer
    -R redir_spec       port redirection; addr/port[/{tcp,udp}] syntax of redir_spec
    -Z                  bypass zone boundaries
    -q timeout          timeout after EOF on stdin

    Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self-explanatory, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example:

    - the port redirector allows you to convert a TCP stream to UDP datagrams
    - the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily
    - the socket linger option can be used to produce TCP RST segments by setting the timeout to 0
    - the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script

    If you find some other helpful uses, please share them via comments. The manual page nc(1) contains more details, along with examples of how to use some of these new options.

    Read the article

  • Crawling a Content Folio

    - by Kyle Hatlestad
    Content Folios in WebCenter Content allow you to assemble, track, and access a logical group of documents and/or links.  They let you manage the items as just a list (simple folio) or organized as a hierarchy (advanced folio).  The built-in UI in the content server allows you to work with these folios, but publishing them or consuming them externally can be a bit of a challenge.   The folios themselves are actually XML files that contain the structure, attributes, and pointers to the content items.  So to publish this somewhere, such as a Site Studio page, you could perhaps use an XML parser to traverse the structure and create your output.  But XML parsers are not always the easiest or most efficient to use.  In order to more easily crawl and consume a Content Folio, Ed Bryant, Principal Sales Consultant, wrote a component to do just that.  His component adds a service which does all the work for you and returns the folio structure as a simple resultset, so consuming and publishing that folio on a Site Studio page or in your portal using RIDC is a breeze!  For example, let's take an advanced Content Folio example like this: If we look at the native file, the XML looks like this: But if we access the folio using the new service - http://server/cs/idcplg?IdcService=FOLIO_CRAWL&dDocName=ecm008003&IsPageDebug=1 - this is what the result set looks like (using the IsPageDebug parameter). Given this as the result set, it is very easy to consume and repurpose that folio. You can download a copy of the sample component here. Special thanks to Ed for letting me share this component!
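    For instance, here is a rough RIDC sketch for calling that service from Java, assuming the sample component is installed on the server; the connection URL, credentials, and the simple dump of the response are placeholders rather than anything prescribed by the component:

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    public class FolioCrawlClient {
        public static void main(String[] args) throws Exception {
            IdcClientManager manager = new IdcClientManager();
            // Placeholder connection string and credentials
            IdcClient client = manager.createClient("idc://contentserver:4444");
            IdcContext ctx = new IdcContext("weblogic", "password");

            DataBinder binder = client.createBinder();
            binder.putLocal("IdcService", "FOLIO_CRAWL"); // service added by the sample component
            binder.putLocal("dDocName", "ecm008003");     // the folio from the example above

            ServiceResponse response = client.sendRequest(ctx, binder);
            // Dump the whole response; the crawled folio comes back as a result set
            System.out.println(response.getResponseAsString());
        }
    }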

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include:

    - Cut for Non-payment (CNP)
    - Disconnect Warning (DIWA)
    - Reconnect for Payment (REPY)
    - Reread (RERD)
    - Stop Service (STOP)
    - Start Service (STRT)
    - Start/Stop (STSP)

    Note the Field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?

    Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question of what the limitations are in having user-defined customer events.

    System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.

    The post-cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):

    This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.

    Notice the underlined text. This algorithm implicitly checks for Field Activities in completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude, if you introduce new customer events, you should be aware that they do not extend or simulate the base customer events included in the product, as those are further used to provide and validate additional business functions.

    Read the article

  • org.openide.awt.ColorComboBox

    - by Geertjan
    It's the time of year when a lot of NetBeans Platform tutorials are being reviewed, revised, and rewritten. Today I'm looking at the NetBeans Platform Paint Application Tutorial. Suddenly I remembered seeing something in a recent API Changes document about a new class, ColorComboBox. That means I can make the tutorial a lot simpler, since Tim Boudreau's external ColorChooser.jar is now superfluous. Here's what the ColorComboBox looks like: It works perfectly. Of course, the nice thing about using that JAR was that it showed the user how to incorporate external JARs, but I'll make sure to note that in the tutorial, along the lines of "If you don't like the NetBeans Platform color combo box and would like to replace it with your own, such as Tim's ColorChooser.jar or a JavaFX color chooser, take the following steps." In short, if you're using NetBeans APIs, write this on your ceiling above your bed: http://bits.netbeans.org/dev/javadoc/apichanges.html. Check that page regularly (mark it in your calendar to do first thing every Monday morning) and you'll be aware of the latest changes as they happen.
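    A minimal sketch of using the new class, assuming the getSelectedColor/setSelectedColor accessors described in the API changes document; in the Paint Application itself the combo box would sit in a TopComponent's toolbar rather than a bare JFrame:

    import java.awt.BorderLayout;
    import java.awt.Color;
    import javax.swing.JFrame;
    import org.openide.awt.ColorComboBox;

    public class ColorComboDemo {
        public static void main(String[] args) {
            ColorComboBox combo = new ColorComboBox();
            combo.setSelectedColor(Color.BLUE);
            // ColorComboBox extends JComboBox, so the usual listeners apply
            combo.addActionListener(e ->
                    System.out.println("Selected color: " + combo.getSelectedColor()));

            JFrame frame = new JFrame("ColorComboBox demo");
            frame.add(combo, BorderLayout.NORTH);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }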

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimum validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing is developing plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big picture impact of the changes. IBP also addresses one the CFO's big concerns—the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen - not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain-Executive Magazine.

    Read the article

  • Murali Papana Blogs About Date Effectivity

    - by steve.muench
    Murali Papana from our Human Capital Management (HCM) Fusion Applications team has posted a series of blogs on a lesser-known but quite powerful feature of ADF called "date effectivity". This is a feature that allows the framework to simplify managing records whose data values are effective for a given period of time. Imagine an employee's job title or salary that changes over time, which might well be entered today by an HR representative but go into effect at some time in the future. Check out these articles if you're curious to learn more:

    - Learning basics of Date Effectivity in ADF
    - ADF Model: Creating Date Effective EO
    - ADF Model: Creating Date Effective Association and Date Effective VO
    - ADF UI - Implementing Date Effective Search with Example

    Read the article

  • JavaOne 2012 session slides: "Dev Berkeley DB & DB Mobile Server for Java Embedded Tech"

    - by hinkmond
    The latest JavaOne 2012 slides are available on the Web. Here's the presentation that Eric Jensen and I did on "Developing Berkeley DB & DB Mobile Server for Java Embedded Technology". Enjoy! See: Click here for the slides in a new window It was fun to present this talk at JavaOne 2012 with Eric. We had some good questions from the audience. Let me know in the Comments if you have any further questions. I'll pass all the good questions to Eric and keep the bad questions for myself. Hinkmond

    Read the article

  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled their wallet out for several chains; I see the NFC readers in Walgreen's when I'm sent there for milk.  But Isis has been relatively quiet until now.  As of last week they have finally launched in their two test cities: Austin and Salt Lake City.  Below are the supported carriers and phones as of now, but more phones will be added later.

    AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™
    T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G
    Verizon supports: Droid Incredible 4G LTE

    Of course iPhone owners have no wallet, since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis.  The "secure element" is the cryptographic logic that secures mobile payments.  Carriers like the secure element in the SIM, while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone.  Anybody know?) Then you can download the Isis app from Google Play and load your cards.  Most credit cards are supported, and there's a process to verify that the credit cards are valid.  Then you can select from the list of participating retailers to "follow."  Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use.  You can select a default payment type and also switch between them easily.  When the phone is tapped on the reader, there are two exchanges of information: the payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons.  Of course the value of mobile wallets comes from the ease of handling all three data types (i.e., payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article
