
  • New JavaScript Editor

    - by Petr
    I have not written a blog post here for a few weeks; I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written essentially from scratch. The answer to the question "Why start over instead of just improving the old one?" is not simple, and the decision has several aspects. One of the main reasons is that the old support was written four years ago and its architecture is limited. Over time the APIs also changed, which made it very hard to keep the editor up to date, and there is a license issue as well. In short, it was time to rewrite the old JS editor.
    We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support, and you can download the result of the builds here. It is a zip file that you can unzip anywhere you like. I recommend running the build with a new userdir to avoid damaging your current userdir. It shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch, so start the unzipped build from the command line, from the folder where you unzipped it. On Unix: bin/netbeans.sh --userdir /path/to/new/userdir, and on Windows: bin\netbeans.exe --userdir D:\path\to\new\userdir. Developers who already use the continual PHP build will know this well. There is also a full IDE build with the new JavaScript support for people who need more than just PHP support.
    Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. There will be once we merge the branch to the trunk, but until then we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from the trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2.
    I am asking whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us about every issue you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input and comments are very important to us and will help us deliver the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them there. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue opens, with the Editor component and the NO72 keyword pre-filled.
    I will write about the individual features later, but for now I will mention a few features that should work better than in the old support: syntactic and semantic colouring, the Navigator, Mark Occurrences and Go To Declaration, and Code Completion. Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items found through a source model; almost all editor features are based on this model, which is built from the source code. There is still a lot of work to do on the model, but it should already offer better results. When the popup window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project - in the pictures, all elements that start with the letter 't'. There is also a formatter with many options, and more :). A few features that were supported in the old JavaScript support are not implemented yet (for example jQuery support), but we are adding these features as soon as possible.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - detailed below are both the problem and the solution, in case you ever run into this yourself.
    The Problem
    Imagine a stored procedure coded as follows in MySQL:
        StuTest(in DStr varchar(80))
        BEGIN
          Declare GRNID int;
          Select grn_id into GRNID from grn_header where distribution_number = DStr;
          Select GRNID;
        END
    This is quite a simple stored procedure, but it illustrates the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then, in the Statement Information window, typing the stored procedure name: StuTest. Pressing Generate then gives the following error: "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0". If you attempt to supply a value for the parameter, you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.
    The Solution
    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two-way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script, detailing the following script:
        Call StuTest(@InputParameter)
    By default this creates a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this is not going to be correct for the stored procedure being called. To change the type from the default, you need to select the "Override default query processing" check box when specifying the script in the wizard. This opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test out the query script. Once I had done this I was able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two-way send port. When supplied with an appropriate message instance (for the generated request schema) this send port successfully returned the expected response.
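
    As a quick sanity check outside BizTalk, the same parameterised call can be exercised with plain JDBC, which makes the point that the parameter has to be bound at call time rather than baked into the schema. This is only a minimal sketch I am adding for illustration; the connection URL, credentials and the '123' value are placeholders and not part of the original setup:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class StuTestCall {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details - adjust host, schema, user and password.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "user", "password");
                     // Bind the single input parameter instead of hard-coding it.
                     CallableStatement call = conn.prepareCall("{call StuTest(?)}")) {
                    call.setString(1, "123");
                    try (ResultSet rs = call.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("GRNID: " + rs.getInt(1));
                        }
                    }
                }
            }
        }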

    Read the article

  • The Social Business Thought Leaders - Esteban Kolsky

    - by kellsey.ruppel
    Esteban Kolsky's presentation at the Social Business Forum 2012 was meaningfully titled “Everything you wanted to know about Customer Service using Social but had no one to ask”. A recent survey by ThinkJar, Kolsky’s independent analyst firm, reported that more than 90% of the interviewed companies consider embracing social channels in customer service the right thing to do for the business and its customers. These numbers shouldn't be too surprising given the popularity of services such as Twitter and Facebook among organizations (59% and 60% respectively in the survey), the power consumers are gaining online, and the 40% preference consumers show for escalating issues on social services. Moreover, both large enterprises and small businesses are realizing that customer retention is cheaper and easier than customer acquisition, and many companies are looking at communities and social networks as an opportunity to drive loyalty, satisfaction and word of mouth. However, in this early phase the way they are preparing to launch social support appears to be lacking at best:
    - 66% have no defined processes for customer service over social channels
    - 68% were not able to estimate ROI before deploying social in customer service
    - Only 8% found the expected ROI
    - Most of the projects are stuck in the pilot or testing phase
    In his interview for the Social Business Thought-Leaders, Esteban discusses how to turn social media hype into business gains by touching upon some of the hottest topics organizations face when approaching social support:
    - How to go from social media monitoring to actionable insights
    - How Social CRM should best be positioned in regard to traditional CRM
    - The importance of integrating social data with transactional data
    Conversations with customer service organizations point to 2012 as the year of "understanding what social means for supporting customers". Will 2013 be the year it all becomes reality? We invite you to listen to Esteban Kolsky's interview to understand how to most effectively develop cross-channel strategies that include social channels and improve both customer satisfaction and the overall customer experience.

    Read the article

  • Java Embedded @ JavaOne

    - by sasa
    JavaOne 2012 will be held in San Francisco from September 30 to October 4. Alongside it, a new event dedicated to embedded Java, Java Embedded @ JavaOne, takes place on October 3 and 4 and covers embedded Java technologies such as Java SE Embedded. Registration for Java Embedded @ JavaOne is $595 through September 7, $795 through September 28, and $995 thereafter; JavaOne attendees can add it for an additional $100.

    Read the article

  • Developer Preview available for the Java Access Bridge is now included in Java SE 7 Update 6

    - by Ragini Prasad
    The Java Access Bridge product is now included with Java SE 7u6; manual installation of the Java Access Bridge will no longer be required, and all Access Bridge files are installed automatically by the JRE and the JDK. The Developer Preview for this feature is now available and can be downloaded from http://jdk7.java.net/download.html.
    By default, the Java Access Bridge is disabled. In order to use it, enable it using the steps below and test your applications for accessibility.
    Enable the Java Access Bridge using one of these mechanisms:
    1. The Ease of Access control panel. On Windows Vista and later the Java Access Bridge can be enabled from the Ease of Access Center. Select "Use the computer without a display"; in the "Other programs installed" section, select the "Enable Java Access Bridge" check box and apply.
    2. Or run the following command in the Command window: %JRE_HOME%\bin\jabswitch -enable
    Note: You must restart your Assistive Technology software and your Java application after enabling the bridge.
    Test the Java Access Bridge:
    1. Enable the Java Access Bridge as described above.
    2. Run an Assistive Technology that supports the Java Access Bridge.
    3. Run a Java application and ensure that the Assistive Technology reads the values of your application.
    Disable the Java Access Bridge by running the following command from the Command window: %JRE_HOME%\bin\jabswitch -disable
    Note: The Ease of Access control panel cannot be used to disable the bridge; you must use jabswitch from the Command window to disable the Java Access Bridge.
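
    To have something concrete to point an Assistive Technology at when testing the bridge, a small Swing window with explicit accessible names can serve as a test application. This is only a sketch I am adding for illustration; the class name, labels and accessible names are made up and not part of the original post:

        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JTextField;
        import javax.swing.SwingUtilities;

        public class AccessBridgeDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("Access Bridge Demo");
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setLayout(new java.awt.FlowLayout());

                        JLabel label = new JLabel("Name:");
                        JTextField field = new JTextField(15);
                        JButton button = new JButton("Submit");

                        // The accessible name and description are what the
                        // Assistive Technology should read back once the bridge
                        // is enabled with jabswitch.
                        field.getAccessibleContext().setAccessibleName("Name input field");
                        field.getAccessibleContext().setAccessibleDescription("Enter your name here");
                        label.setLabelFor(field);

                        frame.add(label);
                        frame.add(field);
                        frame.add(button);
                        frame.pack();
                        frame.setVisible(true);
                    }
                });
            }
        }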

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also several client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate over a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients that are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata, or simply for data loads, Administration Services is used for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server - Integration Services uses CAPI, Essbase Studio uses JAPI - but both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are based on either ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As with all other components discussed previously, when setting up firewall rules be aware that all services may need to talk to the external authentication sources; this is not only needed for Shared Services.

    Read the article

  • How to Tell a Hardware Problem From a Software Problem

    - by Chris Hoffman
    Your computer seems to be malfunctioning — it’s slow, programs are crashing or Windows may be blue-screening. Is your computer’s hardware failing, or does it have a software problem that you can fix on your own? This can actually be a bit tricky to figure out. Hardware problems and software problems can lead to the same symptoms — for example, frequent blue screens of death may be caused by either software or hardware problems. Computer is Slow We’ve all heard the stories — someone’s computer slows down over time because they install too much software that runs at startup or it becomes infected with malware. The person concludes that their computer is slowing down because it’s old, so they replace it. But they’re wrong. If a computer is slowing down, it has a software problem that can be fixed. Hardware problems shouldn’t cause your computer to slow down. There are some rare exceptions to this — perhaps your CPU is overheating and it’s downclocking itself, running slower to stay cooler — but most slowness is caused by software issues. Blue Screens Modern versions of Windows are much more stable than older versions of Windows. When used with reliable hardware with well-programmed drivers, a typical Windows computer shouldn’t blue-screen at all. If you are encountering frequent blue screens of death, there’s a good chance your computer’s hardware is failing. Blue screens could also be caused by badly programmed hardware drivers, however. If you just installed or upgraded hardware drivers and blue screens start, try uninstalling the drivers or using system restore — there may be something wrong with the drivers. If you haven’t done anything with your drivers recently and blue screens start, there’s a very good chance you have a hardware problem. Computer Won’t Boot If your computer won’t boot, you could have either a software problem or a hardware problem. Is Windows attempting to boot and failing part-way through the boot process, or does the computer no longer recognize its hard drive or not power on at all? Consult our guide to troubleshooting boot problems for more information. When Hardware Starts to Fail… Here are some common components that can fail and the problems their failures may cause: Hard Drive: If your hard drive starts failing, files on your hard drive may become corrupted. You may see long delays when you attempt to access files or save to the hard drive. Windows may stop booting entirely. CPU: A failing CPU may result in your computer not booting at all. If the CPU is overheating, your computer may blue-screen when it’s under load — for example, when you’re playing a demanding game or encoding video. RAM: Applications write data to your RAM and use it for short-term storage. If your RAM starts failing, an application may write data to part of the RAM, then later read it back and get an incorrect value. This can result in application crashes, blue screens, and file corruption. Graphics Card: Graphics card problems may result in graphical errors while rendering 3D content or even just while displaying your desktop. If the graphics card is overheating, it may crash your graphics driver or cause your computer to freeze while under load — for example, when playing demanding 3D games. Fans: If any of the fans fail in your computer, components may overheat and you may see the above CPU or graphics card problems. Your computer may also shut itself down abruptly so it doesn’t overheat any further and damage itself. 
Motherboard: Motherboard problems can be extremely tough to diagnose. You may see occasional blue screens or similar problems. Power Supply: A malfunctioning power supply is also tough to diagnose — it may deliver too much power to a component, damaging it and causing it to malfunction. If the power supply dies completely, your computer won’t power on and nothing will happen when you press the power button. Other common problems — for example, a computer slowing down — are likely to be software problems. It’s also possible that software problems can cause many of the above symptoms — malware that hooks deep into the Windows kernel can cause your computer to blue-screen, for example. The Only Way to Know For Sure We’ve tried to give you some idea of the difference between common software problems and hardware problems with the above examples. But it’s often tough to know for sure, and troubleshooting is usually a trial-and-error process. This is especially true if you have an intermittent problem, such as your computer blue-screening a few times a week. You can try scanning your computer for malware and running System Restore to restore your computer’s system software back to its previous working state, but these aren’t  guaranteed ways to fix software problems. The best way to determine whether the problem you have is a software or hardware one is to bite the bullet and restore your computer’s software back to its default state. That means reinstalling Windows or using the Refresh or reset feature on Windows 8. See whether the problem still persists after you restore its operating system to its default state. If you still see the same problem – for example, if your computer is blue-screening and continues to blue-screen after reinstalling Windows — you know you have a hardware problem and need to have your computer fixed or replaced. If the computer crashes or freezes while reinstalling Windows, you definitely have a hardware problem. Even this isn’t a completely perfect method — for example, you may reinstall Windows and install the same hardware drivers afterwards. If the hardware drivers are badly programmed, the blue-screens may continue. Blue screens of death aren’t as common on Windows these days — if you’re encountering them frequently, you likely have a hardware problem. Most blue screens you encounter will likely be caused by hardware issues. On the other hand, other common complaints like “my computer has slowed down” are easily fixable software problems. When in doubt, back up your files and reinstall Windows. Image Credit: Anders Sandberg on Flickr, comedy_nose on Flickr     

    Read the article

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to and read to keep informed, so I thought I would post an update. First, there is an entirely new network launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital and now the iPad. If you want to keep ahead (and really get motivated), I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful, and I like listening to the Android show so that I can see how it's progressing, because while I love my iPhone/iPad, it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if it becomes as nice an experience, but so far my take on Android devices is this: ten years ago I would have jumped all over them because of their hackability, but now I'm in a phase where I just want these devices to work, and since most of my creation is in non-programming areas, I find the i* experience better. Second, in terms of general entertaining tech news, I'm a big fan of This Week in Tech. Finally, for a non-geek but very informative show, The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.
    Customer events available in the base product include:
    - Cut for Non-payment (CNP)
    - Disconnect Warning (DIWA)
    - Reconnect for Payment (REPY)
    - Reread (RERD)
    - Stop Service (STOP)
    - Start Service (STRT)
    - Start/Stop (STSP)
    Note the field values/codes defined for each event.
    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG, and values from the Look Up are used on the Field Activity Type Profile Template page.
    So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.
    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process, which would have generated a reconnection field activity and reconnected the service.
    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.
    So what exactly was happening? Now we come to the actual question of what the impact is of having a user-defined customer event. System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove any customer event entry from the Look Up. User-defined customer events, on the other hand, are not recognized by the system anywhere else except in the severance process, as described above.
    There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):
    This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.
    Notice the wording about a completed 'disconnect' event. This algorithm implicitly checks for Field Activities with completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Looking back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.
    To conclude, if you introduce new customer events that extend or simulate the base customer events included in the base product, ensure that there is no other impact, direct or indirect, on other business functions that the application has to offer.

    Read the article

  • Project Coin: JSR 334 has a Proposed Final Draft

    - by darcy
    Reaching nearly the last phase of the JCP process, JSR 334 now has a proposed final draft. There have been only a few refinements to the specification since public review:
    - Incorporated the language changes into the JLS proper.
    - Forbade combining diamond and explicit type arguments to a generic constructor.
    - Removed the unusual protocol around Throwable.addSuppressed(null) and added a new constructor to Throwable that allows suppression to be disabled.
    - Added disclaimers that OutOfMemoryError, NullPointerException, and ArithmeticException objects created by the JVM may have suppression disabled.
    - Added thread safety requirements to Throwable.addSuppressed and Throwable.getSuppressed.
    Next up is the final approval ballot; almost there!
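
    To see the suppression machinery the draft refers to in action, here is a small sketch of my own (not taken from the JSR text) that uses try-with-resources and then reads the suppressed exceptions back via Throwable.getSuppressed():

        public class SuppressedDemo {

            // A resource whose close() always fails, so that a failure in the
            // try block causes the close() exception to be recorded as suppressed.
            static class FailingResource implements AutoCloseable {
                @Override
                public void close() {
                    throw new IllegalStateException("close failed");
                }
            }

            public static void main(String[] args) {
                try {
                    try (FailingResource r = new FailingResource()) {
                        throw new RuntimeException("body failed");
                    }
                } catch (RuntimeException primary) {
                    // The primary exception is the one from the try block; the
                    // close() failure is attached via addSuppressed() by the compiler.
                    System.out.println("primary: " + primary.getMessage());
                    for (Throwable s : primary.getSuppressed()) {
                        System.out.println("suppressed: " + s.getMessage());
                    }
                }
            }
        }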

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions along the lines of "why do I not see a parallel plan while Auto DOP is on (I think)...?" It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.
    Parameter Settings to Look At
    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought; see here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting, as I've seen some mixed results with that...
    In 11.2.0.2 you should also check that IO Calibrate has been run. It is best to simply do:
    SQL> select * from V$IO_CALIBRATION_STATUS;
    STATUS        CALIBRATION_TIME
    ------------- ----------------------------------------------------------------
    READY         04-JAN-11 10.04.13.104 AM
    You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:
    Note
    -----
       - automatic DOP: skipped because of IO calibrate statistics are missing
    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:
    ERROR at line 1:
    ORA-56708: Could not find any datafiles with asynchronous i/o capability
    ORA-06512: at "SYS.DBMS_RMIN", line 463
    ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
    ORA-06512: at line 7
    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.
    Explain Plan Explanation
    To see the notes that are shown and explained here (and the little snippet above) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.:
    explain plan for <statement>
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    Auto DOP
    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this:
    Automatic degree of parallelism is disabled: <reason>
    These are the reason codes:
    - Parameter - parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in
    - Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    - Outline - A SQL outline of an older version (before 11.2) is used
    - SQL property restriction - The statement type does not allow for parallel processing
    - Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO
    - Recursive SQL statement - The statement type does not allow for parallel processing
    - pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    - Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree; in most cases all objects have a specific degree, in which case Auto DOP will honor that degree
    Parallel Degree Limited
    When Auto DOP does kick in, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:
    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point to be more specific). It currently looks like this:
    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1
    Dynamic Sampling
    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we would throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:
    Note
    -----
    - dynamic sampling used for this statement (level=5)
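
    If you are checking plans from an application rather than from SQL*Plus, the same plan output, including the note section discussed above, can be fetched over JDBC. Below is a minimal sketch I am adding for illustration; the connection URL, credentials and the sample query are placeholders, not part of the original post:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ShowPlanNotes {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details - adjust the URL, user and password.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
                     Statement stmt = conn.createStatement()) {

                    // Explain the statement of interest (placeholder query).
                    stmt.execute("EXPLAIN PLAN FOR SELECT * FROM sales");

                    // DBMS_XPLAN.DISPLAY returns the plan text, including the Note
                    // section where Auto DOP reports the computed degree or why it
                    // was skipped.
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }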

    Read the article

  • What more a Business Service can do?

    - by Rajesh Sharma
    Business services can be accessed from outside the application via an XAI inbound service, or from within the application via scripting, Java, or info zones. Below is an example of what you can do with a business service wrapping an info zone.
    Generally, a business service is specific to a page service program which references a maintenance object; that means one business service = one service program = one maintenance object. There have been quite a few threads in the forum around this topic where the business service is misconstrued to perform services only on a single object, e.g. only for CILCSVAP - SA Page Maintenance, CILCPRMP - Premise Page Maintenance, CILCACCP - Account Page Maintenance, etc.
    So what do you do when you want to retrieve some "non-persistent" field or information associated with some object/entity? Consider a few business requirements:
    - Retrieve all the field activities associated with an account.
    - Retrieve the last bill date for an account.
    - Retrieve the next bill date for an account.
    It can be as simple as described below. For this post we'll use the first scenario - retrieve all the field activities associated with an account. To achieve this we'll have to do the following.
    Step 1: Define an info zone
    (A basic zone of type F1-DE-SINGLE - Info Data Explorer - Single SQL has been used; you can use F1-DE - Info Data Explorer - Multiple SQLs for more complex scenarios.)
    Parameter: Value To Enter
    - User Filter 1: F1
    - Initial Display Columns: C1 C2 C3
    - SQL Condition: F1
    - SQL Statement:
        SELECT FA_ID, FA_STATUS_FLG, CRE_DTTM
        FROM CI_FA
        WHERE SP_ID IN
            (SELECT SP_ID
             FROM CI_SA_SP
             WHERE SA_ID IN
                 (SELECT SA_ID
                  FROM CI_SA
                  WHERE ACCT_ID = :F1))
    - Column 1: source=SQLCOL sqlcol=FA_ID
    - Column 2: source=SQLCOL sqlcol=FA_STATUS_FLG
    - Column 3: type=TIME source=SQLCOL sqlcol=CRE_DTTM order=DESC
    Note: The zone code specified was 'CM_ACCTFA'.
    Step 2: Define a business service
    Create a business service linked to 'Service Name' FWLZDEXP - Data Explorer. The schema will look like this:
    <schema>
        <zoneCd mapField="ZONE_CD" default="CM_ACCTFA"/>
        <accountId mapField="F1_VALUE"/>
        <rowCount mapField="ROW_CNT"/>
        <result type="group">
            <selectList type="list" mapList="DE">
                <faId mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="1"/>
                    </row>
                </faId>
                <status mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="2"/>
                    </row>
                </status>
                <createdDateTime mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="3"/>
                    </row>
                </createdDateTime>
            </selectList>
        </result>
    </schema>
    What's next? As mentioned above, you can invoke this business service from an outside application via an XAI inbound service, or call it from within a script.
    Step 3: Create an XAI inbound service for the business service created above
    Step 4: Test the inbound service
    Go to XAI Submission and test the newly created service:
    <RXS_AccountFA>
        <accountId>5922116763</accountId>
    </RXS_AccountFA>

    Read the article

  • Merge Records in Session bean by using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records in a Session Bean using the ADF drag & drop feature. Described below is a simple use case that shows exactly how this can be achieved. Here we have a table and a user input field: the table shows EMP records and the input field accepts a Salary. When we drag & drop multiple records onto the input field, the selected records get updated with the new Salary provided.
    Steps:
    1. Suppose we have created a Java EE Web Application with Entities from the Emp table. Create an EJB Session Bean and generate a Data Control for it.
    2. Write a simple method in the session EJB bean and expose it to the local interface:
        public void updateEmprecords(List empList, Object sal) {
            Emp emp = null;
            for (int i = 0; i < empList.size(); i++) {
                emp = em.find(Emp.class, empList.get(i));
                emp.setSal((BigDecimal) sal);
                em.merge(emp);
            }
        }
    3. Create an updateEmpRecords.jspx page in the ViewController project and drop the empFindAll object as an ADF Table.
    4. Define a custom SelectionListener method for the table:
        public void selectionListener(SelectionEvent selectionEvent) {
            // This method gets the Empno of the selected record and stores it in the list object
            UIXTable table = (UIXTable) selectionEvent.getComponent();
            FacesCtrlHierNodeBinding fcr = (FacesCtrlHierNodeBinding) table.getSelectedRowData();
            Number empNo = (Number) fcr.getAttribute("empno");
            this.getSelectedRowsList().add(empNo);
        }
    5. Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}.
    6. Drop an inputText on the same jspx page to accept the Salary. Now we would like to drag records from the above table and drop them on the inputText field. This can be achieved by inserting a dragSource operation inside the table and a dropTarget operation inside the inputText:
        <af:dragSource discriminant="tab"/> <!-- insert this inside the table -->
        <af:inputText label="Enter Salary" id="it13" autoSubmit="true"
                      binding="#{test.deptValue}">
            <af:dropTarget dropListener="#{test.handleTableDrop}">
                <af:dataFlavor flavorClass="org.apache.myfaces.trinidad.model.RowKeySet"
                               discriminant="tab"/>
            </af:dropTarget>
            <af:convertNumber/>
        </af:inputText>
    7. In the above code, when the user drags & drops multiple records on the inputText, the dropListener method gets called. Go to the respective page definition file, create the updateEmprecords method action and the Execute action, and write the dropListener method code:
        public DnDAction handleTableDrop(DropEvent dropEvent) {
            // Below code gets the updateEmprecords method, passes parameters and executes the method
            DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
            RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df);
            if (droppedKeySet != null && droppedKeySet.size() > 0) {
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding updateEmp = bindings.getOperationBinding("updateEmprecords");
                updateEmp.getParamsMap().put("sal",
                    this.getDeptValue().getAttributes().get("value"));
                updateEmp.getParamsMap().put("empList", this.getSelectedRowsList());
                updateEmp.execute();
                // Below code performs the Execute operation to refresh the updated records
                OperationBinding executeBinding = bindings.getOperationBinding("Execute");
                executeBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(dropEvent.getDragComponent());
                this.getSelectedRowsList().clear();
            }
            return DnDAction.NONE;
        }
    8. Run the updateEmpRecords.jspx page and enter a Salary, say 5000. Select multiple records in the table and drop the selected records on the inputText Salary. Note that the salary of all the selected records gets updated to 5000.

    Read the article

  • Java Spotlight Episode 109: Pete Muir on CDI 1.1 @plmuir

    - by Roger Brinkley
    Interview with Pete Muir of Red Hat on CDI 1.1. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.
    Show Notes
    News: Getting started with Java Embedded
    Videos: CDI 1.1 Public Review and Feedback
    Events:
    - Nov 20, JCP Public Meeting (see details below)
    - Nov 20-22, DOAG 2012, Nuremberg, Germany
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil
    - Dec 14-15, IndicThreads, Pune, India
    Feature Interview
    Pete Muir is leading the CDI 1.1 specification and works on the JBoss Developer Framework, a set of tutorials and examples for all JBoss users. Previously, Pete worked on Infinispan, led the Seam and Weld projects, and is a founder of the Arquillian project. Pete has worked on a number of specifications including JSF 2.0, AtInject and Java EE 7, and is a regular speaker at JUGs and conferences such as JavaOne, Devoxx, JAX, JavaBlend, JSFDays, JBoss World, Red Hat Developer Day and JUDCon. Pete is currently employed by Red Hat Inc., working on JBoss open source projects. Before working for Red Hat, Pete used and contributed to Seam whilst working at a UK-based staffing agency as IT Development Manager.

    Read the article

  • A tour of the GlassFish 3.1.2 DCOM support

    - by alexismp
    While we've mentioned the DCOM support in GlassFish 3.1.2 several times before, you'll probably find Byron's DCOM blog entry useful if you're using Windows as a deployment platform for your GlassFish cluster. Byron discusses how DCOM is used to communicate with remote Windows nodes participating in a GlassFish cluster, which Java libraries were used to wrap DCOM, which new asadmin commands were added (in particular validate-dcom), as well as some tips to make this all work in your specific environment. In addition to this blog post, you should consider reading the official product documentation: • Considerations for Using DCOM for Centralized Administration • Setting Up DCOM and Testing the DCOM Set Up

    Read the article

  • A Plea for Doug

    - by user12652314
    Doug was a key leader in the JCP and did all his research on SPARC/Solaris. That is, until we changed the free patch policy supporting academics & research post CIC, and he and many others left in droves, entirely pissed off. Well, we're working on a fix now so that all faculty can set up a server environment, get free patch support, and innovate on our stack, from the OS to virtualization to toolsets, in support of research, academic use and teaching. Hopefully, just maybe, we can start to bring Doug and the others back home as a result.

    Read the article

  • Java ME SDK 3.0.5 is released!

    - by SungmoonCho
    Java ME SDK 3.0.5 went live! For many months we have been working hard to fix bugs from the previous version and add a lot of new features demanded by the Java ME community. You can download the new version from this link. Please see below for more information.
    - NetBeans Integration: All Java ME tools are implemented as NetBeans plugins.
    - Device Manager: Java ME SDK now supports multiple device managers; you can switch between different versions of device managers.
    - LWUIT 1.5 Support: The Resource Editor is available from the Java ME menu to help you design and organize resources for LWUIT applications. For a description of LWUIT 1.5 features, visit the LWUIT download page.
    - Network Monitor: Integrated with the NetBeans profiling tools, the Network Monitor now supports WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets.
    - CPU Profiler: Now uses standard NetBeans profiling facilities to view snapshots. Profiling of VM classes can also be toggled on or off.
    - WURFL Device Database: The database has been updated with more than 1000 new devices.
    - Tracing: New tracing functionality now includes CLDC VM events, and monitors events such as exceptions, class loading, garbage collection, and method invocation.
    - New or updated JSR support: Includes support for JSR 234 (Advanced Multimedia Supplements), JSR 253 (Mobile Telephony API), JSR 257 (Contactless Communication API), JSR 258 (Mobile User Interface Customization API), and JSR 293 (XML API for Java ME).

    Read the article

  • eFX on NetBeans Platform at Silicon Valley JavaFX User Group

    - by Geertjan
    Below you can watch (in addition to seeing Steve Chin and Ben Evans) Sven Reimers presenting eFX, a JavaFX application framework on the NetBeans Platform, yesterday at the Silicon Valley JavaFX User Group. While watching, you'll learn quite a few things about the NetBeans Platform, at the same time. In the end, you see a VisualVM clone written in JavaFX on the NetBeans Platform. Sven will also talk on this topic at NetBeans Day and during his sessions at JavaOne.

    Read the article

  • Script to set NO_HEARTBEAT Flag

    - by Koppar
    The new plugin provides some flags to help debug the browser-side VM:
    JPI_PLUGIN2_DEBUG=1
    JPI_PLUGIN2_VERBOSE=1
    The above two provide tracing information.
    JPI_PLUGIN2_NO_HEARTBEAT=1
    This disables the sending of heartbeat messages between the browser-side VM and the client JVM instance(s), which lets the client JVM stay independent of the browser-side VM. These flags are to be set as system environment variables. Many times we need to set them using scripts; here is a small script to achieve that:
    -----------set_jpi_flags.vbs------------------------------------------
    Set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshEnv = WshShell.Environment("USER")
    WshEnv("JPI_PLUGIN2_NO_HEARTBEAT") = "1"
    WshEnv("JPI_PLUGIN2_DEBUG") = "1"
    WshEnv("JPI_PLUGIN2_VERBOSE") = "1"
    ' Display the value of the NO_HEARTBEAT variable
    WScript.Echo WshEnv("JPI_PLUGIN2_NO_HEARTBEAT")
    ---------------------------------------------------------------------------
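
    Since the flags end up as environment variables visible to a JVM, a quick way to confirm they have taken effect is to read them back with System.getenv(). This is just a small sketch I am adding for illustration (the class name is made up, not part of the original post):

        public class CheckPluginFlags {
            public static void main(String[] args) {
                // Read back the plugin debug flags; null means the variable is not set.
                String[] flags = {
                    "JPI_PLUGIN2_NO_HEARTBEAT",
                    "JPI_PLUGIN2_DEBUG",
                    "JPI_PLUGIN2_VERBOSE"
                };
                for (String flag : flags) {
                    System.out.println(flag + " = " + System.getenv(flag));
                }
            }
        }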

    Read the article

  • Recent JSR Updates-JSR 356, 357, 355, 349, 236

    - by heathervc
    JSR 357, Social Media API, was not approved by the SE/EE EC to continue development in the JCP program. JSR 356, Java API for WebSocket, was approved by the SE/EE EC to continue development in the JCP program. JSR 355,  JCP Executive Committee Merge, published an Early Draft Review; this review closes 27 April.  You can read more about JSR 355 here. JSR 349, Bean Validation 1.1, published an Early Draft Review; this review closes 27 April. JSR 236,  Concurrency Utilities for Java EE, has updated the JSR page and moved to JCP version 2.8.

    Read the article

  • Java DB talks at JavaOne 2012

    - by kah
    It's soon time for JavaOne again in San Francisco, and Java DB is represented this year too. Dag Wanvik will give an introductory talk on Java DB on Tuesday, October 2 at 10:00: CON5141 - Java DB in JDK 7: A Free, Feature-Rich, Embeddable SQL Database Rick Hillegas and Noel Poore will discuss how to use Java DB on embedded devices in their talk on Thursday, October 4 at 14:00: CON6684 - Data Storage for Embedded Middleware Mark your calendars! :)

    Read the article

  • JMS : Specifying Message Paging Directory on Weblogic server.

    - by adejuanc
    There are two ways to configure or modify the paging directory; here are examples of both.
    1. Via the config.xml file, using the <paging-directory> element of the JMS server:
    <jms-server>
      <name>JMSServerMS1</name>
      <target>MS1</target>
      <persistent-store xsi:nil="true"></persistent-store>
      <hosting-temporary-destinations>true</hosting-temporary-destinations>
      <temporary-template-resource xsi:nil="true"></temporary-template-resource>
      <temporary-template-name xsi:nil="true"></temporary-template-name>
      <message-buffer-size>-1</message-buffer-size>
      <paging-directory>C:\temp</paging-directory>
      <paging-file-locking-enabled>true</paging-file-locking-enabled>
      <expiration-scan-interval>30</expiration-scan-interval>
    </jms-server>
    2. Via WLST (the WebLogic Scripting Tool):
    startEdit()
    cd('/Deployments/JMSServerMS1')
    cmo.setPagingDirectory('C:\\temp')
    activate()

    Read the article

  • Friday Fun: Heru

    - by Asian Angel
    This week’s game will test your ability to aim accurately, work quickly, and keep focused all at the same time. Just eliminate the entire chain of marbles before it reaches the end of the track, but is it as easy to do as it sounds? There is only one way to find out!

    Read the article
