Search Results

Search found 1918 results on 77 pages for 'partner enablement'.

  • OVERVIEW ORACLE SALES PLAYS

    - by michaela.seika(at)oracle.com
    As an EMEA VAD partner, please update your knowledge of Oracle's hardware and software solutions. Join us at one of the following web conferences and send us a short email to register: Tuesday, 15 February 2011 - Sales Play 1: Overview of the High Impact Sales Plays - SALES; Thursday, 17 February 2011 - Sales Play 2: High Impact Sales Plays - TECHNICAL. Further information: Database Application Acceleration with Flash Storage and Oracle's Sun Hardware Solutions.

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball. Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction?
    Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware. Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go.
    AIX Checklist for stable OBIEE deployment | Dick Dunbar. "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators."
    Demo for OPN: Coherence Management with EM Cloud Control 12c. Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in Enterprise Manager 12c release which includes new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog.
    The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann. "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work."
    As retail dies, whom will be the winners? | Peter Evans-Greenwood. "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependent on or limited to what they can find locally."
    Thought for the Day: "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg. Source: SoftwareQuotes.com

    Read the article

  • Oracle HCM Cloud Customer Q&A with WAXIE Sanitary Supply

    - by HCM-Oracle
    At this year’s Oracle HCM User Group (OHUG) Global conference, we had the opportunity to sit down with Oracle HCM Cloud customers for a short Q&A. We got to hear about what brought them to the OHUG conference, some of the benefits they are receiving from their Oracle HCM Cloud solutions, and advice they would give other businesses looking to move to the cloud.  Below is a discussion we had with Melissa Halverson, Benefits & HRIS Manager at WAXIE Sanitary Supply.  Q: What made you attend the OHUG Global Conference this year? Halverson: The biggest reason is networking. It allows me to connect with others in the Oracle HCM Cloud community. I was able to speak at the HCM Cloud SIG (Special Interest Group) on the first day and share my experiences as well as hear the experiences of other Oracle HCM Cloud users. It also allows me to get face-time with key people within Oracle.  Q: What Oracle HCM solutions are you currently using? Halverson: Global HR, Benefits, Workforce Compensation, and Performance Management. Q: Do you plan to invest further in Oracle HCM? Halverson: Yes, we are interested in Time and Labor. We would also like to get Recruiting at some point in the future. Q: What would you say is the most significant benefit you’ve realized from your use of Oracle HCM solutions? Halverson: First and foremost would be process improvement. Before we had Oracle HCM Cloud we relied on a paper process where something as simple as an employee address change required changes to be made manually in 9 different systems. Obviously that was extremely inefficient, but also increased the likelihood of errors being made.  The other huge benefit we have seen was in making information visible to the people that need it. Prior to implementing Oracle HCM Cloud, it was very difficult for anyone to access and make use of the information in our systems. Now, we can provide this information to those who need it to make better decisions.  Q: What advice would you give an organization looking to move their HR systems to the cloud? Halverson: One thing I think many organizations don't spend enough time doing is thoroughly vetting their implementation partner. I believe you should be vetting your implementation partner as much as you did the system itself. Also, manpower is so important. Involve as large a team as possible because you don’t want to get stuck having too few bodies to help out. And set realistic time frames. Biting off more than you can chew will inevitably result in failure. Having a phased approach is always best rather than trying to do everything at once. Thanks for the tips Melissa. Enjoy the rest of the conference!

    Read the article

  • New Virtual Compute Appliance Videos

    - by Cinzia Mascanzoni
    Watch the latest Virtual Compute Appliance videos to aid your conversations with partners and customers! The Virtual Compute Appliance Flash demo shows your customers and partners the business benefits. There is also a VCA product demo, a Tier1 customer testimonial video on using Oracle's Virtual Compute Appliance to build a private cloud virtualization platform to host its customers' Oracle Enterprise and Windows applications, and a Centroid partner testimonial video.

    Read the article

  • Idera Releases SQL Diagnostic Manager v7.1

    Idera recently beefed up its portfolio of SQL Server management and administration tools with the release of SQL diagnostic manager 7.1. Idera has enhanced SQL diagnostic manager's already impressive set of features in version 7.1 with new additions that should appeal to database administrators. The release is another example of Idera's dedication to growing its portfolio of SQL Server solutions that has allowed the Microsoft Managed Partner to expand its client base to over 10,000 customers worldwide. The highlights of SQL diagnostic manager 7.1's new features begin with an impressive Serve...

    Read the article

  • Let Oracle University help you become a Certified Implementation Specialist

    - by user12875760
    Oracle recognizes partner competency skills and commitment through the new Oracle PartnerNetwork Specialized program and offers a variety of accreditations that count towards OPN Certification. Be Recognized! Validate your knowledge and get the credit you deserve by passing the Specialist exams offered across Oracle's portfolio of products and solutions. Pass the exam(s) and get your OPN Specialist Certificate! Read more by clicking here

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx
    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.
    Ping Pong
    Ping Pong is a methodology commonly used in pair programming. One developer will write a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer.
    The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).
    Test Blazer
    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task.
    The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests.
    This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a different project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution.
    This methodology is also a good test for the tests. Can another developer figure out what the system should do just by reading the tests? That question will be answered as the production coder works their way through the test blazer's tests.
    Test Driven Development (TDD)
    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing test or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle.
    The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud.
    TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. a refactoring), the design of the system emerges.
    For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses the pros and cons of TDD and when it is best used.
    Acceptance Test Driven Development (ATDD)
    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer.
    ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD.
    ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.
    Behaviour Driven Development (BDD)
    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests.
    Typically, different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English-like format. Other tools, such as MSpec or FitNesse, also strive for highly readable behaviour-driven test suites.
    Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications".
    This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.
    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
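    To make the red-green-refactor cycle concrete, here is a minimal sketch using Python's unittest module (the post itself is language-agnostic; the discount rule and every name below are invented purely for illustration):

    import unittest

    # Red: the failing test is written first and specifies the behaviour we want.
    class PriceCalculatorTests(unittest.TestCase):
        def test_ten_percent_discount_above_100(self):
            self.assertEqual(calculate_total(200), 180)

        def test_no_discount_at_or_below_100(self):
            self.assertEqual(calculate_total(80), 80)

    # Green: the simplest production code that makes both tests pass.
    def calculate_total(amount):
        if amount > 100:
            return amount * 0.9
        return amount

    # Refactor: with the tests green, rename, extract constants, or split functions
    # freely, re-running the suite after every small change.

    if __name__ == "__main__":
        unittest.main()

    The same three-step rhythm applies whatever the language or test framework; only the size of each step changes.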

    Read the article

  • ITT Corporation Goes Live on Oracle Sales and Marketing Cloud Service (Fusion CRM)!

    - by Richard Lefebvre
    Back in Q2 of FY12, a division of ITT invited Oracle to demo our CRM On Demand product while the group was considering Salesforce.com. Chris Porter, our Oracle Direct sales representative, learned the players and their needs and began to develop relationships. We lost that deal, but not Chris's persistence. A few months passed and Chris called on the ITT Shape Cutting Division's Director of Sales to see how things were going. Chris was told that the plan was for the division to buy more Salesforce.com. In fact, he informed Chris that he had just sent his team to Salesforce.com training. During the conversation, Chris mentioned that our new Oracle Sales Cloud Service could run with Outlook. This caused the ITT Sales Director to reconsider the plan to move forward with our competition. Oracle was invited back to demo the Oracle Sales and Marketing Cloud Service (Fusion CRM) and after it concluded, the Director stated, "That just blew your competition away." The deal closed on June 5th, 2012. Our Oracle Platinum Partner, Intelenex, began the implementation with ITT on July 30th. We are happy to report that on September 18th, the ITT Shape Cutting Division successfully went live on Oracle Sales and Marketing Cloud Service (Fusion CRM).
    About: ITT is a diversified leading manufacturer of highly engineered critical components and customized technology solutions for growing industrial end-markets in energy infrastructure, electronics, aerospace and transportation. Building on its heritage of innovation, ITT partners with its customers to deliver enduring solutions to the key industries that underpin our modern way of life. Founded in 1920, ITT is headquartered in White Plains, NY, with 8,500 employees in more than 30 countries and sales in more than 125 countries. The ITT Shape Cutting Division provides plasma lasers and controls with the Burny, Kaliburn, and AMC brands.
    Oracle Fusion Products: Oracle Sales and Marketing Cloud Service (Fusion CRM), including Fusion CRM Base, Fusion Sales Cloud, Fusion Mobile and Desktop Integration, and Automated Forecasting.
    Adoption Model: SaaS
    Partner: Intelenex
    Business Drivers: The ITT Shape Cutting Division wanted to better enable its sales force with email and mobile CRM capabilities, simplify and automate its complex sales processes, and centrally manage and maintain customer contact information.
    Why We Won: ITT was impressed with the feature-rich capabilities of Oracle Sales and Marketing Cloud Service (Fusion CRM), including sales performance management and integration. The company also liked the product's flexibility and scalability for future growth.
    Expected Benefits: Streamlined, accurate forecasting; increased customer manageability; improved sales performance; better visibility into customer information.

    Read the article

  • Lifecycle Technology Delivers AutoVue Visualization Integration for SAP

    Lifecycle Technology is an Oracle development partner and has built a Connector for Oracle's AutoVue visualization solution and SAP. Their area of expertise lies in integrating AutoVue visualization and printing solutions with SAP business processes within Asset Lifecycle Management, Product Lifecycle Management, and Document Management Systems. Lifecycle visually enables a variety of SAP workflows and processes in manufacturing, plant maintenance, and production. Their solutions allow SAP enterprise customers to view technical or engineering documents in the appropriate business context and/or print them as required in their workflows, improving productivity and decision making.

    Read the article

  • How do I remove this duplicate sources.list entry?

    - by blade19899
    Sometimes, when I run sudo apt-get update or install an app with sudo apt-get install, I get the error below at the bottom of the gnome-terminal output, and I can't remove it. I searched for it in software-sources and in y-ppa-manager, and I am not sure I want to mess with the sources list since I am a bit of a noob.
    Duplicate sources.list entry http://archive.canonical.com/ubuntu/ oneiric/partner amd64 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_oneiric_partner_binary-amd64_Packages)
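    A minimal, read-only sketch (assuming the standard Ubuntu /etc/apt layout) that reports which files contain the same source line, so you can see which duplicate to comment out or delete by hand; the script is illustrative, not a supported tool, and never modifies anything:

    #!/usr/bin/env python3
    # Sketch: report duplicate apt source lines and the files they live in.
    import glob

    files = ["/etc/apt/sources.list"] + sorted(glob.glob("/etc/apt/sources.list.d/*.list"))
    seen = {}  # normalized source line -> file where it first appeared

    for path in files:
        try:
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    entry = " ".join(line.split())  # collapse whitespace
                    if not entry or entry.startswith("#"):
                        continue
                    if entry in seen:
                        print(f"Duplicate: {entry}")
                        print(f"  first seen in {seen[entry]}, again in {path} (line {lineno})")
                    else:
                        seen[entry] = path
        except FileNotFoundError:
            pass

    Once you know which file repeats the archive.canonical.com partner line, commenting out one copy (for example with sudoedit) and re-running sudo apt-get update should clear the warning.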

    Read the article

  • eSTEP Newsletter for October 2013 now available

    - by uwes
    Dear Partners,
    We would like to let you know that the October '13 issue of our Newsletter is now available. The issue contains information on the following topics: Oracle Open World Summary; Oracle Cloud; Oracle Engineered Systems; Oracle Database and Middleware; Oracle Applications and Software as a Service; Oracle Industries; Oracle Partners and the "Internet of Things"; JavaOne News; MySQL News; Corporate News; Create Your HR Strategic Vision at Oracle HCM World; Oracle Database Protection Redefined - A Preview: Oracle Database Backup Logging Recovery Appliance; Oracle closed Tekelec acquisition; Congratulations to ORACLE TEAM USA!
    Tech section: ARC M6; Oracle's SPARC M6; Oracle SuperCluster M6-32 - Oracle's Most Scalable Engineered System; Oracle Multitenant on SPARC Servers and Oracle Solaris; Oracle Database 12c Enterprise Edition: Plug into the Cloud; Oracle In-Memory Database Cache; Oracle Virtual Compute Appliance; New Benchmark Results published (Sept. 2013); Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today
    Tech blog: Announcing New Sun Storage 2500-M2 Drives; SPARC Product Line Update; ZFS RAID Calculator v6; What ships with ODA X3-2?; Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris; New release of Sun Rack II capacity calculator available; Announcing: Oracle Solaris Cluster Product Bulletin, September 2013
    Learning & events: Planned TechCasts; Quarterly Partner Update; Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates; Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence; Learn from OOW 2013 what is going on in Virtualization; How to Implement Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview; Multi-Factor Authentication in Oracle WebLogic - Using multi-factor authentication to protect web applications deployed on Oracle WebLogic; If Virtualization Is Free, It Can't Be Any Good - Right? Looking beyond System/HW; SOA and User Interfaces - Overcoming the challenges to developing user interfaces in a service oriented...
    References: Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing; Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability; Home Credit and Finance Bank Accelerates Getting New Banking Products to Market
    Extra: A Conversation with Java Champion Johan Vos
    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. Link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • How a Portal Factory Simplifies Development

    Does your organization have a variety of ways to develop and maintain customer, partner, and employee websites? Perhaps you should consider how a portal development factory can simplify development through an efficient and consistent set of people, processes, and platforms.

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vice versa for Net1 on controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.
    I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.
    Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box?
    So here is my current network setup. I have my 4 physical interfaces set up, each with an IP address. If I ping them, no problems. So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right? Let's change that.
    Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for its datalink.
    My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2. I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 does not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of.
    Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that.
    So let's try pinging 240 now. Of course, it works great. So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then set up three shares, using ports 251, 240, and 241. Remember that IPs 251 and 240 both use the same physical port of igb2, and IP 241 uses port igb3.
    Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports. Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart.
    VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or InfiniBand ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for, with no more waste. Very, very cool.
    One small "bug" I found when doing this. It's really not a bug; it was designed to do this when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this.
    Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3.
    The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem. That's it for now. Have fun with VNICs.

    Read the article

  • Google adsense - providing access (via an additional account?) to a third party

    - by Homunculus Reticulli
    I am working with a partner who will be handling the marketing side of things for one of my websites. He has informed me that he will require access to my adsense account. I need to create an additional account for him, so that he can access and manage Google Adwords/units etc, using his own login credentials. However, despite searching Google for a while now, I can't seem to locate any information that pertains to creating additional user accounts. Does anyone know how I may do this?

    Read the article

  • Annotation Processor for Superclass Sensitive Actions

    - by Geertjan
    Someone creating superclass sensitive actions should need to specify only the following things:
    - The condition under which the popup menu item should be available, i.e., the condition under which the action is relevant. For superclass sensitive actions, the condition is the name of a superclass. I.e., if I'm creating an action that should only be invokable if the class implements "org.openide.windows.TopComponent", then that fully qualified name is the condition.
    - The position in the list of Java class popup menus where the new menu item should be found, relative to the existing menu items.
    - The display name.
    - The path to the action folder where the new action is registered in the Central Registry.
    - The code that should be executed when the action is invoked.
    In other words, the code for the enablement (which, in this case, means the visibility of the popup menu item when you right-click on the Java class) should be handled generically, under the hood, and not every time all over again in each action that needs this special kind of enablement. So, here's the usage of my newly created @SuperclassBasedActionAnnotation, where you should note that the DataObject must be in the Lookup, since the action will only be available to be invoked when you right-click on a Java source file (i.e., text/x-java) in an explorer view:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation;
    import org.openide.awt.StatusDisplayer;
    import org.openide.loaders.DataObject;
    import org.openide.util.NbBundle;
    import org.openide.util.Utilities;

    @SuperclassBasedActionAnnotation(
            position=30,
            displayName="#CTL_BrandTopComponentAction",
            path="File",
            type="org.openide.windows.TopComponent")
    @NbBundle.Messages("CTL_BrandTopComponentAction=Brand")
    public class BrandTopComponentAction implements ActionListener {

        private final DataObject context;

        public BrandTopComponentAction() {
            context = Utilities.actionsGlobalContext().lookup(DataObject.class);
        }

        @Override
        public void actionPerformed(ActionEvent ev) {
            String message = context.getPrimaryFile().getPath();
            StatusDisplayer.getDefault().setStatusText(message);
        }

    }

    That implies I've created (in a separate module to where it is used) a new annotation.
    Here's the definition:

    package org.netbeans.sbas.annotations;

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.TYPE)
    public @interface SuperclassBasedActionAnnotation {
        String type();
        String path();
        int position();
        String displayName();
    }

    And here's the processor:

    package org.netbeans.sbas.annotations;

    import java.util.Set;
    import javax.annotation.processing.Processor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.lang.model.util.Elements;
    import org.openide.filesystems.annotations.LayerBuilder.File;
    import org.openide.filesystems.annotations.LayerGeneratingProcessor;
    import org.openide.filesystems.annotations.LayerGenerationException;
    import org.openide.util.lookup.ServiceProvider;

    @ServiceProvider(service = Processor.class)
    @SupportedAnnotationTypes("org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation")
    @SupportedSourceVersion(SourceVersion.RELEASE_6)
    public class SuperclassBasedActionProcessor extends LayerGeneratingProcessor {

        @Override
        protected boolean handleProcess(Set annotations, RoundEnvironment roundEnv) throws LayerGenerationException {
            Elements elements = processingEnv.getElementUtils();
            for (Element e : roundEnv.getElementsAnnotatedWith(SuperclassBasedActionAnnotation.class)) {
                TypeElement clazz = (TypeElement) e;
                SuperclassBasedActionAnnotation mpm = clazz.getAnnotation(SuperclassBasedActionAnnotation.class);
                String teName = elements.getBinaryName(clazz).toString();
                String originalFile = "Actions/" + mpm.path() + "/" + teName.replace('.', '-') + ".instance";
                File actionFile = layer(e).file(originalFile).
                        bundlevalue("displayName", mpm.displayName()).
                        methodvalue("instanceCreate", "org.netbeans.sbas.annotations.SuperclassSensitiveAction", "create").
                        stringvalue("type", mpm.type()).
                        newvalue("delegate", teName);
                actionFile.write();
                File javaPopupFile = layer(e).file(
                        "Loaders/text/x-java/Actions/" + teName.replace('.', '-') + ".shadow").
                        stringvalue("originalFile", originalFile).
                        intvalue("position", mpm.position());
                javaPopupFile.write();
            }
            return true;
        }

    }

    The "SuperclassSensitiveAction" referred to in the code above is unchanged from how I had it in yesterday's blog entry.
    When I build the module containing two action listeners that use my new annotation, the generated layer file looks as follows, which is identical to the layer file entries I hard coded yesterday:

    <folder name="Actions">
        <folder name="File">
            <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance">
                <attr name="displayName" stringvalue="Process Action Listener"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="java.awt.event.ActionListener"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/>
            </file>
            <file name="org-netbeans-sbas-impl-BrandTopComponentAction.instance">
                <attr bundlevalue="org.netbeans.sbas.impl.Bundle#CTL_BrandTopComponentAction" name="displayName"/>
                <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                <attr name="type" stringvalue="org.openide.windows.TopComponent"/>
                <attr name="delegate" newvalue="org.netbeans.sbas.impl.BrandTopComponentAction"/>
            </file>
        </folder>
    </folder>
    <folder name="Loaders">
        <folder name="text">
            <folder name="x-java">
                <folder name="Actions">
                    <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/>
                        <attr intvalue="10" name="position"/>
                    </file>
                    <file name="org-netbeans-sbas-impl-BrandTopComponentAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-BrandTopComponentAction.instance"/>
                        <attr intvalue="30" name="position"/>
                    </file>
                </folder>
            </folder>
        </folder>
    </folder>

    Read the article

  • Prop Up Your Commerce With the Organic SEO Services

    When you are about to appoint a professional company for SEO services, you should always prefer a provider that operates at your business's level and shares comparable competence with your organization. Having a business partner of similar caliber allows the two organizations to work reciprocally with each other, with matching commitment levels and high work ethics.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.
    Symptoms
    The import of a CPA is usually a 2-step process: first creating a soa.zip file with the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:
    ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
    ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true
    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can reach ~30 secs per operation. This could add up to a significant amount of time if there is a need to import hundreds of CPAs into a production system within a limited downtime or maintenance window.
    Remedy
    In situations where there is a large number of entries to be imported, it is best to set up a staging environment and run the import operation for each individual CPA against an empty repository. Since this is done in an empty repository, the time taken for each import should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for the import of all the CPAs should see a dramatic reduction.
    Results
    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners from a staging environment beforehand, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10.
    Summary
    The following diagram summarizes the entire approach and process.
    Acknowledgements
    The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
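    As a rough sketch of the staged approach (not from the original note), a driver script could loop the two documented ant targets over each CPA properties file against an empty staging repository and then take one full export for the production import. The directory layout and the b2bexport target's name and flags below are assumptions to verify against the B2B 11g command-line reference:

    #!/usr/bin/env python3
    # Sketch: bulk-load CPAs into an EMPTY staging B2B repository one at a time,
    # then take a single full export to replay against the production repository.
    import glob
    import subprocess

    ANT = ["ant", "-f", "ant-b2b-util.xml"]

    for props in sorted(glob.glob("cpas/*_cpa.properties")):  # hypothetical layout
        # Step 1 (documented command): turn the CPA properties file into a soa.zip.
        subprocess.run(ANT + ["b2bcpaimport", f"-Dpropfile={props}", "-Dstandard=true"], check=True)
        # Step 2 (documented command): import the generated soa.zip into staging.
        # The soa.zip location is assumed to be the working directory.
        subprocess.run(ANT + ["b2bimport", "-Dlocalfile=true", "-Dexportfile=soa.zip", "-Doverwrite=true"], check=True)

    # Step 3 (assumption): one full export of the staging repository; check the exact
    # b2bexport target name and flags in the B2B 11g documentation before relying on it.
    subprocess.run(ANT + ["b2bexport", "-Dexportfile=all_partners.zip"], check=True)

    The resulting single export file would then be imported once into the production repository during the maintenance window.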

    Read the article

  • Mastering Online Outsourcing

    Two things that have really helped me with my online marketing over the years are having an accountability partner and outsourcing. Another is learning how well a solution to a problem will hold up as the size of the problem increases. Most people think, "I'll develop my business when it is time to develop it." The thing is, developing over time can be the downfall of a business if it is not watched carefully.

    Read the article

  • UPMC Picks Oracle Identity Management

    - by Naresh Persaud
    UPMC, a $10-billion integrated global health enterprise, has selected Oracle as a key technology partner in UPMC's $100-million analytics initiative designed to help "unlock the secrets of human health and disease" by consolidating and analyzing data from 200 separate sources across UPMC's far-flung network. As part of the project, UPMC also selected Oracle Identity Management to secure these interactions and ensure regulatory compliance. Read the complete article here. As healthcare organizations create new online services to provide better care, Identity Management can provide a foundation for collaboration.

    Read the article

  • Upgrading to Oracle Enterprise Performance Management Version 11.1.2

    Oracle Enterprise Performance Management Version 11.1.2 offers many great new features for customers upgrading from previous versions of their Hyperion applications. This webcast discusses the benefits of these new features plus the best way to go about planning and implementing the upgrade. AMOSCA, an Oracle Platinum Partner, has already completed 15 Oracle EPM upgrades to Version 11.1.x with its customers. Noel Gorvett, Managing Director of AMOSCA, shares his experience of these upgrades to help customers currently considering an upgrade make the best decisions when planning and implementing it.

    Read the article
