Search Results


  • How-to tell the ViewCriteria a user chose in an af:query component

    - by frank.nimphius
    The af:query component defines a search form for application users to enter search conditions for a selected View Criteria. A View Criteria is a named where clause that you can create declaratively on an ADF Business Components View Object. A default View Criteria that allows users to search on all attributes exists out of the box and is exposed in the Data Controls panel. To create an ADF Faces search form, expand the View Object node that contains the View Criteria definition in the Data Controls panel. Drag the View Criteria that should be displayed as the default criteria onto the page and choose Query in the opened context menu. One of the options within the Query option is to create an ADF Query Panel with Table, which displays the result set in a table view that can have additional column filters defined. To intercept the user query for modification, or just to know which View Criteria was selected, you override the QueryListener property on the af:query component or the af:table component. Overriding the QueryListener on the table makes sense if the table allows users to further filter the result set using column filters. To override the default QueryListener, copy the existing string referencing the binding layer to the clipboard, then select Edit from the field context menu (press the arrow icon to open it) to select or create a managed bean and method to handle the query event. The code below is from a managed bean with custom query listener handlers defined for the af:query component and the af:table component.
The default listener entry copied to the clipboard was "#{bindings.ImplicitViewCriteriaQuery.processQuery}"

    public void onQueryList(QueryEvent queryEvent) {
      // The generated QueryListener replaced by this method:
      // #{bindings.ImplicitViewCriteriaQuery.processQuery}
      QueryDescriptor qdes = queryEvent.getDescriptor();
      // print or log the selected View Criteria
      System.out.println("NAME " + qdes.getName());
      // call the default query event
      invokeQueryEventMethodExpression(
          "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
    }

    public void onQueryTable(QueryEvent queryEvent) {
      // The generated QueryListener replaced by this method:
      // #{bindings.ImplicitViewCriteriaQuery.processQuery}
      QueryDescriptor qdes = queryEvent.getDescriptor();
      // print or log the selected View Criteria
      System.out.println("NAME " + qdes.getName());
      invokeQueryEventMethodExpression(
          "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
    }

    private void invokeQueryEventMethodExpression(
                     String expression, QueryEvent queryEvent) {
      FacesContext fctx = FacesContext.getCurrentInstance();
      ELContext elctx = fctx.getELContext();
      ExpressionFactory efactory =
          fctx.getApplication().getExpressionFactory();
      MethodExpression me =
          efactory.createMethodExpression(elctx, expression,
                                          Object.class,
                                          new Class[]{QueryEvent.class});
      me.invoke(elctx, new Object[]{queryEvent});
    }

Of course, this code can also be used as a starting point for other query manipulations, and it also works with saved custom criteria. To read more about the af:query component, see: http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12419/tagdoc/af_query.html

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle. It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about ‘being future ready’ - previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release for Oracle’s key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight in particular is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value - and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle’s Data Integration 12c product strategy based on key trends from the opening OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how these requirements set the key themes in our products’ road map. Brad explained why and how fast time-to-value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper - Five New Data Integration Requirements and How to Meet them with Oracle Data Integration - which provides an in-depth look into how Oracle addresses the new trends in the DI market. Following Brad’s session, Nick Wagner provided an in-depth review of Oracle GoldenGate’s latest features and roadmap. Nick discussed how Oracle GoldenGate’s tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate’s development, and there will be more news on that front when the next major release arrives.
    After GoldenGate’s product strategy session, Denis Gray from the PM team presented Oracle Data Integrator’s product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases for which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict management requirements. Another important goal we had this year for the Data Integration track at OpenWorld was hearing from our customers. We ended day 1 on just that, with a wonderful ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation, held in the Yerba Buena Center for the Arts. Selected for their innovative use of Oracle’s Data Integration products, the winners in the Data Integration category are Royal Bank of Scotland and The Yalumba Wine Company. Congratulations! You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation. Royal Bank of Scotland’s Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients’ businesses flowing - whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse - providing a real-time data integration solution that feeds transactional data to their analytics system in minutes, enabling improved decision making with timely, accurate data for their business users. Oracle Data Integrator’s in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. With this solution, changes made in the operational system are available in the analytics system the same day they are deployed, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That’s outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100 liter barrels. Did you know that “Yalumba” is Aboriginal for “all the land around”? The Yalumba Wine Company is growing rapidly, and was in need of introducing a more modern standard to the existing manufacturing processes to meet globalization demands, overall time-to-market, and better operational efficiency objectives of product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, faster data loading and transformation into their data warehouse with rapid incorporation of new data sources, and a solid future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle’s Data Integration portfolio. Terrific! In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. The Tuesday GoldenGate panel included Land O’Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration

    Read the article

  • Frank Buytendijk on Prahalad, Business Best Practices

    - by Bob Rhubart
      In his video on the questionable value of some business best practices, Frank Buytendijk mentions a recent HBR article by business guru C.K. Prahalad. I just learned that Prahalad passed away this past weekend at the age of 68. (Information Week obit) A couple of years ago I had the good fortune to attend Mr. Prahalad’s keynote address at a Gartner event. He had an audience of software architects absolutely mesmerized as he discussed technology’s role in the changing nature of business competition. The often dysfunctional relationship between IT and business has been, and will probably always be, a hot-button issue. But during Prahalad’s keynote, there was a palpable sense that the largely technical audience was having some kind of breakthrough, that they had achieved a new level of understanding about the importance of the relationship between the two camps. Fortunately, Prahalad leaves behind a significant body of work that will remain a valuable resource as business, and the technology that supports it, continues to evolve. Technorati Tags: business best practices,enterprise architecture,prahalad,oracle del.icio.us Tags: business best practices,enterprise architecture,prahalad,oracle

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 23-29, 2013

    - by Bob Rhubart
    2,947 people now follow OTN ArchBeat on Facebook. Here are the Top 10 items shared on that page for June 23-29, 2013. Podcast Show Notes: DevOps, Cloud, and Role Creep After some confusion (my bad) all three CORRECT parts of this podcast are now available. The panelists for this discussion are all Oracle ACE Directors: Ron Batra, Basheer Khan, and Cary Millsap. SOA Suite 11g Developers Cookbook Published | Antony Reynolds "The book focuses on areas that we felt we had neglected in the Developers Guide," says co-author Antony Reynolds. "There is more about Java integration and OSB, both of which we see a lot of questions about when working with customers." Using Oracle TimesTen With Oracle BI Applications (Part 2) | Peter Scott Peter Scott follows up an earlier post with a look at some of the OBIA structures and a discussion of some of the features of TimesTen. Linux-Containers — Part 1: Overview | Lenz Grimmer OTN Garage blogger Lenz Grimmer kicks off a series and expands your mind with deep detail on Linux Containers. Slides from my ODTUG Kscope13 Presentation | Zeeshan Baig Oracle ACE Zeeshan Baig shares the slides from his KScope13 presentation, "Build Your Business Services Using ADF Task Flows." Fun with Enterprise Manager | Rene van Wijk Oracle ACE Rene van Wijk shares some background and some tuning and other tech tips for working with Oracle Enterprise Manager. Using VirtualBox to test drive Windows Blue | The Fat Bloke The Fat Bloke shares a tech tip for those interested in giving Windows Blue a try on VirtualBox. Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture In this three-part series Oracle Fusion Middleware A-Team members Jennifer Briscoe, Clifford Musante, Mikael Ottosson, and Pardha Reddy talk about the origins and mission of the FMW A-Team and about the great technical content you'll find on the recently launched Oracle A-Team blog. Part one is now available. 5 Best Practices - Laying the Foundation for WebCenter Projects | John Brunswick Oracle WebCenter expert John Brunswick shares best practices that "enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration." Oracle Magazine - July/Aug 2013 The digital edition of the July/August edition of Oracle Magazine is now available. This issue includes my architect community column, "The CX Factor," which features insight from community members on why and how CX has become a significant factor in enterprise IT.

    Read the article

  • SQL Server Split() Function

    - by HighAltitudeCoder
    Ever wanted a dbo.Split() function, but not had the time to debug it completely? Let me guess - you are probably working on a stored procedure with 50 or more parameters; two or three of them are parameters of differing types, while the other 47 or so are all of the same type (id1, id2, id3, id4, id5...). Worse, you've found several other similar stored procedures with the ONLY DIFFERENCE being the number of like parameters taped to the end of the parameter list. If this is the situation you find yourself in now, you may be wondering, "Why am I working with three different copies of what is basically the same stored procedure, and why am I having to maintain changes in three different places? Can't I have one stored procedure that accomplishes the job of all three?" My answer to you: YES! Here is the Split() function I've created.

    /******************************************************************************
                                      Split.sql
    *******************************************************************************
     Split a delimited string into sub-components and return them as a table.

     Parameter 1: Input string which is to be split into parts.
     Parameter 2: Delimiter which determines the split points in input string.
     Works with space or spaces as delimiter. Split() is apostrophe-safe.

     SYNTAX:
     SELECT * FROM Split('Dvorak,Debussy,Chopin,Holst', ',')
     SELECT * FROM Split('Denver|Seattle|San Diego|New York', '|')
     SELECT * FROM Split('Denver is the super-awesomest city of them all.', ' ')
    ******************************************************************************/
    USE AdventureWorks
    GO

    IF EXISTS
          (SELECT *
          FROM sysobjects
          WHERE xtype = 'TF'
          AND name = 'Split'
          )
    BEGIN
          DROP FUNCTION Split
    END
    GO

    CREATE FUNCTION Split (
          @InputString      VARCHAR(8000),
          @Delimiter        VARCHAR(50)
    )
    RETURNS @Items TABLE (
          Item              VARCHAR(8000)
    )
    AS
    BEGIN
          IF @Delimiter = ' '
          BEGIN
                SET @Delimiter = ','
                SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
          END

          IF (@Delimiter IS NULL OR @Delimiter = '')
                SET @Delimiter = ','

          --INSERT INTO @Items VALUES (@Delimiter) -- Diagnostic
          --INSERT INTO @Items VALUES (@InputString) -- Diagnostic

          DECLARE @Item       VARCHAR(8000)
          DECLARE @ItemList   VARCHAR(8000)
          DECLARE @DelimIndex INT

          SET @ItemList = @InputString
          SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
          WHILE (@DelimIndex != 0)
          BEGIN
                SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
                INSERT INTO @Items VALUES (@Item)

                -- Set @ItemList = @ItemList minus one less item
                SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
                SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
          END -- End WHILE

          IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
          BEGIN
                SET @Item = @ItemList
                INSERT INTO @Items VALUES (@Item)
          END
          -- No delimiters were encountered in @InputString, so just return @InputString
          ELSE INSERT INTO @Items VALUES (@InputString)

          RETURN
    END -- End Function
    GO

    ---- Set Permissions
    --GRANT SELECT ON Split TO UserRole1
    --GRANT SELECT ON Split TO UserRole2
    --GO

The syntax is basically as follows:

    SELECT <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    AND Table2.Id IN (SELECT * FROM Split(@IdList, ','))

@IdList is a parameter passed into the stored procedure, and the comma (',') is the delimiter you have chosen to split the parameter list on. You can also use it in a HAVING clause, comparing an aggregate against the delimited list:

    SELECT <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    GROUP BY <fields>
    HAVING COUNT(*) >= (SELECT COUNT(*) FROM Split(@IdList, ','))

Similarly, it can be used with other aggregate functions at run time:

    SELECT (SELECT MIN(Item) FROM Split(@IdList, ',')), <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    GROUP BY <fields>

Now that I've (hopefully effectively) explained the benefits of using this function and implementing it in one or more of your database objects, let me warn you of a caveat that you are likely to encounter. You may have a team member who waits until the right moment to ask you a pointed question: "Doesn't this function just do the same thing as using the IN function? Why didn't you just use that instead? In other words, why bother with this function?" What's happening is, one or more team members has failed to understand the reason for implementing this kind of function in the first place. (Note: this is THE MOST IMPORTANT ASPECT OF THIS POST). Allow me to outline a few pros to implementing this function, so you may effectively parry this question. Touché.

1) Code consolidation. You don't have to maintain what is basically the same code and logic, but with varying numbers of the same parameter, in several SQL objects. I'm not going to go into the cons related to using this function, because the aforementioned team member is probably more than adept at pointing these out. Remember, the real positive contribution is that you are decreasing the likelihood that your team fails to update all (x) duplicate copies of what is basically the same stored procedure, and so on... This is the classic downside to duplicate code. It is a virus, and you should kill it. You might be better off rejecting your team member's question, and responding with your own: "Would you rather maintain the same logic in multiple different stored procedures, and hope that the team doesn't forget to always update all of them at the same time?" In his head, he might be thinking "yes, I would like to maintain several different copies of the same stored procedure", although you probably will not get such a direct response.

2) Added flexibility - you can use the Split function elsewhere, and for splitting your data in different ways. Plus, you can use any kind of delimiter you wish. How can you know today the ways in which you might want to examine your data tomorrow? Segue to my next point.

3) Because the function takes a delimiter parameter, you can split the data in any number of ways. This greatly increases the utility of such a function and enables your team to work with the data in a variety of different ways in the future. You can split on a single char, symbol, word, or group of words. You can split on spaces. (The list goes on... test it out.)

Finally, you can dynamically define the behavior of a stored procedure (or other SQL object) at run time, through the use of this function.
Rather than have several objects that accomplish almost the same thing, why not have only one instead?
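To make the consolidation concrete, here is a minimal sketch of what that single stored procedure might look like once Split() is in place. Note that the procedure, table, and column names below are hypothetical, invented for illustration; only the Split() function itself comes from this post.

    -- Hypothetical example: one procedure accepting a CSV list of IDs
    -- instead of dozens of individual @Id parameters.
    CREATE PROCEDURE GetOrdersByCustomerIds
          @IdList VARCHAR(8000)   -- e.g. '2,5,7,11'
    AS
    BEGIN
          SELECT o.*
          FROM Orders o
          -- Split() returns one row per delimited value in @IdList
          WHERE o.CustomerId IN (SELECT CAST(Item AS INT) FROM Split(@IdList, ','))
    END
    GO

    -- Usage: one procedure, any number of IDs
    -- EXEC GetOrdersByCustomerIds @IdList = '2,5,7,11'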

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to. Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference." The Return of Oracle Wikis: Bigger and Better | @oracletechnet The Oracle Wikis are back - this time, with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform. Cloud Migration Lifecycle | Tom Laszewski Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle. Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14 Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited. SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Live Webcast: New Innovations in Oracle Linux Date: Tuesday, November 15, 2011 Time: 9:00 AM PT / Noon ET Speakers: Chris Mason, Elena Zannoni. People in glass futures should throw stones | Nicholas Carr "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation." Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod Jeevak Kasarkod's overview of a new paper from the OpenGroup and the SABSA institute, "which delves into the incorporation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF." Cloud Computing at the Tactical Edge | Grace Lewis - SEI Lewis describes the SEI's work with Cloudlets, "lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets." Simplicity Is Good | James Morle "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability." Mainframe as the cloud? | Tom Laszewski There's nothing new about using the mainframe in the cloud, says Laszewski. Let Devoxx 2011 begin! | The Aquarium The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a weblogic domain consists of a set of servers running independently or in a cluster mode, sharing the distributed resources. And in most environments a weblogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability. These servers can run on the same machine, or be located on different machines. It's a common task to increase a cluster's capacity by adding new machines to the cluster to host new server instances. You can do it by manually installing the weblogic binaries on the new host and using the pack/unpack commands to add a managed server to this new host. But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way - the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works. Here is a picture of my medrec_oradb weblogic domain, which is registered in EMGC. It contains an admin server and a cluster MedRecCluster with a single managed server MS1. Both admin and managed servers are on the same host oel46-vmware; it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure. And here are the application deployments; note that a couple of applications are deployed to the cluster. First of all I have to prepare a new machine that will host a new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 Agent on this new host. You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments->Agent Installation->Install Agent). Either way, when your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets->Hosts subtab. Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process. Select the domain from the list, enter the working directory on the admin server host, and also fill in the weblogic credentials for the administration server console and the OS credentials for the admin server host. Click the Next button. The next step allows you to configure your domain; to add a new managed server to the cluster you should select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host, and enter the configuration details of your managed server. You can also add new machine and node manager details. Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host. The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files. In my experience the working directories should have at least 3 Gb of free space. The last thing you should fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.
You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console. As you can see in the picture below, the procedure consists of many steps, and I'm going to share my experience with the issues that I had at some of them. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.

Check OUI Prerequisites - this step may fail if the target host does not pass the prerequisite checks for a Weblogic Server installation, such as the amount of RAM, Linux packages installed, etc.

Create FMW Clone Archive - this step may fail if you do not have enough free space in the working directory on the administration server host.

Transfer cloning archive to targets - this step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password.

Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.

The most complicated issue I had was on the Run Inventory Collection step. The step failed and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.
Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/upload
FATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

I checked the timezone of my domain target inside the EMGC repository:

    select timezone_region
    from mgmt_targets
    where target_type = 'weblogic_domain'
    and display_name = 'medrec_oradb'

    "TIMEZONE_REGION"
    "America/Los_Angeles"

Then I checked the timezone of my agents, and indeed, they differed:

    select target_name, timezone_region
    from mgmt_targets
    where type_display_name = 'Agent'

    "TARGET_NAME"                 "TIMEZONE_REGION"
    "oel46-vmware:3872"           "America/Los_Angeles"
    "wls1032.imc.fors.ru:3872"    "America/New_York"

So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository.
Here were the steps:

- Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles"
- Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory
- Connected to the EMGC repository as sysman and executed the following PL/SQL block:

    begin
       mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
       commit;
    end;

After that I had to clear the pending uploads on wls1032.imc.fors.ru:

    rm -r $AGENT_HOME/sysman/emd/state/*
    rm -r $AGENT_HOME/sysman/emd/collection/*
    rm -r $AGENT_HOME/sysman/emd/upload/*
    rm $AGENT_HOME/sysman/emd/lastupld.xml
    rm $AGENT_HOME/sysman/emd/agntstmp.txt
    $AGENT_HOME/bin/emctl start agent
    $AGENT_HOME/bin/emctl clearstate agent

The last part of this solution was to resync the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:

EMD upload error: Failed to upload file A0000004.xml: HTTP error.
Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

    ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
    ./emctl stop oms
    ./emctl start oms

After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time, and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my weblogic domain scale-up in the EMGC Console. So, now the weblogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • Silverlight Cream for April 06, 2010 -- #832

    - by Dave Campbell
    In this Issue: Alex van Beek, Gill Cleeren, SilverlightShow, Michael Sync, Rénald Nollet, Charles Petzold, The-Oliver, and Max Paulousky. Shoutouts: Denislav Savkov of SilverlightShow ported his Slider control to WP7: Windows Phone 7 Series Sample Image Viewer SilverlightShow interview: The Silverlight Tour - what, where and why. Interview with one of the Tour organizers, Laurent Duveau From SilverlightCream.com: Silverlight 4: using the VisualStateManager for state animations with MVVM Alex van Beek has an approach to resolving the MVVM issue of Animations without keeping a reference to the ViewModel, by way of the VisualStateManager Leveraging the ASP.NET Membership in Silverlight Gill Cleeren's post at SilverlightShow talks about using ASP.NET authentication inside your Silverlight application, making membership not only something you know and understand, but making the transition from your ASP.NET apps to Silverlight simple as well. Windows Phone 7 Series RSS reader SilverlightShow has a demo RSS Reader for WP7 up... no text, but the code is there. Step by Step Tutorial : Installing Multi-Touch Simulator for Silverlight Phone 7 Michael Sync actually has a multi-touch simulator working for WP7 ... it involves a bunch of moving parts and one of the requirements is Windows 7, but if that works for you, this will too :) Element Property Binding Improvements in Blend 4 Beta and Visual Studio 2010 RC Rénald Nollet demonstrates new Blend and VS2010 features that assist you in Element Property binding, with real examples. Projection Transforms Sans Math Charles Petzold is writing about Silverlight and 3D, specifically in this post 3D without math, which becomes PlaneProjection... a good long tutorial on it with code to back it all up. Daily Demo: Silverlight Install out of browser & Check for Update Behaviors The-Oliver has a post up about OOB and checking for updates using behaviors with only a slight change to your xaml... cool! Wizards. Prototype of sketching Wizard for WPF – 2 Max Paulousky has part 2 of his tutorial on a sketching Wizard for WPF ... yes WPF, but check it out... source too. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Solving Null Entity Problems with JPA Data Controls in PS1

    - by shay.shmeltzer
    It turns out there is a slight bug that seems to prevent you from doing interactions (update, scroll) with the results of a JPA named query that you dropped on a page using ADF Binding. People are running into this when they are doing the EJB tutorial on OTN, for example. The problem is that the way the binding is set up for you automatically doesn't allow you to actually access the iterator's set of records to do follow-up operations. When I last checked, this was solved in the next release of JDeveloper, but in the meantime there is a quick and simple way to resolve the issue by changing the refresh condition of the iterator in your page binding. Here is a little demo that shows the problem and the solution:
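    For reference, the workaround amounts to editing the iterator entry in the page definition (pageDef) file behind the page. Below is a minimal sketch of what that edit might look like; the iterator id, Binds value, and data control name are hypothetical placeholders for illustration, and the refresh condition shown (skipping refresh on postback) is one commonly used expression for this kind of issue - adapt both to your own page definition:

    <executables>
      <!-- Hypothetical pageDef iterator entry; the RefreshCondition
           attribute is the relevant change here. -->
      <methodIterator id="findAllEmployeesIter"
                      Binds="findAllEmployees.result"
                      DataControl="SessionEJBLocal"
                      RangeSize="25"
                      RefreshCondition="#{!adfFacesContext.postback}"/>
    </executables>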

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket. First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One. Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There are still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess. The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch,) and then jumped back into Ryse? That's gone, if you bought the game on disc. The new, DRM-free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located? That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing? Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection, and that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I could. All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flak.) There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft. I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent from Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings. The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money grubbing scheme?
Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn't last, and most publishers have stopped the practice. The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then, they keep their business model and could reduce their brick-and-mortar footprint. The digital landscape is changing. We need to not block this process. As Seth MacFarlane said best, "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue on which we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first and second party developers that can leverage this new framework to prove its validity. Over time, third party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a co-lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products. I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people that help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • C#/.NET Little Wonders: The Predicate, Comparison, and Converter Generic Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last three weeks, we examined the Action family of delegates (and delegates in general), the Func family of delegates, and the EventHandler family of delegates and how they can be used to support generic, reusable algorithms and classes. This week I will be completing my series on the generic delegates in the .NET Framework with a discussion of three more, somewhat less used, generic delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>. These are older generic delegates that were introduced in .NET 2.0, mostly for use in the Array and List<T> classes. Though older, it's good to have an understanding of them and their intended purpose. In addition, you can feel free to use them yourself, though obviously you can also use the equivalents from the Func family of delegates instead.

Predicate<T> – delegate for determining matches

The Predicate<T> delegate was a very early delegate developed in the .NET 2.0 Framework to determine if an item was a match for some condition in a List<T> or T[]. The methods that tend to use the Predicate<T> include:

- Find(), FindAll(), FindLast() - use the Predicate<T> delegate to find items, in a list/array of type T, that match the given predicate.
- FindIndex(), FindLastIndex() - use the Predicate<T> delegate to find the index of an item, in a list/array of type T, that matches the given predicate.

The signature of the Predicate<T> delegate (ignoring variance for the moment) is:

    public delegate bool Predicate<T>(T obj);

So, this is a delegate type that supports any method taking an item of type T and returning bool. In addition, there is a semantic understanding that this predicate is supposed to be examining the item supplied to see if it matches a given criterion.

    // finds first even number (2)
    var firstEven = Array.Find(numbers, n => (n % 2) == 0);

    // finds all odd numbers (1, 3, 5, 7, 9)
    var allOdds = Array.FindAll(numbers, n => (n % 2) == 1);

    // find index of first multiple of 5 (4)
    var firstFiveMultiplePos = Array.FindIndex(numbers, n => (n % 5) == 0);

This delegate has typically been succeeded in LINQ by the more general Func family, so that Predicate<T> and Func<T, bool> are logically identical. Strictly speaking, though, they are different types, so a delegate reference of type Predicate<T> cannot be directly assigned to a delegate reference of type Func<T, bool>, though the same method can be assigned to both.

    // SUCCESS: the same lambda can be assigned to either
    Predicate<DateTime> isSameDayPred = dt => dt.Date == DateTime.Today;
    Func<DateTime, bool> isSameDayFunc = dt => dt.Date == DateTime.Today;

    // ERROR: once they are assigned to a delegate type, they are strongly
    // typed and cannot be directly assigned to other delegate types.
    isSameDayPred = isSameDayFunc;

When you assign a method to a delegate, all that is required is that the signature matches. This is why the same method can be assigned to either delegate type, since their signatures are the same. However, once the method has been assigned to a delegate type, it is now a strongly-typed reference to that delegate type, and it cannot be assigned to a different delegate type (beyond the bounds of variance depending on Framework version, of course).
Comparison<T> – delegate for determining order

Just as the Predicate<T> generic delegate was birthed to give Array and List<T> the ability to perform type-safe matching, the Comparison<T> was birthed to give them the ability to perform type-safe ordering. The Comparison<T> is used in Array and List<T> for:

- Sort() - a form of the Sort() method that takes a comparison delegate; this is an alternate way to custom sort a list/array without having to define custom IComparer<T> classes.

The signature for the Comparison<T> delegate looks like (without variance):

    public delegate int Comparison<T>(T lhs, T rhs);

The goal of this delegate is to compare the left-hand side to the right-hand side and return a negative number if the lhs < rhs, zero if they are equal, and a positive number if the lhs > rhs. Generally speaking, null is considered to be the smallest value of any reference type, so null should always be less than non-null, and two null values should be considered equal.

In most sort/ordering methods, you must specify an IComparer<T> if you want to do custom sorting/ordering. The Array and List<T> types, however, also allow for an alternative Comparison<T> delegate to be used instead; essentially, this lets you perform the custom sort without having to have a custom IComparer<T> class defined. It should be noted, however, that the LINQ OrderBy() and ThenBy() family of methods do not support the Comparison<T> delegate (though one could easily add their own extension methods to create one, or create an IComparer() factory class that generates one from a Comparison<T>).

So, given this delegate, we could use it to perform easy sorts on an Array or List<T> based on custom fields. Say for example we have a data class called Employee with some basic employee information:

    public sealed class Employee
    {
        public string Name { get; set; }
        public int Id { get; set; }
        public double Salary { get; set; }
    }

And say we had a List<Employee> that contained data, such as:

    var employees = new List<Employee>
    {
        new Employee { Name = "John Smith", Id = 2, Salary = 37000.0 },
        new Employee { Name = "Jane Doe", Id = 1, Salary = 57000.0 },
        new Employee { Name = "John Doe", Id = 5, Salary = 60000.0 },
        new Employee { Name = "Jane Smith", Id = 3, Salary = 59000.0 }
    };

Now, using the Comparison<T> delegate form of Sort() on the List<Employee>, we can sort our list many ways:

    // sort based on employee ID
    employees.Sort((lhs, rhs) => Comparer<int>.Default.Compare(lhs.Id, rhs.Id));

    // sort based on employee name
    employees.Sort((lhs, rhs) => string.Compare(lhs.Name, rhs.Name));

    // sort based on salary, descending (note switched lhs/rhs order for descending)
    employees.Sort((lhs, rhs) => Comparer<double>.Default.Compare(rhs.Salary, lhs.Salary));

So again, you could use this older delegate, which has a lot of logical meaning in its name, or use a generic delegate such as Func<T, T, int> to implement the same sort of behavior. All this said, one of the reasons, in my opinion, that Comparison<T> isn't used too often is that it tends to need complex lambdas, and the LINQ ability to order based on projections is much easier to use, though the Array and List<T> sorts tend to be more efficient if you want to perform in-place ordering.
Converter<TInput, TOutput> – delegate to convert elements

The Converter<TInput, TOutput> delegate is used by the Array and List<T> types to specify how to convert elements from an array/list of one type (TInput) to another type (TOutput). It is used in an array/list for:

- ConvertAll() - converts all elements from a List<TInput> / TInput[] to a new List<TOutput> / TOutput[].

The delegate signature for Converter<TInput, TOutput> is very straightforward (ignoring variance):

    public delegate TOutput Converter<TInput, TOutput>(TInput input);

So, this delegate's job is to take an input item (of type TInput) and convert it to a return result (of type TOutput). Again, this is logically equivalent to a newer Func delegate with a signature of Func<TInput, TOutput>. In fact, the latter is how the LINQ conversion methods are defined.

So, we could use the ConvertAll() syntax to convert a List<T> or T[] to different types, such as:

    // get a list of just employee IDs
    var empIds = employees.ConvertAll(emp => emp.Id);

    // get a list of all emp salaries, as int instead of double:
    var empSalaries = employees.ConvertAll(emp => (int)emp.Salary);

Note that the expressions above are logically equivalent to using LINQ's Select() method, which gives you a lot more power:

    // get a list of just employee IDs
    var empIds = employees.Select(emp => emp.Id).ToList();

    // get a list of all emp salaries, as int instead of double:
    var empSalaries = employees.Select(emp => (int)emp.Salary).ToList();

The only difference with using LINQ is that many of the methods (including Select()) use deferred execution, which means that often they will not perform the conversion for an item until it is requested. This has both pros and cons: you gain the benefit of not performing work until it is actually needed, but on the flip side, if you want the results now, there is overhead in the behind-the-scenes work that supports deferred execution (it's supported by the yield return / yield break keywords in C#, which define iterators that maintain current state information). In general, the new LINQ syntax is preferred, but the older Array and List<T> ConvertAll() methods are still around, as is the Converter<TInput, TOutput> delegate.

Sidebar: Variance support update in .NET 4.0

Just like our descriptions of Func and Action, these three early generic delegates also support more variance in assignment as of .NET 4.0. Their new signatures are:

    // comparison is contravariant on type being compared
    public delegate int Comparison<in T>(T lhs, T rhs);

    // converter is contravariant on input and covariant on output
    public delegate TOutput Converter<in TInput, out TOutput>(TInput input);

    // predicate is contravariant on input
    public delegate bool Predicate<in T>(T obj);

Thus these delegates can now be assigned to delegates allowing for contravariance (going to a more derived type) or covariance (going to a less derived type) based on whether the parameters are input or output, respectively.

Summary

Today, we wrapped up our generic delegates discussion by looking at three lesser-used delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>. All three of these tend to be replaced by their more generic Func equivalents in LINQ, but that doesn't mean you shouldn't understand what they do or can't use them for your own code, as they do contain semantic meanings in their names that sometimes get lost in the more generic Func name.
Technorati Tags: C#,CSharp,.NET,Little Wonders,delegates,generics,Predicate,Converter,Comparison

    Read the article

  • Upgrading Agent Controllers

    - by Owen Allen
    There was an update for the Oracle Solaris Agents not too long ago. Over on the Oracle Enterprise Manager blog, Steve Stelting has put together a detailed walkthrough for upgrading your environment. It covers downloading the Agent update, seeing which Agents need to be upgraded, and performing the upgrade itself. Speaking of which, the Oracle Enterprise Manager blog often has in-depth posts about Ops Center, so it's well worth a look if you don't follow it.

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
      Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or what the average number of users accessing the reports is during the week vs. the weekend, or morning vs. afternoon? With BI Publisher 11G, you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In a previous post I talked about BI Publisher's auditing functionality and how to enable it so that BI Publisher can start collecting such data. (How to Audit and Monitor BI Publisher Reports Access?) Now, how can you visualize such auditing data to gain a better understanding and more insights? With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option, you have your auditing data (or, user report access data) in your database tables, and from there it's a no-brainer: you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So, first, let's take a look at how to enable the database storage option for the auditing data. How to Feed the Auditing Data into the Database First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with RCU. It creates any database schema necessary to run Fusion Middleware products, including the BI products, and you can use the same RCU that you used for your BI or BI Publisher installation to create this Audit schema. Create Audit Schema with RCU Here are the steps: Go to $RCU_HOME/bin and execute the 'rcu' command. Choose Create at the starting screen and click Next. Enter your database details and click Next. Choose the option to create a new prefix, for example 'BIP', 'KAN', etc. Select 'Audit Services' from the list of schemas. Click Next and accept the tablespace creation. Click Finish to start the process. After this, the following three Audit-related schemas should have been created in your database: <prefix>_IAU (e.g. KAN_IAU), <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND), <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER). Setup Datasource at WebLogic After you create the database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the database schema that was created with RCU in the previous step. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console). Under Services, click the Data Sources link. Click 'Lock & Edit' so that you can make changes. Click New –> 'Generic Datasource' to create a new data source. Enter the following details for the new data source:  Name: Enter a name such as Audit Data Source-0.  JNDI Name: jdbc/AuditDB.  Database Type: Oracle.  Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the Database Driver (if you're using an Oracle database), and click Next. The Connection Properties page appears. Enter the following information: Database Name: Enter the name of the database (SID) to which you will connect. Host Name: Enter the hostname of the database.  Port: Enter the database port.  Database User Name: This is the name of the audit schema that you created in RCU. The suffix is always IAU for the audit schema.
For example, if you gave the prefix as 'BIP', then the schema name would be 'BIP_IAU'.  Password: This is the password for the audit schema that you created in RCU.   Click Next. Accept the defaults, and click Test Configuration to verify the connection. Click Next. Check the listed servers where you want to make this JDBC connection available. Click 'Finish'! After that, make sure you click 'Activate Changes' at the top left-hand side to put the new JDBC connection into effect. Register your Audit Data Storing Database to your Domain Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps: 1. Login to Fusion Middleware Control. 2. Navigate to the WebLogic Domain, right click on 'bifoundation…..', select Security, then Audit Store. 3. Click the searchlight icon next to the Datasource JNDI Name field. 4. Select the Audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK. 5. Click Apply to continue. 6. Restart all the WebLogic Servers in the domain. After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it's not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error. Once you have the data in the database table, it's time to visualize it with BI Publisher reports! Create a First BI Publisher Auditing Report Register Auditing Datasource as JNDI datasource The first thing you need to do is register the audit datasource (JNDI/JDBC connection) you created in the previous step as a JNDI data source at BI Publisher. Since it is a JDBC connection registered via JNDI, you don't need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB). Login to BI Publisher as Administrator (e.g. weblogic). Go to the Administration page. Click 'JNDI Connection' under Data Sources and click 'New'. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic Console as the auditing datasource (e.g. jdbc/AuditDB). Click 'Test Connection' to make sure the datasource connection works. Provide appropriate roles so that the report developers or viewers can share this data source to view reports. Click 'Apply' to save. Create Data Model Select Data Model from the tool bar menu 'New'. Set 'Default Data Source' to the audit JNDI data source you created in the previous step. Select 'SQL Query' for your data set. Use Query Builder to build a query or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'. This IAU_BASE table contains all the auditing data for other products running on the WebLogic Server such as JPS, OID, etc. So, if you care only about BI Publisher, you want to filter using the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query.
select
    "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
    "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
    "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
    "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
    to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
    to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
    to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
    to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
    "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
    "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
    "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
    "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
    "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
    "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
from "KAN3_IAU"."IAU_BASE" "IAU_BASE"
where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

Once you have saved a sample XML for this data model, you can create a report with it. Create Report Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor; it's just so easy and simple to create reports, and on top of that, all reports created with the Online Layout Editor get the Interactive View with automatic data linking and filtering, without any settings or coding. If you haven't checked out the Interactive View or Online Layout Editor, you might want to look at these previous blog posts. (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!) But of course, you can use other layout design options such as the RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.     Visualize and Gain More Insights about your Customers (Users)! Now you can visualize your auditing data to better understand and gain more insights into the reporting environment you manage. It has been helping me personally to answer questions like these:  How many reports were accessed or opened yesterday, today, last week? Who is accessing which report at what time? What are the time windows when most of the report access happens? What are the most viewed reports? Who are the active users? What is the trend in report access or user access for the last month, last 6 months, last 12 months, etc.? I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out and can therefore provide a very personal service, customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, including both viewers and developers) is to understand how, what, and when they use it. Auditing is not just about compliance; it's a way to improve customer service. The BI Publisher 11G auditing feature enables just that, to help you understand your customers better. Happy customer service, and be the best reporting concierge! p.s. Please share what other information would be helpful for you in auditing! As always, any feedback is of great value and an inspiration to us!

    Read the article

  • SilverlightShow for Feb 14 - 20, 2011

    - by Dave Campbell
    Check out the top five most popular news items at SilverlightShow for Feb 14 - 20, 2011. Way ahead of all other news for the week, in terms of popularity, is the item on the latest Silverlight 4 runtime update. Here are the top 5 news items on SilverlightShow for last week: Silverlight 4.0.60129.0 GRD3 Runtime update KB2495644 FloatingWindow v1.2 — multi-windows interface for Silverlight 4 Silverlight MVVM Commanding II Upcoming SilverlightShow Webinar: 'Switch or no switch: Can I build my business apps in LightSwitch?' Kinect and WPF: Painting with Kinect using OpenNI Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Oracle GRC in Leader’s Quadrant on Gartner’s Magic Quadrant for Enterprise Governance Risk and Compliance Platforms

    - by Di Seghposs
    Once again Gartner has recognized Oracle as a Leader in their Magic Quadrant for Enterprise Governance Risk and Compliance (EGRC) Platforms report, stating that "Oracle remains in the Leader's quadrant based on overall corporate viability, proven execution against its road map, and advanced capabilities to integrate risk management and performance management."  In the report, Gartner cited that Oracle clearly understands the GRC challenges faced by a number of verticals, as well as the trends toward the integration of risk management and performance management.  Gartner produces Magic Quadrant reports to provide guidance to their clients on available solutions in specific categories. This Magic Quadrant report takes a holistic view of EGRC solutions and, based on selected criteria, places vendors in one of four quadrants: Leaders, Challengers, Visionaries, and Niche Players. We are proud to be in the Leader category! Click here to read the full report. Congratulations to our product development, strategy, and marketing teams for creating a world-class, market-leading GRC solution! Oracle GRC: Designed to manage risk, improve controls and reduce costs

    Read the article

  • CIC 2010 - Ghost Stories and Model Based Design

    - by warren.baird
    I was lucky enough to attend the Collaboration and Interoperability Congress recently. The location was very beautiful and interesting: it was held in the mountains about two hours outside Denver, at the Stanley Hotel, famous both for inspiring Stephen King's novel "The Shining" and for attracting a lot of attention from the "Ghost Hunters" TV show. My visit was prosaic - I didn't get to experience the ghosts the locals promised - but interesting, with some very informative sessions. I noticed one main theme - a lot of people were talking about Model Based Design (MBD), which is moving design and manufacturing away from 2D drawings and towards 3D models. 2D has some pretty deep roots in industrial manufacturing and there have been a lot of challenges encountered in making the leap to 3D. One of the challenges discussed in several sessions was how to get model information out to the non-engineers in the company, which is a topic near and dear to my heart. In the 2D space, people without access to CAD software (for example, people assembling a product on the shop floor) can be given printouts of the design - it's not particularly efficient, and it definitely isn't very green, but it tends to work. There's no direct equivalent in the 3D space. One of the ways that AutoVue is used in industrial manufacturing is to provide non-CAD users with an easy-to-use, interactive 3D view of their products - in some cases it's directly used by people on the shop floor, but in cases where paper is really ingrained in the process, AutoVue can be used by a technical publications person to create illustrative 2D views that can be printed and that show all of the details necessary to complete the work. Are you making the move to model based design? Is AutoVue helping you with your challenges? Let us know in the comments below.

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much Sammy), I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0). I use Zoundry Raven for writing articles and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down workwise, I will start to get the Excel template posts out. Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the Java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the Java class to be called from the RTF template, but it was not encoding, or at least did not appear to be. New debugging feature to the rescue. Kan over at the bipconsulting blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a Java error stack referencing a missing code128a method, huh? Looking at James' class, he had the following snippet: ENCODERS.put("code128a",mUtility.getClass().getMethod("code128a",clazz)); ENCODERS.put("code128b",mUtility.getClass().getMethod("code128b", clazz)); ENCODERS.put("code128c",mUtility.getClass().getMethod("code128c", clazz)); ENCODERS.put("pdf417",mUtility.getClass().getMethod("pdf417", clazz)); ENCODERS.put("datamatrix",mUtility.getClass().getMethod("datamatrix", clazz)); His class did not include the other code128 and pdf417 methods, yet BIP was expecting them. An easy fix: just comment those entries out, rebuild and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete or get commenting. James now has purdy labels popping out that his hardware can read, sweet!

    Read the article

  • CODE PROJECT VIRTUAL TECH SUMMIT ON MOBILE DEVELOPMENT – ON DEMAND

    - by Tiago Salgado
    For those who have not seen Code Project's Virtual Tech Summit on Mobile Development, all the sessions are now available on demand. The sessions are: The Mobile Development Landscape Android Push Notifications Beginning Android Flash Development Android for .NET/C# Developers Using MonoDroid iPhone 101: Introduction to iPhone and iOS Development Building Rich Mobile Apps with HTML5, CSS3 and JavaScript Building MVVM apps for Windows Phone 7 Using Panorama and Pivot Controls for WP7 apps Building Data Visualization Applications for Windows Phone 7 To access the sessions, you need to register at the following link: http://www.virtualtechsummits.com/Register.aspx?EventID=11

    Read the article

  • Twin Cities Code Camp 8 Retrospective

    - by Lee Brandt
    I just got back (a few hours ago) from Minneapolis, where I was speaking at the Twin Cities Code Camp 8. I'd never been to a Twin Cities Code Camp, and I had always heard such great things, so I submitted and got accepted to speak. The conference (what I got to see) was great. My talk was pretty short on people, but there are many reasons for that. First, I spoke opposite Donn Felker (speaking about developing for Android) and Keith Dahlby (speaking about Dynamic .NET). So of course my talk was going to be empty; how could I compete with that? Plus, my talk was about software process improvement, specifically about how our process has evolved. Maybe not the smartest idea to submit a talk about software process at a developers' conference. The people who DID attend, however, seemed to really enjoy the talk. There was good interaction and good, thoughtful questions, so the attendees seemed engaged. I actually did get a chance to go to one session. I went and saw Javier Lozano talk about open source tools for ASP.NET MVC. I am hip-deep in MVC stuff right now and getting up to speed on MVC 2 as well. I learned about MVC Turbine, Javier's open source project. I will definitely be adding it to my MVC arsenal. Thanks Javier! I did forget my AC adapter for my laptop and got a little lost in Minneapolis on my way to get one from MicroCenter Saturday morning, but other than that, it was a great trip. It's a long drive, but seeing all the guys and getting two Nut & Honey rolls from Roly Poly in Eden Prairie for lunch on Saturday made the trip totally worth it. I look forward to seeing what Jason & Chris come up with for next year! Thanks for having me guys!

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspx Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change in the app. At this point, I will find some good way to get community feedback. I want to break all this down screen by screen. We'll start with the screens I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 for last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements. Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache these results in my database and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to limit database growth. The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a Foursquare-style model: you are checked out after 3 hours or when you check in to some other location (a sketch of this rule follows at the end of this post). This seemed to work well, until those pesky achievements came into the mix. Again, more on this later. Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this came the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I had come up with it about two months earlier... When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time. Most of this became apparent when I started implementing the achievements. The achievements based on count and on a specific person were fairly easy. The problem came with tying them to locations and events. This took some true SQL kung fu. This also showed me the rookie mistake of putting CST, not UTC, in the database. Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements. I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection.
Some achievements required GPS-level location accuracy in areas where we only had network triangulation. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to get to the higher levels. Soon, I will post a list of the features and changes that need to happen in the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.
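As an aside, the 3-hour/most-recent check-out rule described above is simple to capture in code. The following is a minimal sketch of that rule only, in the spirit of the Windows Phone app; the types and names are hypothetical, my own reconstruction rather than code from The Krewe app:

using System;

// Hypothetical type illustrating the Foursquare-style check-out rule.
public class CheckIn
{
    public string LocationId { get; set; }
    public DateTime CheckedInAtUtc { get; set; }   // stored in UTC, not CST!
}

public static class CheckInRules
{
    private static readonly TimeSpan Expiry = TimeSpan.FromHours(3);

    // A check-in is still active only if it is less than 3 hours old and no
    // newer check-in has superseded it.
    public static bool IsActive(CheckIn candidate, CheckIn newest, DateTime nowUtc)
    {
        if (candidate == null) return false;
        if (nowUtc - candidate.CheckedInAtUtc >= Expiry) return false;    // timed out
        if (newest != null && newest.CheckedInAtUtc > candidate.CheckedInAtUtc)
            return false;                                                 // superseded
        return true;
    }
}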

    Read the article

  • Diagnosing ADF Mobile iOS deployment problems

    - by Chris Muir
    From time to time I encounter customers who have taken possession of a brand new Apple Mac, have that excited "I've just spent more on a computer than I ever wanted to but it's okay" crazy gleam in their eye, but on pre-loading all the necessary software for Oracle's ADF Mobile to start their mobile campaign, following Oracle's setup instructions and deploying their first app to Apple's XCode iPhone Simulator, they hit this error message in the JDeveloper Log-Deployment window: [01:36:46 PM] Deployment cancelled. [01:36:46 PM] ----  Deployment incomplete  ----. [01:36:46 PM] Failed to build the iOS application bundle. [01:36:46 PM] Deployment failed due to one or more errors returned by '/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild'.  The following is a summary of the returned error(s): Command-line execution failed (Return code: 69) "Oh, return code 69, I know that well" I hear you say.  Admittedly the error code is less than useful besides drawing some titters from the peanut gallery. Before explaining what's gone wrong, I think it's useful to teach customers how to diagnose these issues themselves.  When ADF Mobile commences a deployment, be it to Apple's iOS or Google's Android platforms, JDeveloper and ADF Mobile do a good job in the Log window of showing you what the deployment process entails.  In the case of deploying to iOS, the log window will literally include the XCode commands executed to complete the deployment cycle. As an example, here's the log output that was produced before the error message was raised.... take the opportunity to read this line by line and note the command-line calls highlighted in blue: (Note some of the following lines have been split over multiple lines to suit reading on this blog; each original line is preceded by a timestamp. Ensure to check the exact commands from JDev) [01:36:33 PM] Target platform is (iOS). [01:36:33 PM] Beginning deployment of ADF Mobile application 'LayoutDemo' to iOS using profile 'IOS_MOBILE_NATIVE_archive1'. [01:36:34 PM] Command-line executed: [/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild, -version] [01:36:34 PM] Command-line execution succeeded. [01:36:34 PM] Running dependency analysis... [01:36:34 PM] Building... [01:36:34 PM] Deploying 3 profiles... [01:36:35 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ApplicationController/ deploy/ApplicationController.jar [01:36:35 PM] WARNING: No Resource Catalog enabled ADF components found to package [01:36:36 PM] Wrote Archive Module to /Users/chris/fmw/jdeveloper/jdev/extensions/ oracle.adf.mobile/Samples/PublicSamples/LayoutDemo/ViewController/ deploy/ViewController.jar [01:36:36 PM] Verifying existence of the .adf source directory of the ADF Mobile application... [01:36:36 PM] Verifying Application Controller project exists... [01:36:36 PM] Verifying application dependencies... [01:36:36 PM] The application may not function correctly because the following dependent libraries are missing: /Users/chris/jdev/jdeveloper/jdeveloper/jdev/extensions/oracle.adf.mobile/ lib/adfmf.springboard.jar [01:36:36 PM] Verifying project dependencies... [01:36:36 PM] Validating application XML files... [01:36:36 PM] Validating XML files in project ApplicationController... [01:36:36 PM] Validating XML files in project ViewController... [01:36:40 PM] Copying common javascript files... [01:36:41 PM] Copying FARs to the ADF Mobile Framework application...
[01:36:41 PM] Extracting Feature Archive file, "ApplicationController.jar" to deployment folder, "ApplicationController". [01:36:42 PM] Extracting Feature Archive file, "ViewController.jar" to deployment folder, "ViewController". [01:36:42 PM] Deploying skinning files... [01:36:43 PM] Copying the CVM SDK files built for the x86 processor... [01:36:43 PM] Copying the CVM JDK files built for the x86 processor... [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/x86/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/ Samples/PublicSamples/ LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Command-line executed: [cp, -R, -p, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/iOS/jvmti/jar/, /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/ temporary_xcode_project/lib] [01:36:43 PM] Command-line execution succeeded. [01:36:43 PM] Copying security related files to the ADF Mobile Framework application... [01:36:44 PM] Command-line executed from path: /Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/temporary_xcode_project/ [01:36:44 PM] Command-line executed: /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild clean install -configuration Debug -sdk /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/ Developer/SDKs/iPhoneSimulator6.1.sdk DSTROOT=/Users/chris/fmw/jdeveloper/jdev/extensions/oracle.adf.mobile/Samples/ PublicSamples/LayoutDemo/deploy/IOS_MOBILE_NATIVE_archive1/Destination_Root/ IPHONEOS_DEPLOYMENT_TARGET=5.0 TARGETED_DEVICE_FAMILY=1,2 PRODUCT_NAME=LayoutDemo ADD_SETTINGS_BUNDLE=NO As you can see, once JDeveloper has undertaken its work, in the last few lines it passes the code off for Apple's XCode to assemble and deploy the required .ipa file. From the original error message that followed this, complaining about xcodebuild failing with return code 69, we can quickly see the exact command line used to call xcodebuild. As this is the exact command line with all its options, you're free to open a Terminal window in Mac OSX and execute the same command by simply copying and pasting the command line. And via this you'll then find out what return code 69 actually means.  Unfortunately it's not that exciting. On a Mac that has just been installed and configured, XCode (and for that matter iTunes), which is required by ADF Mobile to deploy, must have been run at least once beforehand on your brand new Mac (to be clear, that's once ever, not once every restart). On doing so you will be presented with a license agreement from Apple that you must accept. Only once you've done this will the command-line calls work.  They're currently failing because you haven't accepted the legal terms and conditions. (Arguably you can also accept the terms and conditions from the command line too, but ADF Mobile cannot do this on your behalf, so it's just easier to open the tools and confirm the legal requirements that way.) Putting aside the error code and its meaning, watching the log window, watching what commands are executed, and learning what they do will assist you in diagnosing issues yourself and solving these sorts of issues relatively quickly.
From my perspective as an Oracle Product Manager, it allows me to say "this is the stuff you don't need to worry about when you use ADF Mobile and it's configured correctly"... as you can see, my salesman qualities shine through. For anyone who is happily using ADF Mobile on a Mac and wondering why you didn't hit these issues, it's quite likely that you already accepted the license conditions before deploying via ADF Mobile.  For instance, though I'm not a fan of iTunes itself, iTunes was one of the first things I loaded on my Mac to access my Justin Bieber albums. Image courtesy of winnond / FreeDigitalPhotos.net

    Read the article

  • Spotlight on a career path: Paul, Business Development Consultant

    - by Maria Sandu
    I came to work for Oracle in November 2012 as a Customer Intelligence Representative, and since then I have been promoted to Business Development Consultant for Commercial Industries in the UK, based in Dublin. My background was primarily in logistics, working for such companies as Indaver Ireland, Wincanton and P&O. I spent 10 years working in this industry and gained experience in negotiating with customers and suppliers in order to meet the needs of both, monitoring the quality and quantity of goods as well as the efficiency and organisation of the movement and storage of products. I decided to move on from my logistics career in 2009 to study Information Technology at D.I.T. Changing my career path was a challenge; however, the lectures at the college significantly helped me understand how IT can affect how businesses operate. Following on from college I came to work for Oracle. This also presented challenges, but the training I received and the encouragement from management helped me understand that the same business rules apply no matter what background you come from. I have also learnt that my past experience of working with customers and suppliers in logistics has helped me understand how to meet customers' needs. Oracle has offered me excellent training, such as Sandler Sales Techniques and John Costigan. I continue to get all the training that I need to develop my career. If you're interested in joining the Business Development Group, visit http://bit.ly/oracledirectcareers or follow our CareersatOracle Facebook Community!

    Read the article

  • What is EPPM?

    - by britta wolf
    Companies must react quickly to short-term changes in the market and to changing project and program requirements, while at the same time providing clear structures and actionable information across heterogeneous project teams. EPPM solutions give organizations with many projects the ability to manage programs and projects intelligently, from small and simple to large and complex undertakings. They enable better-informed portfolio management decisions and comprehensive, real-time insight into relevant information. The risks and opportunities of projects and programs can be reliably assessed through capabilities in project management, collaboration, and control, so that projects can be delivered on time, on budget, and in the intended quality and form. Enterprise Project Portfolio Management: the synergy of portfolio management and project management with Oracle Primavera! On request by e-mail, we will gladly provide you with these two presentations, among others: Enterprise Project Portfolio Management (selecting the right projects and then executing them correctly) and Portfolio and Project Management (collecting investment ideas, planning scenarios, selecting the right investments, and steering them through to the end of their lifecycle). We also recommend these links: the German Oracle Primavera website (a quick introduction to the topic, webcasts, events, information on the online community, brochures/whitepapers, etc.) and Oracle's Primavera Resource Library (an extensive collection of English demos, web- and podcasts, brochures, special reports, etc.).

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of May 26

    - by Roxana Babiciu
    Calypso Technology announced that Calypso version 14 has achieved Oracle Exadata Optimized status through OPN. In simulations of data-intensive straight-through processing tasks, Calypso achieved performance gains of up to 500% using Exadata hardware – Read more Infosys achieves Oracle SuperCluster Optimized status with Finacle, a core banking solution. Finacle can process 6x the volume of transactions currently processed by the entire US banking system – Read more

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS.  While we will be using TFS for work item tracking and source control etc., we will continue to use CruiseControl for continuous integration, although I'm updating it to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we came across. Problem After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code.

System.IO.IOException: The directory is not empty.
   at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
   at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
   at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
   at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
   at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
   at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

Project: Bupa.BPI.Documents Date of build: 2011-01-28 14:54:21 Running time: 00:00:05 Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11 Solution The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk.  For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder and then complains that's a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine.  My assumption is that this is something to do with the file adapter polling the directory.  However, note that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed.  See below for an example:

<prebuild>
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
  </exec>
</prebuild>

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >