Search Results

Search found 12449 results on 498 pages for 'ebs e business suite'.


  • Experience the Oracle Support Stars Bar

    - by Oracle OpenWorld Blog Team
    By Gina Wolf

    Don't miss the opportunity to meet with the stars of Oracle Support, live and in person at the Moscone West Level 2 lobby. Ask our experts your toughest questions about the Oracle hardware, software, and engineered systems you use to run your business. Explore new Oracle Support innovations including Oracle Platinum Services, My Oracle Support Mobile, and the Oracle Enterprise Manager Ops Center Everywhere program. Learn the latest best practices for problem prevention, rapid resolution, and product upgrades. In addition, discover how Oracle Advanced Customer Support Services can help you maximize the performance of all mission-critical Oracle systems. Come meet the stars behind your support: our trusted experts are ready to assist!

    The Oracle Support Stars Bar at the Moscone West Level 2 lobby is open all conference week at the following times:

    - Sunday, September 30, 12:00 p.m. – 4:00 p.m.
    - Monday, October 1, 10:00 a.m. – 6:00 p.m.
    - Tuesday, October 2, 10:00 a.m. – 6:00 p.m.
    - Wednesday, October 3, 9:00 a.m. – 5:00 p.m.
    - Thursday, October 4, 9:00 a.m. – 1:00 p.m.

    Attend one or more of the 27 Oracle Customer Support Services sessions during Oracle OpenWorld to learn how Oracle Support enables you to gain maximum value from your Oracle hardware and software investments.

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer so they are handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code:

        ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex);

    The same code can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handler to display exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF-bound components). Note that to invoke methods exposed on the business service it is recommended to always work through the binding layer (a method binding), so that in case of an error the ADF model error handler is used automatically.
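    For completeness, here is a minimal, self-contained sketch of the same pattern with an actual try/catch block. The class name, the helper method, and the failing call are illustrative assumptions and not part of the original post; only the routing call through BindingContext and DCBindingContainer comes from the sample above.

        // Illustrative managed bean (names are hypothetical); routes caught exceptions
        // to the ADF model error handler instead of handling them in the bean.
        import javax.faces.event.ActionEvent;

        import oracle.adf.model.BindingContext;
        import oracle.adf.model.binding.DCBindingContainer;
        import oracle.jbo.JboException;

        public class ToolbarBean {

            public void onToolBarButtonAction(ActionEvent actionEvent) {
                try {
                    // any managed-bean logic that may fail, e.g. a call into the business service
                    doSomethingThatMayThrow();
                } catch (JboException ex) {
                    // hand the exception to the ADF model error handler (possibly a custom one)
                    reportToAdfErrorHandler(ex);
                }
            }

            private void reportToAdfErrorHandler(JboException ex) {
                BindingContext bctx = BindingContext.getCurrent();
                ((DCBindingContainer) bctx.getCurrentBindingsEntry()).reportException(ex);
            }

            private void doSomethingThatMayThrow() {
                throw new JboException("Just to tease you !!!!!");
            }
        }

    The bean requires the page it runs on to have a PageDef file, as noted above, because reportException() works through the current binding container.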

    Read the article

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework. In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application. In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article: “The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.” He concludes the article with a summary: “The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.” Have a look at the complete article here.
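    To make the surrogate-key point concrete, here is a small, hypothetical JPA entity (not taken from the Pet Clinic application): its generated identifier carries no business meaning, so there is no reason for the UI to show it.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;

        @Entity
        public class Owner {

            // surrogate primary key: database-generated, no business value,
            // so it never needs to appear on a page
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            // fields with business meaning are what the pages should display
            private String firstName;
            private String lastName;

            public Long getId() { return id; }
            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
        }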

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article Standards Matter: The Battle for Interoperability Goes On.

    Summary: If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better. The alternative: splintered networks and broken promises.

    The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important."

    In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not simply protect an existing product. Customers must also be willing to truly work together, and not demand a solution that meets only their own needs rather than the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services they purchase and deploy. Managers who deploy systems that don't comply with standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post shows how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post we will discuss calculating the "money weighted rate of return," but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First steps - calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. The sample spreadsheet (shown as a figure in the original post) provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet, or if you are old school you can look in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function whose arguments are bmv, the beginning market value; cf, a vector of cash flows; t, a vector of times (relative to the beginning); emv, the ending market value; and tend, the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum. In the call to optimize, low and high are scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we set low and high to some reasonable defaults. For example, the sample account had a negative 2.2% money weighted rate of return.

    Enhancing and packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, try calling optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package with a single R call. To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory. In those files, you need to at least provide a value for the "title" section. Once that is done, you can change directory to the IRR directory and build and install the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package.

    Testing the myIRR package: We then tested our IRR function once it had been converted to an installable package.

    Calculating IRR for all the accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function to calculate the money weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used. See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and next steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:

    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR: it was both fast and allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor.
    - The actual data needs to be kept secured, another reason not to want to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
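    For readers who want to experiment with the idea outside of R, here is a minimal, illustrative sketch of the same numerical approach: an NPV function of (bmv, cf, t, emv, tend) and a root search for r over a range [low, high]. It is not the post's actual R code; it is written in Java purely for illustration, the discounting below is one common money-weighted formulation (an assumption, not taken from the post), and plain bisection stands in for R's optimize(). The sample figures in main() are made up.

        // Illustrative only: mirrors the idea of npv(r) = 0 solved over [low, high].
        public class SimpleMwrr {

            // NPV at rate r: beginning value, interim cash flows at times t (in years,
            // relative to the start), and the ending value discounted from time tend.
            static double npv(double r, double bmv, double[] cf, double[] t, double emv, double tend) {
                double v = bmv;
                for (int i = 0; i < cf.length; i++) {
                    v += cf[i] / Math.pow(1.0 + r, t[i]);
                }
                return v - emv / Math.pow(1.0 + r, tend);
            }

            // Stand-in for R's optimize(): plain bisection on npv(r) over [low, high].
            // Returns NaN if there is no sign change, mirroring the post's
            // "retry with a broader range" idea.
            static double solveRate(double bmv, double[] cf, double[] t, double emv, double tend,
                                    double low, double high) {
                double fLow = npv(low, bmv, cf, t, emv, tend);
                double fHigh = npv(high, bmv, cf, t, emv, tend);
                if (fLow * fHigh > 0) {
                    return Double.NaN; // caller can retry with a wider search range
                }
                for (int i = 0; i < 100; i++) {
                    double mid = (low + high) / 2.0;
                    double fMid = npv(mid, bmv, cf, t, emv, tend);
                    if (fLow * fMid <= 0) {
                        high = mid;
                    } else {
                        low = mid;
                        fLow = fMid;
                    }
                }
                return (low + high) / 2.0;
            }

            public static void main(String[] args) {
                // made-up account: $1,000 start, $100 added after 3 months, $1,080 at year end
                double r = solveRate(1000.0, new double[] {100.0}, new double[] {0.25},
                                     1080.0, 1.0, -0.99, 1.0);
                System.out.println("money-weighted rate of return: " + r); // slightly negative
            }
        }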

    Read the article

  • Top Ten Reasons to Attend the 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Need justification to attend the 2015 Oracle Value Chain Summit? Check out these Top Ten Reasons you should register now for this event:

    1. Get Results: 60% higher profits. 65% better earnings per share. 2-3x greater return on assets. Find out how leading organizations achieved these results when they transformed their supply chains.
    2. Hear from the Experts: Listen to case studies from leading companies, and speak with top partners who have championed change.
    3. Design Your Own Conference: Choose from more than 150 sessions offering deep dives on every aspect of supply chain management: Cross Value Chain, Maintenance, Manufacturing, Procurement, Product Value Chain, Value Chain Execution, and Value Chain Planning.
    4. Get Inspired from Those Who Dare: Among the luminaries delivering keynote sessions are former SF 49ers quarterback Steve Young and Andrew Winston, co-author of one of the top-selling green business books, Green to Gold.
    5. Expand Your Network: With 1500+ attendees, this summit is a networking bonanza. No other event gathers as many of the best and brightest professionals across industries, including tech experts and customers from the Oracle community.
    6. Improve Your Skills: Enhance your expertise by joining NEW hands-on training sessions.
    7. Perform a Road-Test: Try the latest IT solutions that generate operational excellence, manage risk, streamline production, improve the customer experience, and impact the bottom line.
    8. Join Similar Birds-of-a-Feather: Engage industry peers with similar interests, or shared supply chain communities, in expanded roundtable discussions.
    9. Gain Unique Insight: Speak directly with the product experts responsible for Oracle’s Value Chain Solutions.
    10. Save $400: Take advantage of the Super Saver rate by registering before September 26, 2014.

    Read the article

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am doing research on scalable architecture for web development. The research is solely to support modern web development with a flexible architecture that can scale up or down according to need without losing any core functionality. By "modern web" I mean supporting all the devices used to access websites, with a different loading mechanism for each class of device. My architecture questions are:

    For PC: Access from a PC is faster, but it also depends on geo-location, so by default the application would check the capacity of the internet connection/browser and load the page accordingly.

    For Mobile: Most mobile designs these days either hide information or use a different version of the same application. E.g., Facebook uses m.facebook.com, which is completely different from the PC version. Hiding things from mobile using JavaScript or CSS is not a solution, as it consumes bandwidth and makes the application slow.

    So, my architecture research is about serving one application that has different presentation stacks. When the application receives a request, it sends the packaged stack appropriate to that request. This way the load time for end users would be faster, and maintenance of the application would be easier for developers. (A rough sketch of this idea follows below.) I am researching a 4-tier (layered) architecture like:

    - Presentation Layer
    - Application Logic Layer -- the main logic layer, which stores the presentation stacks
    - Business Logic Layer
    - Data Layer

    Main questions: Have you come across a similar architecture? If so, can you list the links here? I'm very much interested to learn about those implementations, especially in real-world scenarios. Have you thought about similar architectures and tried your own ideas, or do you have any ideas regarding this? If so, I urge you to share. I am open to any discussion regarding this, so please feel free to comment/answer.
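    As a rough sketch of the "one application, several presentation stacks" idea: the snippet below (purely illustrative; the class name, the stack names, the JSP paths, and the naive User-Agent check are all assumptions, not part of the question) keeps a single application and data layer and only varies the presentation stack chosen per request.

        // Illustrative front controller: the same application logic serves every device,
        // only the presentation stack selected here differs. Real projects would use a
        // device database or client hints rather than a User-Agent substring check.
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PresentationRouterServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String stack = selectStack(req);
                req.setAttribute("presentationStack", stack);
                // forward to the packaged presentation stack for this device class
                req.getRequestDispatcher("/WEB-INF/stacks/" + stack + "/page.jsp").forward(req, resp);
            }

            private String selectStack(HttpServletRequest req) {
                String ua = req.getHeader("User-Agent");
                if (ua != null && (ua.contains("Mobile") || ua.contains("Android"))) {
                    return "mobile";   // lightweight markup, fewer assets
                }
                return "desktop";      // full presentation stack
            }
        }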

    Read the article

  • Oracle Knowledge Courses

    - by mseika
    Oracle Knowledge products offer simple and convenient ways for users to access knowledge contained in corporate information stores. With Oracle Knowledge training, you learn how to utilize tools that improve customer service and satisfaction by helping customers find more relevant answers to questions online or from a service agent guided by a scalable knowledge management platform. The following courses have been scheduled at Oracle in Utrecht:

    - Oracle Knowledge Overview Rel 8.5 (1 day): Learn the technical architecture of Oracle Knowledge at a high level and the key technologies, including InfoCenter, iConnect, Search, Information Manager, AnswerFlow and Analytics. Dates: to be scheduled.
    - Knowledge Technical Architecture and Configuration Rel 8.5 (5 days): Learn to implement and maintain Oracle Knowledge's core technologies through hands-on exercises, including Intelligent Search, Information Manager, iConnect, AnswerFlow and Analytics. Dates: 13-17 January 2014 (afternoon/evening). Location: Live Virtual Class.
    - Knowledge Content Administration Rel 8.5 (2 days): Learn to implement, use and manage knowledge and content creation with Oracle Knowledge Information Manager. Dates: 4-5 December 2013. Location: Utrecht, The Netherlands.
    - Knowledge Analytics Rel 8.5 (1 day): Learn KPI analyses and how to close gaps using reports and tools provided in Oracle Business Intelligence Enterprise Edition. Dates: 6 December 2013. Location: Utrecht, The Netherlands.

    Remember: your OPN discount is always applied to the standard prices shown on the Oracle University web pages. For assistance with booking, scheduling requests and more information, contact the Education Service Desk: eMail: [email protected] Telephone: +31 30 66 27 675

    Read the article

  • Can you say "Architect?"

    - by Bob Rhubart
    Photo by Jennifer Ortiz. In his article, It's Time To Occupy IT, AIIM CEO and president John Mancini examines the evolution of "Systems of Engagement," the social technologies that are transforming how customers and employees relate to and interact with companies. Surviving the disruption that transformation entails is a matter of when, rather than if, a given organization embraces the change. But as Mancini points out, that transformation will require a "new breed" of IT professional: "While addressing this kind of challenge requires technical skills, it also requires process and customer acumen more often found in the business than in our IT departments. It requires a new type of information professional, whose expertise includes technical and domain knowledge, but who also has an idea of how the pieces of a process that spans the worlds of Systems of Record and Systems of Engagement should fit together. Gartner estimates that the demand for this new breed of information professional will grow by 50 percent by 2015." Though Mancini makes no reference to the title, the skills he describes are those of the IT architect. While the specific definition of the role remains fodder for seemingly endless discussion and debate on various social networks and forums, the fact remains that the skills required for success in the evolving world of IT will increasingly involve a deep understanding of how all the pieces fit together.

    Read the article

  • We have moved to larger offices

    - by Chris Houston
    First of all we should probably apologise for the complete lack of blogging over the last 6 months! As web developers we are constantly telling our clients that they should keep their blogs up to date, and it seems we have been ignoring our own advice. That being said, we have been very busy moving offices and helping our new host QV Offices set up their new business. As well as all the moving we have not been sitting on our hands: we have built the new site for DairyMaster over in Ireland, as well as a separate private website for their global distributor network. As Umbraco Gold Partners we have found more and more that we are working on projects where we are the silent development partners, so although we cannot talk publicly about a lot of the sites we develop, we have some real beauties now in our portfolio :) Now that the dust has settled in our new office (and has been hoovered up!) we are ready for the new year and are looking forward to working on some exciting projects that are currently in the pipeline. We are also intending to run some hacking sessions for Umbraco, as we now have lots of space for developers to come and work with us, so if you have any ideas of a theme for an Umbraco Hackathon then do let us know. And with that it just remains to say Happy Christmas to you all and see you in the new year!

    Read the article

  • Oracle releases ADF Mobile with Java ME CDC for iOS and Android

    - by hinkmond
    Finally. Oracle has released a new product that I've worked on for a while now. Oracle ADF Mobile is available for iOS and Android, bringing Java ME CDC technology to iPhones and Android devices all over the world. Woot! Java. On iPhone and Android. Yeah, it's like that. See: Java and HTML5 on SmartPhones. Here's a quote: Oracle announced the availability of Oracle ADF Mobile - a framework that enables the development of hybrid applications for mobile devices. Oracle ADF Mobile uses Java and HTML5 and enables developers to develop a single application that installs and runs on both iOS and Android systems. Java - Application logic is developed with the Java language. Oracle brings a lightweight Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even iOS!) Gosh, you'd think it was a big deal. Well, it was! So, go download yours today! Hinkmond

    Read the article

  • Career opportunities for mid-20 .Net developer

    - by Valera Kolupaev
    Recently I moved to Toronto and started exploring career opportunities here. My first impressions of the .NET developer/architect career path are really mixed. Here are the options that come to mind right now:

    - Grow as a developer, lead and solution architect in a large, well-known company like Logitech or IBM.
    - Do .NET development in a medium-size (10-30 person) software shop.
    - Join some start-up guys.

    The first one seems very bureaucratic, which kills all the programming fun that is so valuable to me. And there are not a lot of start-ups based on the MS technology stack. A good mid-size company seems like the best fit for me, since I can have a lot of fun doing new projects. Previously I worked at a large (5000+) outsourcing provider as a .NET developer. It was kind of a 'vanilla' time, because our team was always doing massive-scale projects from scratch, on the latest .NET stack. I would really appreciate it if you share the pros and cons of the path you have chosen, and what you value most in your current project. I'll start:

    Pros for mid-size: You are really close to the business and the application consumers, without all the bureaucratic papers.
    Cons: It seems that opportunities for vertical career growth are rather limited; at some point I would have to start my own company or join the development team of one of the big players.

    Read the article

  • Oracle Enterprise Manager 12c Anniversary at Open World General Session and Twitter Chat using #em12c on October 2nd

    - by Anand Akela
    As most of you will remember, Oracle Enterprise Manager 12c was announced last year at OpenWorld. We are celebrating the first anniversary of Oracle Enterprise Manager 12c at OpenWorld next week. During the last year, Oracle customers have seen the benefits of federated self-service access to complete application stacks, elastic scalability, automated metering, and charge-back from the capabilities of Oracle Enterprise Manager 12c. In this session you will learn how customers are leveraging Oracle Enterprise Manager 12c to build and operate their enterprise cloud. You will also hear about Oracle’s IT management strategy and some new capabilities inside the Oracle Enterprise Manager product family. In this anniversary general session of Oracle Enterprise Manager 12c, you will also watch an interactive role play (similar to what some of you may have seen at "Zero to Cloud" sessions at the Oracle Cloud Builder Summit) depicting a fictional company in the throes of deploying a private cloud. Watch as the CIO and his key cloud architects battle with misconceptions about enterprise cloud computing, and watch how Oracle Enterprise Manager helps them address the key challenges of planning, deploying and managing an enterprise private cloud. The session will be led by Sushil Kumar, Vice President, Product Strategy and Business Development, Oracle Enterprise Manager. Jeff Budge, Director, Global Oracle Technology Practice, CSC Consulting, Inc., will join Sushil for the general session as well. Following the general session, Sushil Kumar (Twitter user name @sxkumar) will join us for a Twitter chat on Tuesday from 1:00 PM to 2:00 PM. Sushil will answer any follow-up questions from the general session or any question related to Oracle Enterprise Manager and Oracle Private Cloud. You can participate in the chat using the hashtag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (a Twitter account is required to participate). You can pre-submit your questions for Sushil using any of the social media channels mentioned below. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • WebLogic 12c and Mobile Platform sales kits

    - by JuergenKress
    At our WebLogic Community Workspace (WebLogic Community membership required) you can find the latest sales plays to update your sales team. Kits include an overview presentation to train your sales teams, cheat sheets for your pocket, and customer PPT presentations:

    WebLogic 12c FY15 sales resources:
    - FY15 CAF Sales Opportunities Webcast - PPT
    - WebLogic Platform-as-a-Service - Customer Presentation | Cheat Sheet
    - WebLogic Coherence Best for Oracle Database - Customer Presentation | Cheat Sheet
    - Capture New Java Projects - Customer Presentation | Cheat Sheet
    - Upsell EM for WebLogic - Customer Presentation | Cheat Sheet
    - WebLogic for ODA - Customer Presentation | Cheat Sheet

    Mobile Platform 12c FY15 sales resources:
    - FY15 Oracle Mobile Platform Sales Opportunities Webcast - PPT
    - Oracle Mobile Strategy (CVC Deck) - Customer Presentation
    - Develop New Mobile Apps - Customer Presentation | Cheat Sheet
    - Mobilize Enterprise Apps - Customer Presentation | Cheat Sheet
    - Mobile Security - Customer Presentation | Cheat Sheet
    - Download: FY15 Mobile Sales Play Content - ZIP (61Mb)

    Please use these documents in the spirit of our joint partnership. Please do NOT publish any WebLogic 12.1.3 or Mobile Platform details before general availability.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Mix | Forum | Wiki

    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress, sales, sales plays, sales kit

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data with poor quality. The reasons for this are only partially related to our software quality, but rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, or the customer made misentries by accident (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, the data troubles may wake up some managers at the customer, and some processes on the customer's side may stop working because they have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development, and data migration can be seen as being to data what refactoring is to code. I think data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts that belong to this topic?

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study. At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered we have added three reports with Word document outputs and five more reports with PowerPoint outputs. One improvement we made over the original application was to create a PowerPoint add-in which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the template. The new library we created also enabled us to create two new Word-based reports in two weeks. The library we created abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup. Content Controls are a good method for identifying sections of a template to be modified or replaced. Combine this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic. In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients. del.icio.us Tags: PSC Group, OOXML, Case Study, Office Open XML, Word, PowerPoint

    Read the article

  • Why does everybody hate SharePoint?

    - by Ryan Michela
    Reading this topic about the most over-hyped technologies, I noticed that SharePoint is almost universally reviled. My experience with SharePoint (especially the most recent versions) is that it accomplishes its core competencies smartly. Namely:

    - Centralized document repository - get all those Office documents out of email (with versioning)
    - User-editable content creation for internal information dissemination - look, an HR site with current phone numbers and the vacation policy
    - Project collaboration - a couple of clicks creates a site with a project's documents, task list, simple schedule, threaded discussion, and possibly a list of all project-related emails
    - Very basic business automation - when you fill out the vacation form, an email is sent to HR

    My experience is that SharePoint only gets really ugly when an organization tries to push it in a direction it isn't designed for. SharePoint is not a CRM, ERP, bug database or external website. SharePoint is flexible enough to serve in a pinch, but it is no replacement for a dedicated tool. (Microsoft is just as guilty of pushing SharePoint into domains where it doesn't belong.) If you use SharePoint for what it's designed for, it really does work. Thoughts?

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I specify that end users should test in our scenario is that we don't have business analysts; we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end-user testing at all; I should say we don't really have anyone testing our code except programmers, which I think is what gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have a QA team or anything of that nature. We don't have fully laid-out test cases for our projects. Ok, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them. So, I don't have the ability to tell them they are doing it all wrong... I have tried gentle pushes in the right direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • Non-Profit Technology for Non-Profits?

    - by TomJ
    I've been looking around for a way to give back to the community, but I haven't found the right fit yet, so an idea came to mind: a non-profit technology "company" that targets non-profits. Do these exist? I've been doing some Google searches and can only find software targeted at non-profits that is created by for-profit companies or that charges what I believe to be an outrageous amount, conferences directed towards non-profits and the technology they may use, or articles complaining about the digital divide and how non-profits view technology as key but don't have the funds or the knowledge to employ it. Pseudo "business model": an open-source 501(c)(3) organization that directly targets non-profits to bridge the "digital divide." Most services would be free, and consulting fees would be charged for customization. Donations would be accepted and government grants would be sought. This would enable non-profits to keep pace with the for-profits in the technology sector, but at little to no cost. Perhaps the first "industry" to be targeted would be organizations that fill key social needs, like unemployment services or food banks.

    Read the article

  • Extending an ABCS

    - by jamie.phelps
    All AIA Application Business Connector Services (ABCS) are extension-enabled out of the box. The number and location of extension points in each ABCS depends on whether the ABCS is a request-response or a fire-and-forget service. A request-reply ABCS, for example, has four extension call-out points:

    - Pre-transformation
    - Post-transformation, Pre-invoke
    - Post-invoke, Pre-transformation
    - Post-transformation, Pre-reply

    Each XSL transformation also has its own extension call-out; however, for now we are only discussing the ABCS extension call-outs. To extend an ABCS, you'll first need to identify the specific extension points that are available in your ABCS and choose the one or more that you want to implement. You can get an idea of the extension points available in your ABCS by looking into the AIAConfigurationProperties.xml file found under the AIA_HOME/config directory. Find the configuration entry for your ABCS and look for the extension properties: there is one property per extension call-out, and each defaults to false. Each extension point in the ABCS has a corresponding configuration property that controls whether or not the extension call-out is active at runtime, so these properties can give you some idea of what extension points are available in your ABCS. However, you'll probably also want to look into the ABCS BPEL code itself to confirm the exact location of the call-out.

    Read the article

  • Workshop CUOA-Oracle Hyperion "Pianificazione economico-finanziaria, reporting e performance management" - Altavilla Vicentina, 25/10/2012

    - by Edilio Rossi
    More than 100 professionals - managers from the Administration, Finance and Control function and consultants from the sector - attended the workshop, organized by CUOA and Oracle Hyperion in collaboration with Adacta Studio Associato. It was a unique opportunity to explore the topics of "extended" planning and management control in Italian companies - small, medium and large - alternating different perspectives (academic, consulting, technology/application, and end user), all tied together by the common thread of the evolution of models, tools and the use of advanced systems for economic-financial and balance-sheet planning and budgeting. Particular attention was paid to the bank-company relationship in light of the current crisis, and to how innovative performance management and business intelligence systems can help management redesign corporate financing and negotiate with the various stakeholders. Thanks to the customer case studies from GIV (Gruppo Italiano Vini) and Datalogic, attendees could see first-hand how planning and control models and tools are used in two quite different companies, both of which face some of the challenges that today's markets pose to Italian businesses. The presentations are available on request by sending an email to: paolo.leveghi-AT-oracle.com

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible about each technology being considered, as well as about the project itself. Additionally, I tend to delay my decision about the technology until it absolutely has to be made, because as the project progresses, requirements and other factors can change which technology is the best fit for the project.

    Important factors to consider when making technology decisions:

    - Time to implement and maintain
    - Total cost of the technology (including implementation and maintenance)
    - Adaptability of the technology
    - Implementation team's skill sets
    - Complexity of the technology (including implementation and maintenance)
    - Forecasted Return on Investment (ROI)
    - Forecasted Profit on Investment (POI)

    Of these factors, ROI and POI weigh the heaviest because they take the other factors into consideration when calculating profitability and return on investment. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server.

    Important factors for this example:

    Implementation team's skill sets:
    - Member 1 - Skill set: Classic ASP, ASP.NET, and MS SQL Server. Experience: 10 years
    - Member 2 - Skill set: PHP, MySQL, Photoshop and MS SQL Server. Experience: 3 years
    - Member 3 - Skill set: C++, VB6, ASP.NET, and MS SQL Server. Experience: 12 years

    Total cost of technology (including implementation and maintenance):
    - Linux - Initial year: $5,000 (random value); additional years: $3,000 (random value)
    - Windows - Initial year: $10,000 (random value); additional years: $3,000 (random value)

    Complexity of technology:
    - Linux - Large learning curve with user-driven documentation. Estimated learning cost: $30,000
    - Windows - Minimal, based on the team's skills, with Microsoft-based documentation. Estimated learning cost: $5,000

    ROI:
    - Linux - Total initial cost: $35,000; additional cost: $3,000 per year
    - Windows - Total initial cost: $15,000; additional cost: $3,000 per year

    Based on these hypothetical numbers it makes more sense to select the Windows-based web server, because the initial investment in that technology is much lower than for the Linux-based web server. (A small worked comparison of these numbers follows below.)
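    Purely as an illustration of the arithmetic behind the example above (the figures are the random values from the text; a real comparison would also weigh forecasted ROI/POI, adaptability, and the other factors listed), a small sketch:

        // Toy multi-year cost comparison using the example's random figures.
        public class TechnologyCostComparison {

            static double totalCost(double initialCost, double learningCost,
                                    double annualCost, int additionalYears) {
                return initialCost + learningCost + annualCost * additionalYears;
            }

            public static void main(String[] args) {
                int additionalYears = 2; // a three-year horizon: initial year + two more

                double linux = totalCost(5_000, 30_000, 3_000, additionalYears);
                double windows = totalCost(10_000, 5_000, 3_000, additionalYears);

                System.out.printf("Linux   3-year cost: $%,.0f%n", linux);   // $41,000
                System.out.printf("Windows 3-year cost: $%,.0f%n", windows); // $21,000
            }
        }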

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.)

    I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems - How to Weigh a Purchase Decision (October 2012). Download the full report here.

    In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata:

    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to 12X improvement as described by customers, over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution whose reserve capacity makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge." And it's pre-engineered for long-term growth.

    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article
