Search Results

Search found 2827 results on 114 pages for 'warehouse builder'.

Page 65/114 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Windows CE support forums

    - by Valter Minute
    As you may already know, Microsoft is moving its communities to a new platform based on web forums. The "old" USENET newsgroups are going to be replaced by web forums organized by categories: one for low-level platform development (for the guys working with Platform Builder), another for native application development (for people writing their real-time applications in C/C++ or using Silverlight for Windows Embedded) and, last but not least, a forum dedicated to managed application development (for people who are quickly developing their applications with the .NET Compact Framework). You can find the new forums here: http://social.msdn.microsoft.com/Forums/en-US/category/windowsembeddedcompact Read the "welcome" message to choose the best forum for your question, and never forget that to get a good answer you should ask your question in a good way. Here you can find some suggestions on how to ask questions (in newsgroups or forums): http://guruce.com/blogpost/howtoaskquestionsonnewsgroups (Thanks to Michel Verhagen for the article)

    Read the article

  • How to create a fountain in UDK

    - by user36425
    I'm trying to make a fountain in my level in UDK. I made the base of the fountain using a Cylinder builder brush, and now I'm trying to put water in it. I went to use the FluidSurfaceActor, but I noticed that it is square while my fountain is a cylinder. Is there a way I can change the shape of the FluidSurfaceActor to fit the builder brush shape, or is there another way to do this? Or is it hopeless and I have to make my fountain into a cube? Here is a link/picture to the screenprint of what I'm talking about:

    Read the article

  • Oracle Fusion Middleware 12c Updates (2013/10/17)

    - by Hiro
    Oracle Fusion Middleware 12c Media Pack updates (2013/10/17): 1. Oracle Tuxedo: Oracle Tuxedo System and Applications Monitor Plus 12c Release 1 (12.1.1.1) for Oracle Tuxedo 9.1 for Linux x86 (32-bit); Oracle Tuxedo System and Applications Monitor Plus 12c Release 1 (12.1.1.1) for Oracle Tuxedo 9.1 for HP-UX PA-RISC (32-bit); Oracle Tuxedo System and Applications Monitor Plus 12c Release 1 (12.1.1.1) for Oracle Tuxedo 11.1.1.2.0 for HP-UX Itanium (64-bit); Oracle Service Architecture Leveraging Tuxedo (SALT) 12cR1 (12.1.1.0) for HP-UX Itanium (64-bit). These releases add Oracle Tuxedo System and Applications Monitor Plus (TSAM Plus) 12c support for Oracle Tuxedo 9.1 and 11.1.1.2.0, and bring Oracle Service Architecture Leveraging Tuxedo (SALT) 12c to HP-UX Itanium. 2. Data Integration: Oracle Data Integrator 12c (12.1.2.0.0); Oracle Enterprise Data Quality 9.0.8. Oracle Data Integrator 12c integrates with Oracle GoldenGate, uses an E-LT (Extract, Load, Transform) architecture, offers a migration path from Oracle Warehouse Builder to Oracle Data Integrator, and can be monitored through Oracle Enterprise Manager Cloud Control 12c.

    Read the article

  • Interesting sessions/tips from RMOUG

    - by jean-pierre.dijcks
    One of the sessions I attended at last week's RMOUG was a session on Temp Tablespace Groups. I had a look because I had no experience with this and it seemed to help with parallel processing and the allocation/usage of temp. You can read the excellent write-up at Kellyn Pedersen's blog - who did the session and all the work - here. So for all of those who may be seeing lots of waits like enq: TS - Contention when you are doing hash joins and sorts, do have a look at the above blog post. I also had the chance to listen in at Stewart Bryson's session on Restartability (he had 3 Rs) where he gave very useful tips about how to deal with your data warehouse loads. Questions like whether or not to run in archive log mode were well covered. Flashback archives, also nice to hear about. Very nice talk, very interesting. Unfortunately he hasn't blogged about it yet, so no pointers to that one. Got to see a couple of other interesting sessions, and as conferences go got to meet some interesting Oracle folks from the region. As usual RMOUG was useful and fun. Off to the drawing boards to design next year's session!

    Read the article

  • Oracle at the Gartner BI Summit Next Week

    - by kimberly.billings
    We're heading back to Vegas next week - this time for the Gartner Business Intelligence Summit April 12 - 14, 2010 at the Mandalay Bay Resort. Be sure to attend our Customer Case Study session featuring Beckman Coulter, Tuesday, April 13th at 9:45, then swing by our booth to have all your questions answered by Oracle BI and data warehousing experts. We will also be scheduling Face-to-Face meetings with Oracle product executives, so if you would like to schedule a meeting, submit a request via the online agenda builder and Gartner will arrange a meeting with the appropriate Oracle contact. To view the agenda and to find out more about the Gartner BI Summit, visit: http://www.gartner.com/it/page.jsp?id=1118023

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: the actual data that needs to be used lives in a database, not in a spreadsheet; the actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network; the overall process needs to run fast - much faster than a single processor; the actual data needs to be kept secured - another reason to not want to move it from the database and across the network; and the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In this post, we will show how we moved from a sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and then called our IRR function. This worked. No SQL coding required! Going from Crawling to Walking Now that we had shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
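    A minimal sketch of the ore.create() load and the ore.groupApply() call just described (the connection details, column names, and the SimpleMWRR() argument list here are placeholders and assumptions, not the exact code from the proof-of-concept):

        # Assumes the myIRR package (SimpleMWRR plus the SimpleMWRRData sample data) is
        # installed both locally and on the database server, and that ORE can connect.
        library(ORE)
        ore.connect(user = "rquser", password = "welcome1", sid = "orcl",
                    host = "dbserver", all = TRUE)              # placeholder credentials

        library(myIRR)
        ore.create(SimpleMWRRData, table = "IRR_DATA")          # sample data becomes an Oracle table
        ore.sync(); ore.attach()                                # IRR_DATA is now visible as an ore.frame

        # Split IRR_DATA by ACCOUNT inside the database, run SimpleMWRR() on each group
        # in an embedded R engine on the database server, and combine the results.
        results <- ore.groupApply(
          IRR_DATA,
          INDEX = IRR_DATA$ACCOUNT,
          function(acct) {
            library(myIRR)
            data.frame(ACCOUNT = acct$ACCOUNT[1],
                       MWRR    = SimpleMWRR(bmv  = acct$BMV[1],
                                            cf   = acct$FLOW,
                                            t    = acct$TIME,   # a TIME column is assumed
                                            emv  = acct$EMV[nrow(acct)],
                                            tend = max(acct$TIME)))
          })
        irr_by_account <- ore.pull(results)                     # per-account results back in local R
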
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function. Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server - this is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE. Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input. You can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement. The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept (hint: you might need to right-click on these images and view them full-screen to see the entire image). From the above, you can notice a few things (numbers 1 thru 5 below correspond with the highlighted numbers on the images above): the SQL completed in 110 seconds (1.8 minutes); we calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation); we accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation); we ran with 72 degrees of parallelism spread across 4 database servers; and most of our 110 seconds was spent in the "External Procedure call" event. On average, we performed 8,200 executions of our R function per second (911k accounts / 110s); each execution was passed about 110 rows of data (103m detail rows / 911k accounts); we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods); and we processed over 900,000 rows of database data in R per second (103m detail rows / 110s). R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate rates of return for 5 time periods for almost a million individual accounts in less than 2 minutes.
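    As a back-of-the-envelope check of the throughput figures above (recomputed here in R, not taken from the SQL Monitor output):

        911e3 / 110       # ~8,200 R function executions (accounts) per second
        103e6 / 911e3     # ~110 detail rows passed to each execution
        5 * 911e3 / 110   # ~41,000 single time period calculations per second
        103e6 / 110       # ~900,000+ database rows processed in R per second
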
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Oracle Cloud Solutions @ Cloud Expo East (June 10-12)

    - by Gene Eun
    Oracle is proud to be the Platinum Sponsor at next week's Cloud Expo East (June 10-12) at the Javits Center in New York City. This is the fourth consecutive year Oracle has sponsored Cloud Expo. As in years past, Oracle has a full schedule of sessions, shown below. We'd love to have you be our guest at Cloud Expo East, attend one of our sessions, and hear more about our thought leadership and leading solutions in the Cloud and Big Data. We'll also have booth #207, so please stop by and see a demo of many of our cloud offerings.
    Tuesday, June 10, 4:40 pm - 5:15 pm: Top 5 Best Practices for your Application Platform As a Service (Cloud Business and the API Economy | Deploying the Cloud; room TBD)
    Wednesday, June 11, 9:10 am - 10:10 am: Cloud Odyssey: A Hero's Quest (All Tracks Keynote; Keynote Hall)
    Wednesday, June 11, 10:15 am - 10:45 am: Big Data Management System: Smart SQL Processing Across Hadoop and Your Data Warehouse (All Tracks General Session; Keynote Hall)
    Wednesday, June 11, 2:50 pm - 3:25 pm: Plug into the Cloud: Your Blueprint to Database as a Service (Mobile | Hot Topics; room TBD)
    Wednesday, June 11, 2:50 pm - 3:25 pm: From Supply-led to Demand-led: Lead Your IT to Better Serve Your Users (Cloud Business and the API Economy | Deploying the Cloud; room TBD)
    Thursday, June 12, 2:50 pm - 3:25 pm: Reduce Complexity and Accelerate Innovation with IaaS and PaaS (Cloud Business and the API Economy | Deploying the Cloud; room TBD)
    At Cloud Expo East, you'll get to learn about and experience the latest in Cloud and Big Data. If you don't have a pass to Cloud Expo, no problem. Oracle is giving away FREE VIP Gold Passes! We would love to have you attend Cloud Expo on us. Just go to Oracle's Cloud Expo 2014 event registration page and follow the instructions for a complimentary pass. Stay tuned to this blog and follow us on Twitter (@OracleCloudZone) during and after Cloud Expo for more insight and observations about this year's conference.

    Read the article

  • SAP Applications Certified for Oracle SPARC SuperCluster

    - by Javier Puerta
    SAP applications are now certified for use with the Oracle SPARC SuperCluster T4-4, a general-purpose engineered system designed for maximum simplicity, efficiency, reliability, and performance. "The Oracle SPARC SuperCluster is an ideal platform for consolidating SAP applications and infrastructure," says Ganesh Ramamurthy, vice president of engineering, Oracle. "Because the SPARC SuperCluster is a pre-integrated engineered system, it enables data center managers to dramatically reduce their time to production for SAP applications to a fraction of what a build-it-yourself approach requires and radically cuts operating and maintenance costs." SAP infrastructure and applications based on the SAP NetWeaver technology platform 6.4 and above and certified with Oracle Database 11g Release 2, such as the SAP ERP application and SAP NetWeaver Business Warehouse, can now be deployed on the SPARC SuperCluster T4-4. The SPARC SuperCluster T4-4 provides an optimized platform for SAP environments that can reduce configuration times by up to 75 percent, reduce operating costs by up to 50 percent, improve query performance by up to 10x, and improve daily data loading by up to 4x. The Oracle SPARC SuperCluster T4-4 is the world's fastest general purpose engineered system, delivering high performance, availability, scalability, and security to support and consolidate multi-tier enterprise applications with Web, database, and application components. The SPARC SuperCluster T4-4 combines Oracle's SPARC T4-4 servers running Oracle Solaris 11 with the database optimization of Oracle Exadata, the accelerated processing of Oracle Exalogic Elastic Cloud software, and the high throughput and availability of Oracle's Sun ZFS Storage Appliance, all on a high-speed InfiniBand backplane. Part of Oracle's engineered systems family, the SPARC SuperCluster T4-4 demonstrates Oracle's unique ability to innovate and optimize at every layer of technology to simplify data center operations, drive down costs, and accelerate business innovation. For more details, refer to our press release, the datasheet Oracle's SPARC SuperCluster T4-4 (PDF), the datasheet Oracle's SPARC SuperCluster Now Supported by SAP (PDF), and the video podcast Oracle's SPARC SuperCluster (MP4).

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is: "Make the future Java"-- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said. Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers. Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased--from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab. Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK. Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux, release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Nandini announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology. Saab also explored Java SE 9 and beyond--Jigsaw modularity, Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra.
Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory--a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms. Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use-case of JavaCard in his country's MintChip e-cash technology--deployable on smartphones, USB devices, computers, tablets, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out. Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the Enterprise space--greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms, vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more. Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you'd go down in a submarine and "stick your face up against the window." Now, it's a remotely operated technological telepresence--"I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard's team regularly offers live feeds and programming out to schools and the world, spanning 188 countries--with embedded educators as part of the expeditions.
It's technology at its finest, inspiring the next generation of scientists and explorers!

    Read the article

  • ArchBeat Link-o-Rama for 10-17-2012

    - by Bob Rhubart
    This is your brain on IT architecture. Oracle Technology Network Architect Day in Los Angeles, Oct 25 Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. Register now. Panel: On the Impact of Software | InfoQ Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012. OTN APAC Tour 2012: Bangkok, Thailand - Oct 22, 2012 Mike Dietrich shares information on the upcoming OTN APAC Tour stop in Bangkok. Registration is open. Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster | Giri Mandalika Giri Mandalika shares an overview of a new Optimized Solution for Oracle E-Business Suite (EBS) R12 (12.1.3). As Giri explains, "This solution was centered around the engineered system, SPARC SuperCluster T4-4." The Oldest Big Data Problem: Parsing Human Language | The Data Warehouse Insider Dan McClary offers up a new whitepaper "which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance." Mobile Apps for EBS | Capgemini Oracle Blog Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS. Ease the Chaos with Automated Patching: Enterprise Manager Cloud Control 12c | Porus Homi Havewala This new OTN article is excerpted from Porus Homi Havewala's latest book, Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos (2012, Packt Publishing). Thought for the Day "Never make a technical decision based upon the politics of the situation, and never make a political decision based upon technical issues." — Geoffrey James Source: softwarequotes.com

    Read the article

  • Register the "OneCode & OneScript" session at MVP Global Summit November 2013

    - by Jialiang
    Originally posted on: http://geekswithblogs.net/Jialiang/archive/2013/11/04/register-the-quotonecode-amp-onescriptquot-session-at-mvp-global-summit.aspx The yearly Microsoft MVP Global Summit will lift its curtain on Nov 17th in Bellevue, WA. This year, we have prepared three new apps and many new samples in response to MVPs' feedback last year. If you are attending this year's Microsoft MVP Global Summit, you will have the privilege to kiss or bite their development team: Sample Browser Windows Phone app – with 6000+ MSDN code samples which will be at your fingertips anytime and anywhere. Script Explorer for PowerShell ISE – with 8000+ script samples which will be at your fingertips when you are writing scripts in PowerShell ISE. PowerShell checkin policy for TFS – automatically checks your PowerShell script code against best practices of PowerShell. Interested? Please open your Schedule Builder for the MVP Summit 2013, and register for the event called "OneCode & OneScript" on Nov 17th. We look forward to seeing you and hearing your feedback.

    Read the article

  • "Automation Error Unspecified Error" ... err Error

    - by Tim Dexter
    One of the best error messages I have seen in a long time, and I've seen some doozies!  There have been a fair few internal emails flying over the past week about issues with the template builder for MS Word not working. The issue has been found, so if you are hitting some behaviour similar to this: I have installed BI Publisher Desktop 11.1.1.6 for 32 bit. I have to load the data from XML to RTF Template. As per instruction when I click on tab Sample XML nothing happen. When I click on any other tab from BI Publisher menu, I am getting one error in pop-up menu "Automation Error Unspecified Error". I am unable to open any of the tab of BI Publisher menu including help. Have no fear, it's, for once, not a BIP issue but a Microsoft one! Check here for what you need to do to resolve the error.

    Read the article

  • Live Webcast, Dec. 6: Enterprise Clouds with Oracle VM

    - by Monica Kumar
    Mark your calendar! On Tuesday, Dec. 6th at 9am PT, we are hosting a live webcast with Oracle VM experts. Enterprise Clouds with Oracle VM Tuesday, Dec. 6 at 9 AM US PT The ability to create a cloud leveraging public or private infrastructure has been hampered by the lack of availability of practical, cost-effective choices for server virtualization. In this session, you will learn how Oracle provides a single virtualization solution for your entire infrastructure, and how Oracle Enterprise Manager and Oracle Virtual Assembly Builder help you manage Oracle Applications across the cloud. Also find out how virtualization was leveraged to transform IT for Oracle University and support more than 350,000 students in more than 40,000 classes each year. Those lessons have paved the path to private cloud computing inside Oracle. Speakers: Adam Hawley, Senior Director of Product Management, Oracle; Dan Herrup, Principal Systems Engineer, Oracle Corporate Citizenship. Register Now.

    Read the article

  • JMonkeyEngine display a spatial in a Nifty GUI interface

    - by Yanick Rochon
    I want to display a spatial (or the rendering of a spatial/scene) in my HUD interface. I'm really not sure how to go about this. I have searched the documentation, but all the queries I tried yielded no results, and all I could find about images is that one can specify one with the setBackgroundImage method in the builder and setImage from the ImageRenderer class. The latter takes a String or a NiftyImage, but I'm not sure how to create one without loading an image file. Any help to understand this (if it is even possible) is appreciated. Thanks!

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set. Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls.  R is great for doing advanced computations.  Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns.  This blog post shows how Oracle R Enterprise, which combines R with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes. The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers' portfolios.  This information is needed for customer statements and the online web application.  In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations.  But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation. IRR – a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly.  Rather, IRR is a calculation where approximation techniques need to be used.  In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted rates of return and time weighted rates of return.  You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return First Steps - Calculating IRR in R We will start with calculating the IRR in standalone/desktop R.  In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale.  The first step we did was to get some sample data.  For a historical IRR calculation, you have balances and cash flows.  In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel.  The above figure shows part of the spreadsheet of sample data.  The data provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R.  This is something that R does well.  R offers multiple ways to work with spreadsheet data.  For instance, one could save the spreadsheet as a .csv file.  In our case, the customer provided a spreadsheet file containing multiple sheets where each sheet provided data for a different sample account.  To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files.  We wrote ourselves a little helper function called getsheet() around the RODBC package.  Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData. Writing the IRR function At this point, it was time to write the money weighted rate of return (MWRR) function itself.  The definition of MWRR is easily found on the internet or, if you are old school, you can look in an investment performance text book.
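    Before getting into the formula itself, here is a minimal sketch of the spreadsheet-loading step just described, assuming a Windows R session with the RODBC package and one worksheet per sample account (this getsheet() is a simplified stand-in for the helper mentioned above; the sheet-name-as-account convention is an assumption):

        library(RODBC)

        # Read one worksheet into a data.frame and tag its rows with the sheet name.
        getsheet <- function(sheet, channel) {
          d <- sqlQuery(channel, paste0("select * from [", sheet, "]"))
          d$ACCOUNT <- sub("\\$$", "", sheet)
          d
        }

        ch     <- odbcConnectExcel("SimpleMWRRData.xls")   # file name is a placeholder
        sheets <- sqlTables(ch)$TABLE_NAME                 # one sheet per sample account
        SimpleMWRRData <- do.call(rbind, lapply(sheets, getsheet, channel = ch))
        odbcClose(ch)
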
In the customer proof, we based our calculations off the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer.  (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique.  For IRR, you need to find the value of the rate of return (r) that sets the Net Present Value of all the flows in and out of the account to zero.  With R, we solve this by defining our NPV function: where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of time (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum.  Here is an example of using optimize: where low and high are scalars that indicate the range to search for an answer.  To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high.  We will set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return. Enhancing and Packaging the IRR function With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs.  To account for this, our approach was to first try to find an answer for r within a narrow range, then if we did not find an answer, try calling optimize() again with a broader range.  See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can now write a simplified version of our MWRR function.  (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation.  In our actual customer proof, we also defined time-weighted rate of return calculations.  The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package of our IRR functions and sample data.  For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data.  We created the shell of the package by calling: To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory.  In those files, you need to at least provide a value for the "title" section.
Once that is done, you can change directory to the IRR directory and type at the command-line: The myIRR package for this blog post (which has both SimpleMWRR source and SimpleMWRRData sample data) is downloadable from here: myIRR package Testing the myIRR package Here is an example of testing our IRR function once it was converted to an installable package: Calculating IRR for All the Accounts So far, we have shown how to calculate IRR for a single account.  The real-world issue is how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html).  Given that our sample data can fit in memory, one easy approach is to use R's "by" function.  (Other approaches to Split-Apply-Combine such as plyr can also be used.  See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/). Here is an example showing the use of "by" to calculate the money weighted rate of return for each account in our sample data set.  Recap and Next Steps At this point, you've seen the power of R being used to calculate IRR.  There were several good things: R could easily work with the spreadsheets of sample data we were given; R's optimize() function provided a nice way to solve for IRR - it was both fast and allowed us to avoid having to code our own iterative approximation algorithm; R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations - these could be easily added to our IRR functions; and the Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once. However, there are several challenges yet to be conquered at this point in our story: the actual data that needs to be used lives in a database, not in a spreadsheet; the actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network; the overall process needs to run fast - much faster than a single processor; the actual data needs to be kept secured - another reason to not want to move it from the database and across the network; and the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
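    To make the recap concrete, here is a minimal end-to-end sketch of the pieces described above - the npv() function, the optimize()-based SimpleMWRR(), and the "by" call that applies it to each account (the discounting convention, column names, search ranges, and tolerances are illustrative assumptions, not the exact code from the customer proof):

        # NPV of an account's flows at rate r; the money weighted rate of return is the
        # r that drives this (approximately) to zero.
        npv <- function(r, bmv, cf, t, emv, tend) {
          bmv * (1 + r)^tend + sum(cf * (1 + r)^(tend - t)) - emv
        }

        # Find r by minimizing |npv| with optimize(); if nothing close to zero is found
        # in the narrow range, retry over a broader one.
        SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 1) {
          f   <- function(r) abs(npv(r, bmv, cf, t, emv, tend))
          fit <- optimize(f, lower = low, upper = high, tol = 1e-9)
          if (fit$objective > 1e-3) {
            fit <- optimize(f, lower = -0.999999, upper = 100, tol = 1e-9)
          }
          fit$minimum
        }

        # Split-apply-combine over the sample data with "by": one rate of return per
        # account (BMV/FLOW/EMV come from the sample spreadsheet; TIME is assumed).
        irr_by_account <- by(SimpleMWRRData, SimpleMWRRData$ACCOUNT, function(acct) {
          SimpleMWRR(bmv  = acct$BMV[1],
                     cf   = acct$FLOW,
                     t    = acct$TIME,
                     emv  = acct$EMV[nrow(acct)],
                     tend = max(acct$TIME))
        })
        irr_by_account[1:3]   # peek at the first few accounts
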

    Read the article

  • OpenWorld Suggest-a-Session Voting on Oracle Mix now OPEN!

    - by keith.laker
    Last year the Oracle OpenWorld team decided to use Oracle Mix as a way to select some of the papers for OpenWorld, and this year we are following the same process. The majority of papers for this year's conference have already been selected; however, there are some presentation slots still available, so the OpenWorld team are giving you the chance to vote on which papers you want to see at this year's OpenWorld conference. The voting process has just opened and will close on June 20. I did a quick search on the list of sessions and one paper really caught my eye: Case Study: Real-Time data warehousing and fraud detection with Oracle 11gR2 by Dr. Holger Friedrich. As a data warehouse product manager I would love to see this paper selected. I have attended a number of presentations over the years given by Holger and he is an excellent, knowledgeable and entertaining presenter. The subject area is, for me, very interesting as it covers topics that I know are important to our customers, and this case study highlights the innovative use of key database features. I would strongly encourage everyone to please vote for this paper. You can vote for Holger's presentation by going here: https://mix.oracle.com/oow10/proposals/10566-case-study-real-time-data-warehousing-and-fraud-detection-with-oracle-11gr2 There are some rules relating to the voting process and these are all explained here: https://mix.oracle.com/oow10/faq A quick overview of the voting rules: 1) You have to be a member of the Oracle Mix community. But membership is free! Sign up for a Mix account and you are on your way. You can sign up by clicking on the "Create an Account" link in the top right corner of the Oracle Mix home page: https://mix.oracle.com/ 2) You have to vote for 3 different papers. Based on last year's voting pattern, the Mix team found that a number of participants were only voting for their own sessions. This year voters are required to vote on at least three sessions. How do I find the list of presentations? The full list of all available presentations is here: https://mix.oracle.com/oow10/proposals Good luck and happy voting. Looking forward to seeing you all at OpenWorld.

    Read the article

  • Which simple Java JPA ORM tool to use?

    - by Guillaume
    Which Java ORM library implementing JPA would you recommend, and why, given the following criteria: free & open source; alive (at least bug fixes and a mailing list); good documentation; simple (simpler than Hibernate)? I need to select a simple ORM tool that can be set up in minutes, without too much configuration, and that is easy to understand, for writing simple CRUD DAOs. A query builder would be an interesting plus. Later I may have to move to Hibernate; that's why being JPA compliant is a must. I have found some candidates on the web, but not much feedback, so I will gladly take your advice on the topic. ----- EDIT --------- I have been successfully testing ebean/avaje with a small test case. Does anyone have feedback on using these tools in production?

    Read the article

  • Brand New Oracle WebLogic 12c Online Launch Event, December 1, 10am PT

    - by B Shashikumar
    The brand new WebLogic 12c will be launched on December 1st with a 2-hour global webcast highlighting salient capabilities and benefits and featuring Hasan Rizvi, SVP, Oracle Fusion Middleware and Java. For the more techie types, the 2nd hour will be a developer focused discussion including multiple demos and live Q&A. Please join us, with your fellow IT managers, architects, and developers, to hear how the new release of Oracle WebLogic Server is: Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform Built to drive higher value for your current infrastructure and significantly reduce development time and cost Enhanced with transformational platforms and technologies such as Java EE 6, Oracle’s Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder   

    Read the article

  • MODX based site has been compromised, and tagged by Google as malware

    - by JAG2007
    I'm the webmaster (I inherited the site from the developer) for a site called kenbrook.org. The site is currently being tagged as malware-infected by Google, which gives the following details: http://www.google.com/safebrowsing/diagnostic?site=kenbrook.org Sadly, this is the second time it has occurred. I originally posted the issue on Stack Overflow in this post when it happened last year, shortly after I inherited the site. At the time the fix was a simple removal of a few lines of code from a .js file, but I never did discover or resolve the vulnerability. The site is built on MODX, which neither I nor the original builder have any familiarity with. I've tried to check for security updates from MODX, but updating that software has been a real pain also. Sooo... what's my next step to getting this whole issue resolved? Or steps?

    Read the article

  • Designing complex query builders in java/jpa/hibernate

    - by Ramraj Edagutti
    I need to build complex SQL queries programmatically, based on large filter conditions. For example, below are a few sample/hypothetical filter conditions, based on which I need to fetch users. Country: India. States: Andhra Pradesh (AP), Gujarat (GUJ), Karnataka (KTK). Districts: all districts in AP except 3 districts, any 5 districts from GUJ, all districts from KTK except 1 district. Cities: all cities in AP, all cities except a few, include only 50 specific cities from KTK. Villages: similar conditions to the above, with various combinations... Currently, we have a query builder which is very complex in nature and not easy to modify/refactor for improvements. So we are thinking of a complete redesign of it. Any suggestions on how to build this kind of complex query builder programmatically using some best practices/design patterns?

    Read the article

  • NetBeans 6.9 Released

    - by Duncan Mills
    Great news: the first NetBeans release conducted fully under the stewardship of Oracle is now available. NetBeans IDE 6.9 introduces the JavaFX Composer, a visual layout tool for building JavaFX GUI applications, similar to the Swing GUI builder for Java SE applications. With the JavaFX Composer, developers can quickly build, visually edit, and debug Rich Internet Applications (RIA) and bind components to various data sources, including Web services. The NetBeans 6.9 release also features OSGi interoperability for NetBeans Platform applications and support for developing OSGi bundles with Maven. With support for OSGi and Swing standards, the NetBeans Platform now supports the standard UI toolkit and the standard module system, providing a unique combination of standards for modular, rich-client development. Additional noteworthy features in this release include support for JavaFX SDK 1.3, the PHP Zend Framework, and Ruby on Rails 3.0, as well as improvements to the Java Editor, Java Debugger, issue tracking, and more. Head over to NetBeans.org for more details and of course downloads!

    Read the article

  • An error has occurred when creating debian packaging

    - by Clepto
    I execute 'quickly share' and I get: Launchpad connection is ok ........ Command returned some WARNINGS: ---------------------------------- WARNING: the following files are not recognized by DistUtilsExtra.auto: mangar/.bzr/README mangar/.bzr/branch-format mangar/.bzr/branch/branch.conf mangar/.bzr/branch/format mangar/.bzr/branch/last-revision mangar/.bzr/branch/tags mangar/.bzr/checkout/conflicts mangar/.bzr/checkout/dirstate mangar/.bzr/checkout/format mangar/.bzr/checkout/views mangar/.bzr/repository/format mangar/.bzr/repository/pack-names ---------------------------------- An error has occurred when creating debian packaging ERROR: can't create or update ubuntu package ERROR: share command failed Aborting The previous time I ran the command everything worked! The previous time I was using Ubuntu, but now I am using Linux Mint 13... I get the same error with 'quickly package'! I need to package my app for the contest.. edit: now I get this too ---------------------------------- ERROR: Python module helpers not found ERROR: Python module Window not found ERROR: Python module mangarconfig not found ERROR: Python module Builder not found Those files exist in the package_lib folder, so why can't it find them?

    Read the article

  • How can you Add Value to your Mobile Apps?

    - by Carlos Chang
    Author: Craig Mikus, Sr. Director, Enterprise Mobile Solutions Seems like every customer is either building or planning to build mobile apps, especially customer-facing apps. Why? Inevitably, all companies want to improve the customer experience through more quality interactions that drive customer satisfaction, customer loyalty, new revenue streams, and even improve the way they service their customers. What better way than mobile apps? Right? But how can customers add more value to these mobile apps to drive more business benefit? Look closely, the answer just might be right in front of you. Still need another clue? What are the first 4 letters of mobile – mo-bi? Or pronounced differently, More BI. That's right – add more business intelligence to your overall mobile strategy. In today's customer-centric world where customer interactions and personalization are critical, it's important to leverage a BI strategy that complements and feeds into your mobile strategy. For example, I was recently talking to a customer that was implementing a data warehouse project focused on customer analytics. Their goal was to understand who their best customers are and why, develop customer profiles, identify customer trends and patterns, identify cross-sell opportunities, and much more. The company then wanted to feed this information to marketing for targeted campaigns and programs. As we continued to talk, I asked my contact if they had plans to feed this information into their customer-facing mobile apps to personalize the apps, target their interactions, and hopefully drive customer loyalty and new revenue streams. Two minutes later, my contact was calling his mobile development teams. So my advice to everyone: as you establish your enterprise mobile strategy and goals, remember that "mo-BI" is a critical component to add value to your mobile apps! So make sure you have "mo-BI" in your mobile strategy. As I come to think of it, did you ever notice that Big Data also starts with BI?

    Read the article

  • Oracle WebLogic 12c Launch

    - by Robert Baumgartner
    On December 1, 2011, Oracle WebLogic Server 12c will be introduced worldwide. At 19:00 there will be an Executive Overview with Hasan Rizvi, Senior Vice President, Product Development. At 20:00 there will be a Developer Deep-Dive with Will Lyons, Director, Oracle WebLogic Server Product Management. The new release of Oracle WebLogic Server is: • Designed to help customers seamlessly move into the public or private cloud with an open, standards-based platform • Built to drive higher value for customers' current infrastructure and significantly reduce development time and cost • Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder. You can register here: Anmeldung

    Read the article

  • Best sites to find good .NET Developers

    - by Mag20
    I am looking for good sites to post a position for a .NET developer. I already tried: Craigslist (got about 10 resumes, but most couldn't answer our technical questions) and Stack Overflow Careers (no responses). What sites have you had success with for finding good developers? UPDATE 1: Wanted to provide some more information: My company is in NJ. We are a small startup, fewer than 10 people. Monster, Dice, and CareerBuilder all charge like $500 a month per posting, which seems a bit much. Also, only Dice specifically targets technical positions. With Monster and CareerBuilder I am a bit worried about having to go through hundreds of resumes that don't apply.

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >