Search Results


  • Orlando .NET Code Camp 2012 - total success..

    - by mrad
    Their site is www.orlandocodecamp.com. This year's camp was held at Seminole State College, and it was well worth it. I took a chance on going, getting up at 5am and driving 2+ hours from Jax to Sanford. Ran into some old friends and a bunch of new ones. Coders are not really good at networking, but they sure did show up: attendance was a solid 500+ geeks, and some sessions were standing room only. MVP John Papa had the room packed out at his every session. I really enjoyed the great and inspiring WP7 presentations by MVP Atley Hunter from Canada. And of course the MVP legend Joe Healy was everywhere, encouraging and promoting cool stuff; hopefully we'll get him back to present at JaxDUG and/or bring back Microsoft workshops to the Jax area.

    Read the article

  • Java JRE 1.7.0_45 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Java Runtime Environment 7u45 (a.k.a. JRE 7u45-b18) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 11i and 12.0, 12.1, and 12.2 for Windows-based desktop clients.

    Effects of new support dates on Java upgrades for EBS environments

    Support dates for the E-Business Suite and Java have changed. Please review the sections below for more details:
    - What does this mean for Oracle E-Business Suite users?
    - Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
    - Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

    All JRE 6 and 7 releases are certified with EBS upon release

    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops, from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline. We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops.

    What's needed to enable EBS environments for JRE 7?

    EBS customers should ensure that they are running JRE 7u17, at minimum, on Windows desktop clients. Of the compatibility issues identified with JRE 7, the most critical is an issue that prevents E-Business Suite Forms-based products from launching on Windows desktops that are running JRE 7. Customers can prevent this issue, and all other JRE 7 compatibility issues, by ensuring that they have applied the latest certified patches documented for JRE 7 configurations to their EBS application tier servers. These patches are compatible with JRE 6 and 7, production-ready, and fully tested with the E-Business Suite, and they may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied.

    Where are the official patch requirements documented?

    All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 are documented in these Notes:
    For EBS 11i:
    - Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1)
    - Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1)
    For EBS 12.0, 12.1, and 12.2:
    - Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1)
    - Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1)

    EBS + Discoverer 11g users

    JRE 1.7.0_45 is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements: Discoverer (11g) 11.1.1.6 plus Patch 13877486 and later. Reference: How To Find Oracle BI Discoverer 10g and 11g Certification Information (Document 233047.1)

    Worried about the "mismanaged session cookie" issue?

    No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23, and it carries forward into all future JRE releases on the JRE 6 and 7 codelines. In other words, if you wish to avoid the mismanaged session cookie issue, apply any release after JRE 1.6.0_22 on the JRE 6 codeline, or JRE 7u10 and later updates on the JRE 7 codeline.

    Implications of Java 6 End of Public Updates for EBS users

    The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added): "Java SE 6 End of Public Updates Notice: After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support."

    What does this mean for Oracle E-Business Suite users?

    EBS users fall under the category of "enterprise customers" above. Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017. In other words, nothing changes for EBS users after February 2013: EBS users will continue to receive critical bug fixes, security fixes, and general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017.

    How can EBS customers obtain Java 6 updates after the public end-of-life?

    EBS customers can download Java 6 patches from My Oracle Support. For a complete list of all Java SE patch numbers, see: All Java SE Downloads on MOS (Note 1439822.1)

    Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?

    This upgrade is highly recommended but remains optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients. Java 6 is covered by Extended Support until June 2017, so all E-Business Suite customers must upgrade to JRE 7 by June 2017.

    Coexistence of JRE 6 and JRE 7 on Windows desktops

    The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default, so JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

    Applying updates to JRE 6 and JRE 7 on Windows desktops

    Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed. JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page; an RSS feed is available on that site for those who would like to be kept up to date.

    What do Mac users need?

    Mac users running Mac OS X 10.7 or 10.8 can run JRE 7 plug-ins. See this article: EBS 12 certified with Mac OS X 10.7 and 10.8 with Safari 6 and JRE 7

    Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

    JRE is used for desktop clients; JDK is used for application tier servers. JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. Java SE 6 is covered by Extended Support until June 2017, and all EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms. JDK 7 is certified with E-Business Suite 12. See: Java (JDK) 7 Certified for E-Business Suite 12 Servers

    References
    - Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009
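    A quick aside that is not part of the certification note itself: when verifying desktops before and after an update, a throwaway Java program is enough to confirm which JRE level actually gets invoked. A minimal sketch (the class name is arbitrary):

        // VersionCheck.java - prints the version, vendor, and install
        // location of whichever JRE runs it.
        public class VersionCheck {
            public static void main(String[] args) {
                System.out.println("Version: " + System.getProperty("java.version"));
                System.out.println("Vendor:  " + System.getProperty("java.vendor"));
                System.out.println("Home:    " + System.getProperty("java.home"));
            }
        }

    Compiling it once and running it with each installed JRE (for example, "C:\Program Files\Java\jre7\bin\java" VersionCheck) shows which codeline each launcher resolves to.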

    Read the article

  • A Case for Oracle Fusion Middleware by Lucas Jellema

    - by JuergenKress
    An in-depth look at the interaction of people, processes, and technologies in the transition to a service-oriented architecture.

    Author's Note: This article presents a profile of a fictitious organization, NOPERU. The story of NOPERU as told in this article is actually a collage of the events at some dozen organizations that I have been involved with over the past few years. None of these organizations sports all the characteristics of NOPERU, but all of them have gone through or are going through a similar transition as described here, and all aspects of this article were taken from real life at one or (usually) many of these organizations.

    Background: NOPERU (National Organization for Permits for Emissions and Resource Usage) is a public organization that continues to transform in terms of its business, organization and technology. Changing business requirements, new interaction channels, and increasing demands for more flexibility, faster throughput and lower costs drive these transformations, while technological evolution and new architecture patterns enable the change. NOPERU chose Oracle Fusion Middleware as the technology platform to implement the new architecture and required applications. This article takes a close look at NOPERU's journey from its origins in the early 1990s as a largely paper-based entity with regional databases and client-server Oracle Forms applications. Its upcoming business objectives are introduced: what is required of the organization and what the higher goals behind these requirements are. The architecture roadmap is described at a high level as well as drilled down to a service-oriented design. Based on the architecture roadmap and the business requirements, NOPERU went through a technology selection to determine the technology stack with which the future would be realized in terms of IT. The article discusses that selection and details the projects subsequently planned (and executed to date). The new architecture and technology, as well as the introduction of an Agile development method, have had substantial consequences for the IT organization, the processes and individual staff members. The approach NOPERU has adopted with regard to the people and the organization is portrayed. Finally, the article discusses many conclusions that NOPERU has drawn that may benefit itself and other organizations.

    Introducing NOPERU: NOPERU is a national organization charged with issuing permits for excessive emissions (e.g., carbon dioxide) and disproportionate usage of such resources as energy or water. Anyone (whether a commercial enterprise, government agency or private person) who emits or consumes more than what is considered "fair usage" requires such a permit. When someone builds an outdoor heated swimming pool, for example, or open-air terrace heating, such a permit needs to be obtained. When a company installs new, energy-intensive equipment, such as water boilers or deep freezers, it too needs to get a NOPERU permit. Government-sponsored projects at every level that involve consumption of large quantities of fresh water or production of high volumes of emissions must turn to NOPERU for a permit. Without the required license, any interested party can get a court to immediately put a stop to the disputed activity. Read the full article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: Lucas Jellema,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Searching for the Perfect Developer’s Laptop

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop, but the laptop won because with a laptop I can carry it everywhere, and with a desktop I can't. I searched for about 2 weeks and narrowed it down to a list of must-haves:
    - i7 processor (I wasn't going to settle for an i5 or AMD. I wanted a true quad-core machine, not 2 dual-cores fused together.)
    - 15.6" monitor
    - SSD, 128GB or larger. It's almost 2011 and I don't want an old standard HDD in this machine.
    - 8GB of DDR3 RAM. The more the better, right?
    - 1GB video card (prefer NVIDIA). I might want to play games with this.
    - HDMI port. Almost a standard on new machines. This would be used when I am on the road and want to stream Netflix to the HDTV in the hotel room.
    - Built-in webcam. This would be to video chat with the wife and kids if I am on the road.
    - 6-cell battery. I've read that an i7 in a laptop really kills the battery. A 6-cell or 9-cell is even better.
    That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200, even with coupons and my company's 10% Dell discount. The only way to get a machine like this was to buy a prebuilt and replace parts. I chose the Lenovo Y560 on Newegg to start as my base. Below is a top-down picture of it.

    Part 1: The Hardware

    The specs for this machine:
    - Color: Gray
    - Operating System: Windows 7 Home Premium 64-bit
    - CPU Type: Intel Core i7-740QM (1.73GHz)
    - Screen: 15.6" WXGA
    - Memory Size: 4GB DDR3
    - Hard Disk: 500GB
    - Optical Drive: DVD±R/RW
    - Graphics Card: ATI Mobility Radeon HD 5730
    - Video Memory: 1GB
    - Communication: Gigabit LAN and WLAN
    - Card slot: 1 x ExpressCard/34
    - Battery Life: Up to 3.5 hours
    - Dimensions: 15.20" x 10.00" x 0.80" - 1.30"
    - Weight: 5.95 lbs.
    This computer met most of the requirements above, except that it didn't come with an SSD or 8GB of DDR3 RAM. So I needed to start shopping, this time for an SSD. I asked around on Twitter and other hardware forums, and everyone pointed me to the Crucial C300 SSD. After checking prices, the drive was going to cost an extra $275, and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube I started feeling better. Below is a pic of the Crucial C300 SSD.

    The second thing that I needed to upgrade was the RAM. It came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) kit from Newegg. This RAM cost an extra $120 and had a CAS latency of 7. In the end this machine delivered everything that I wanted and it cost around $1300. You are probably saying, well, your budget was $1200. I have spare parts that I'm planning on selling on eBay or Anandtech. =) If you are interested then shoot me an email and I will give you a great deal: mbcrump[at]gmail[dot]com.
    - 500GB laptop 7200RPM HDD
    - 4GB of DDR3 RAM (2GB x 2)
    - faceVision HD 720p camera (unopened)
    In the end my Windows Experience rating of the SSD was 7.7 and the CPU 7.1. The max that you can get is 7.9.

    Part 2: The Software

    I'm very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn't matter because I would never keep the pre-installed bloatware or Windows 7 Home Premium on my main development machine. Matter of fact, as soon as I got the laptop, I immediately took out the old HDD without booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-bit. The BIOS was out of date, so I updated it to the latest version and started downloading drivers off of Lenovo's site. I had to download the wireless networking drivers to a USB key before I could get my machine on my wireless network. I also discovered that if the date on your computer is off then you cannot join the Windows 7 HomeGroup until you fix it. I'm aware that most people like peeking into what programs other software developers use, so I went ahead and listed my "essentials" for a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I'm missing? Feel free to leave it in the comments below.

    My Software Essentials List:
    - CPU-Z
    - Dropbox
    - Everything Search Tool
    - Expression Encoder Update
    - Expression Studio 4 Ultimate
    - Foxit Reader
    - Google Chrome
    - Infragistics NetAdvantage Ultimate Edition
    - KeePass
    - Microsoft Office Professional Plus 2010
    - Microsoft Security Essentials 2
    - Mindscape Silverlight Elements
    - Notepad2 (with shell extension)
    - Precode Code Snippet Manager
    - RealVNC
    - Reflector
    - ReSharper v5.1.1753.4
    - Silverlight 4 Toolkit
    - Silverlight Spy
    - Snagit 10
    - Syncfusion Reporting Controls for Silverlight
    - Telerik Silverlight RadControls
    - TweetDeck
    - Virtual CloneDrive
    - Visual Studio 2010 Feature Pack 2
    - Visual Studio 2010 Ultimate
    - VS KB2403277 update to get Feature Pack 2 to work
    - Windows 7 Ultimate 64-bit
    - Windows Live Essentials 2011
    - Windows Live Writer Backup
    - Windows Phone Development Tools
    That is pretty much it. I have a new laptop and am happy with the purchase. If you have any questions then feel free to leave a comment below. Subscribe to my feed

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
    Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered: (1) how SOA and BPM benefit from "hardware and software engineered for each other"; (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic; (3) customer case studies; (4) best practices and topology guidelines; (5) information on tools that help operate, manage, provision, and deploy, to help reduce overall TCO. Extreme engineering at its best! Session details: 10/2/12 (Tuesday) 11:45 AM - Moscone South - 308

    Read the article

  • MIA

    - by Robert May
    So, I've been missing in action on this blog for quite some time. I need to rectify that. Part of the reason I've been absent is that I haven't been able to talk about what I'm working on. A former client watches my blog rather closely, and although we accomplished many good things together, their culture is such that they really don't like people to freely express their thoughts (you'll note my blog posts stopped rather abruptly). I learned some really important lessons about Agile in the last 3 years, and I think it's worthwhile to talk about them. Sometimes things worked really well; sometimes they failed. Sometimes that failure was me; sometimes it wasn't. I understand Agile better now, and hopefully what I have to say will guide others through this process and help others understand Agile better. One thing that I've learned is that MANY companies that say they are doing Agile are NOT really doing Agile. Too often, they pick the things they like and don't follow the process long enough to know which rules they can break and which ones they shouldn't. This is probably the primary reason why Agile fails. So, expect more posts, especially as I'm flying coast to coast. :)

    Read the article

  • Best Practices to Accelerate Oracle VM Server Deployments

    - by Honglin Su
    The IOUG (Independent Oracle User Group) Virtualization SIG is hosting a webcast on best practices for Oracle VM server virtualization. July 11, 2012 - Best Practices to Accelerate Oracle VM Server on SPARC Deployments. Register here. To learn the best practices for Oracle VM Server for x86, watch the session replay here. For more white papers about best practices, visit the Oracle VM OTN page here.

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions, and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.
    - The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10, delivering 80%, 32%, and 2x improvements on Data Loading, Default Aggregation, and Usage Based Aggregation, respectively.
    - The SPARC T4-2 server with Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database.
    - The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    - The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fibre channel disk array for default aggregation and user-based aggregation.
    - The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.
    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System / Data Size (millions of items) / Database Load (min.) / Default Aggregation (min.) / Usage Based Aggregation (min.)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz / 1000 / 149 / 398* / 55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz / 1000 / 269 / 526 / 115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz / 400 / 120 / 448 / 18
    * 398 min. with CALCPARALLEL set to 16; 484 min. with CALCPARALLEL threads set to 8

    Configuration Summary

    Hardware Configuration: 1 x SPARC T4-2; 2 x 2.85 GHz SPARC T4 processors; 128 GB memory; 2 x 300 GB 10000 RPM SAS internal disks
    Storage Configuration: 1 x Sun Storage F5100 Flash Array; 40 x 24 GB flash modules; SAS HBA with 2 SAS channels; data storage scheme: striped (RAID 0), Oracle Solaris ZFS
    Software Configuration: Oracle Solaris 10 8/11; Installer v 11.1.2.2.100; Oracle Essbase Client v 11.1.2.2.100; Oracle Essbase v 11.1.2.2.100; Oracle Essbase Administration Services 64-bit; Oracle Database 11g Release 2 (11.2.0.3); HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:
    - Database Load: time elapsed to build a database, including outline and data load.
    - Default Aggregation: time elapsed to build aggregation.
    - User Based Aggregation: time elapsed for the aggregate views proposed as a result of tracked retrieval queries.
    Summary of the data used for this benchmark:
    - 40 flat files, each of size 1.2 GB, 49.4 GB in total
    - 10 million rows per file, 1 billion rows total
    - 28 columns of data per row
    - Database outline has 15 dimensions (five of them are attribute dimensions)
    - Customer dimension has 13.3 million members
    - 3 rule files

    Key Points and Best Practices
    - The Sun Storage F5100 Flash Array has been used to accelerate the application performance.
    - Setting data load threads (DLTHREADSPREPARE) to 64 and Load Buffer to 6 improved data loading by about 9%.
    - Factors influencing aggregation materialization performance are the Aggregate Storage Cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were: Aggregate Storage Cache: 32 GB; CALCPARALLEL: 16.

    See Also
    - Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com)
    - Oracle Essbase (oracle.com, OTN)
    - SPARC T4-2 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.
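    Tying back to the Key Points above: CALCPARALLEL and DLTHREADSPREPARE are server settings in essbase.cfg. As a minimal sketch, using hypothetical application and database names ("ASOBench" and "Cube") with the benchmark's tuned values, the relevant lines might look like:

        CALCPARALLEL ASOBench Cube 16
        DLTHREADSPREPARE ASOBench Cube 64

    The 32 GB aggregate storage cache, by contrast, is sized per application rather than in essbase.cfg.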

    Read the article

  • Java Embedded @ JavaOne

    - by sasa
    JavaOne 2012 runs from September 30 through October 4, and on October 3-4 a dedicated sub-conference for embedded Java, Java Embedded @ JavaOne, will be held alongside the main event. Java Embedded @ JavaOne is aimed at business and technical decision makers and covers Java in embedded devices, including Java SE Embedded. Registration is $595 through September 7, $795 through September 28, and $995 thereafter; JavaOne conference attendees can add Java Embedded @ JavaOne for $100.

    Read the article

  • Will You Accept This Rose?

    - by user715249
    Ashley, Bentley and the Masked Man. If these names mean anything to you, we know where you'll be on Monday night: planted in front of your television awaiting the villain's return and what is sure to be the most dramatic rose ceremony yet on the Bachelorette. If you're the Oracle PartnerNetwork Communications Team, you'll be spending your Monday night putting the final touches on the most exciting Partner Kickoff Event yet. Listen in as Judson tells you more. Starting at 6:00 AM PT on Tuesday, June 29th, partners (and potential partners) can tune in to watch the excitement unfold at partner.oracle.com. The storyline for FY12 will continue to unfold, with a special role being outlined for our ISV partners. SPOILER ALERT: OPN has made an investment in how we'll go to market together; trust us, you don't want to get this news from the highlight reel. While we won't be sending anyone home from the show, we do promise an exciting hour which will gear you up to go to market with Oracle in the new fiscal year. The Oracle PartnerNetwork FY12 Kickoff is being held live 5 times and will include a 'date card' message for each region.
    - EMEA Kickoff: Tuesday, June 29, at 6 a.m. PT / 2 p.m. BT
    - LAD Kickoff: Tuesday, June 29, at 8 a.m. PT / noon DT
    - North America Kickoff: Tuesday, June 29, at 10 a.m. PT / 1 p.m. ET
    - Japan Kickoff: Tuesday, June 29, at 6 p.m. PT / Wednesday, June 30, at 10 a.m. JT (Tokyo)
    - APAC Kickoff: Tuesday, June 29, at 8 p.m. PT / Wednesday, June 30, at 11 a.m. SGT (Singapore) / 1 p.m. AET (Sydney)
    We'll be taking your questions live throughout the show; we hope you'll "accept our rose" and join us on this amazing journey. The OPN Communications Team

    Read the article

  • Oracle Announces Its Annual Results

    - by pfolgado
    Oracle has just announced its results for Q4 and fiscal year FY11. The highlights:
    - Total revenues grew 33% to $35.6 billion
    - New license sales grew 23%
    - Hardware revenues of $4.4 billion
    - Operating income grew 39%
    - Earnings per share grew 38% to $1.67
    "In Q4, we achieved a 19% new software license growth rate with almost no help from acquisitions," said Oracle President and CFO, Safra Catz. "This strong organic growth combined with continuously improving operational efficiencies enabled us to deliver a 48% operating margin in the quarter. As our results reflect, we clearly exceeded even our own high expectations for Sun's business." "In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4," said Oracle President, Mark Hurd. "Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12." "In FY11 Oracle's database business experienced its fastest growth in a decade," said Oracle CEO, Larry Ellison. "Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we've been focused on the big business opportunity presented by Big Data."

    Oracle Reports Q4 GAAP EPS Up 34% To 62 Cents; Q4 NON-GAAP EPS Up 25% To 75 Cents. Q4 Software New License Sales Up 19%, Q4 Total Revenue Up 13%.

    Oracle today announced fiscal 2011 Q4 GAAP total revenues were up 13% to $10.8 billion, while non-GAAP total revenues were up 12% to $10.8 billion. Both GAAP and non-GAAP new software license revenues were up 19% to $3.7 billion. Both GAAP and non-GAAP software license updates and product support revenues were up 15% to $4.0 billion. Both GAAP and non-GAAP hardware systems products revenues were down 6% to $1.2 billion. GAAP operating income was up 32% to $4.4 billion, and GAAP operating margin was 40%. Non-GAAP operating income was up 19% to $5.2 billion, and non-GAAP operating margin was 48%. GAAP net income was up 36% to $3.2 billion, while non-GAAP net income was up 27% to $3.9 billion. GAAP earnings per share were $0.62, up 34% compared to last year, while non-GAAP earnings per share were up 25% to $0.75. GAAP operating cash flow on a trailing twelve-month basis was $11.2 billion.

    For fiscal year 2011, GAAP total revenues were up 33% to $35.6 billion, while non-GAAP total revenues were up 33% to $35.9 billion. Both GAAP and non-GAAP new software license revenues were up 23% to $9.2 billion. GAAP software license updates and product support revenues were up 13% to $14.8 billion, while non-GAAP software license updates and product support revenues were up 13% to $14.9 billion. Both GAAP and non-GAAP hardware systems products revenues were $4.4 billion. GAAP operating income was up 33% to $12.0 billion, and GAAP operating margin was 34%. Non-GAAP operating income was up 27% to $15.9 billion, and non-GAAP operating margin was 44%. GAAP net income was up 39% to $8.5 billion, while non-GAAP net income was up 34% to $11.4 billion. GAAP earnings per share were $1.67, up 38% compared to last year, while non-GAAP earnings per share were up 33% to $2.22.

    In addition, Oracle also announced that its Board of Directors declared a quarterly cash dividend of $0.06 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 13, 2011, with a payment date of August 3, 2011.

    Read the article

  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
    Sten E. Vesterli wrote the "Oracle ADF Enterprise Application Development - Made Simple" book, published by Packt Publishing in 2011: http://www.packtpub.com/oracle-adf-enterprise-application-development/book
    A common question on OTN, but also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java, but 4GL or PL/SQL, developers often look for answers to the following questions:
    - How long does it take to learn Oracle ADF?
    - How long does it take to replace a Forms application with ADF?
    - How many developers do I need?
    - Do I need to know Java to use ADF, and if yes, how well do I need to know it?
    - How do I structure my programming files, organizing them in JDeveloper workspaces, projects and libraries?
    - What is best practice for naming Java packages, and how do I avoid naming conflicts in ADF in general?
    - How many Application Modules do I need or should I create?
    - How do I test applications?
    Sten Vesterli answers all of the above questions and more in his book, which makes it a great value-add to the 3 existing Oracle ADF books. In order of complexity (which also is the order in which reading the available Oracle ADF books makes sense), in my opinion, Sten's book should come second, though it also is useful to those that are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, then the order of books to read to get you up to an expert level should be:
    1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw Hill 2010)
    2. Sten Vesterli; "Oracle ADF Enterprise Application Development - Made Simple" (Packt Publishing 2011)
    3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw Hill 2009)
    4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw Hill 2010)
    If you are not new to Oracle ADF and Oracle JDeveloper, then buy Sten Vesterli's book anyway. It is worth it, and you want to have it on your bookshelf. See below the table of contents to get a better idea of what this book covers:
    - Chapter 1: The ADF Proof of Concept
    - Chapter 2: Estimating the Effort
    - Chapter 3: Getting Organized
    - Chapter 4: Productive Teamwork
    - Chapter 5: Prepare to Build
    - Chapter 6: Building the Enterprise Application
    - Chapter 7: Testing your Application
    - Chapter 8: Look and Feel
    - Chapter 9: Customizing the Functionality
    - Chapter 10: Securing your ADF Application
    - Chapter 11: Package and Deliver
    - Appendix: Internationalization
    The book is written with a lot of good humor, which makes the read very enjoyable (from a geek's perspective, of course). My favorite quote, just in case you are interested, is from page 97, when Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century. ..." So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with this. Quotes like these stick in your brain and make sure you never forget. Go for it!

    Read the article

  • Great Example of Community How-To Doc

    - by ultan o'broin
    Always on the lookout for examples of community doc, and here's a great one: Chet Justice (@oraclenerd) just launched an eBook version (PDF actually) of John Piwowar's (@jpiwowar) very popular multi-part E-Business Suite Installation Guide. You can obtain it using the PayPal buttons here. All in a good cause too. Creation of how-to information like this for functional or technical tasks, along with working examples about post-install steps, configurations and customizations, is what an applications community value-add is all about. Each community is different, of course; an Adobe Photoshop community might be more interested in templates. Great to see the needs of the community being met like this. If you have other examples you'd like to share, then leave them in the comments.

    Read the article

  • Bridging The Gap Between Developers And Testers With VS 2010

    - by Vincent Grondin
    On January 29th, Etienne Tremblay and I presented in front of roughly 120 people in Ottawa a 7-hour "sketch" on how VS 2010 and TFS 2010 can help both devs and testers in their respective work. The presentation focused on how a tester's work can positively influence a developer's work and vice versa. The format was quite unusual; as I said, it's a "sketch" where Etienne and I "ignore" the audience and act as if we were at work, with the audience sort of "spying" on us. In all, I'm quite pleased with the content we presented, the format sure was a lot of fun to render, and I think the audience liked it too... The good news for you people reading this post is that it got RECORDED and it's now available for download in a quick 25-to-35-minute format on the DevTeach web site: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx There were 2 cameras, one filming us and one capturing the screen for our demos. We switch from one to the other in an interesting flow, and Jean-René Roy made sure he kept all our goofs and didn't edit out those funny "oops moments" where we screw up in the scenario... Mostly educative but hilarious at times!!! I encourage you all to download and watch the 13 episodes... Follow a day at work for a tester and a developer using VS 2010 and TFS 2010 to improve their chemistry! Thanks to Jean-René Roy for all the work he's put into this event, and to Microsoft and Pyxis for sponsoring the event.

    Read the article

  • Bridging the gap between developers and testers with VS 2010

    - by Etienne Tremblay
    Hey everyone, I know it's been an eternity since I blogged, but I have so much to do that I unfortunately need to prioritize. Vincent Grondin and I did a 7-hour presentation on the new developer and tester tools available in the VS 2010 suite. It was a blast. We did it in front of an audience (around 120) and it was taped. We did it as a play and really didn't look at the crowd at all; we were training each other on the technology. It is now available for anyone that would like to watch it at this location: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx
    What we covered in the full-day event:
    Migration to TFS 2010 (10h00)
    1- Migration of VSS to TFS (20 min.)
    2- Automating the build (something you can't do with VSS) (20 min.)
    3- User story (real application context for this presentation) (20 min.)
    10h00 Pause
    Manual tests by dev (11h30)
    4- Adding a tester to the team (intro to MTM) (20 min.)
    5- Define tests (what is a white bug) (20 min.)
    6- Fix the bug, show IntelliTrace, and play back the test (20 min.)
    12h15 Lunch
    Manual testing for maintenance (13h30)
    7- Implement new feature (web service), identify a bug with MTM, branch for a production fix, and also add a new build script (20 min.)
    8- Fix bug in production branch, play back tests, merge the change into the main branch (20 min.)
    Manual testing with the Lab Manager (14h30)
    9- Intro to Lab Manager and environments (20 min.)
    10- Change build script to deploy to the lab and test with the web service in the lab environment (20 min.)
    15h15 Pause
    Automate UI tests with Coded UI (15h30)
    11- Reducing the effort of testing the UI (20 min.)
    12- Repeating testing to make sure the application is working properly (20 min.)
    13- Automate Coded UI with the lab environment (20 min.)
    16h30 Conclusions
    As you can see, lots of stuff!! Enjoy the show and let us know how you like it. Cheers, ET
    Technorati Tags: VS 2010, Testing Tools, ALM, Training

    Read the article

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong. And this assumption can easily bite you. The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative. Not true! Sort of. At least not in the way many people would assume. All they do is make sure that the beginning of the path is absolute. The rest of the path can be loaded with relative path elements. What do you think the following code will print?

        import java.io.File;
        import java.io.IOException;

        public class Main {
            public static void main(String[] args) {
                try {
                    File f = new File("/temp/../temp/../temp/../");
                    File abs = f.getAbsoluteFile();
                    File parent = abs.getParentFile();
                    System.out.println("Exists: " + f.exists());
                    System.out.println("Absolute Path: " + abs);
                    System.out.println("FileName: " + abs.getName());
                    System.out.printf("The Parent Directory of %s is %s\n", abs, parent);
                    System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n",
                            abs, abs.getCanonicalFile().getParent());
                    System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n",
                            abs, parent.getCanonicalFile());
                    System.out.println("Canonical Path: " + f.getCanonicalPath());
                }
                catch (IOException ex) {
                    System.out.println("Got an exception: " + ex);
                }
            }
        }

    Output:

        Exists: true
        Absolute Path: D:\temp\..\temp\..\temp\..
        FileName: ..
        The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\temp
        The CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is null
        The CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\temp
        Canonical Path: D:\

    Notice how it says that the parent of D:\ is D:\temp!!! The file, f, is really the root directory. The parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path, so you can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago [1].

    Recommendations:
    (1) Use getCanonical instead of getAbsolute. There is a 1:1 mapping of files and canonical filenames; i.e., each file has one and only one canonical filename, and it will definitely not have relative path elements in it. There are an infinite number of absolute paths for each file.
    (2) To get the parent file for File f, do the following instead of getParentFile: File parent = new File(f, "..");

    [1] http://bt2ws.central.sun.com/CrPrint?id=6687287
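    Putting both recommendations together, here is a small helper (a sketch of my own, not from the tip itself) that returns the real parent by canonicalizing first, so ".." elements are resolved rather than chopped off:

        import java.io.File;
        import java.io.IOException;

        public class SafeParent {
            // Canonicalize first: "." and ".." path elements are resolved away,
            // so getParentFile() can no longer be fooled by them.
            public static File parentOf(File f) throws IOException {
                return f.getCanonicalFile().getParentFile(); // null only at a real root
            }

            public static void main(String[] args) throws IOException {
                // For the example above, f really is the root, so this prints null.
                System.out.println(parentOf(new File("/temp/../temp/../temp/../")));
            }
        }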

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role. The web and worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user. I have done this in a simple WPF application as well as a console application. We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use https and have to use http. We force the use of http by passing false when we create the CloudStorageAccount.

    StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
    CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);

    Next we need to get a reference to the container.

    var blobClient = storageAccount.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("ContainerName");

    Now we need to get a list of the drives in the container.

    var drives = container.ListBlobs();

    Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the 1st drive in the list for the example and getting the Uri of the drive.

    var driveUri = drives.First().Uri;

    Now that we have the Uri, we need to get the reference to the drive.

    var drive = new CloudDrive(driveUri, storageAccount.Credentials);

    Now all that is left is to mount the drive.

    var driveLetter = drive.Mount(0, DriveMountOptions.None);

    To unmount the drive, all you have to do is call Unmount on the drive.

    drive.Unmount();

    You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock, but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the role.

    foreach (var drive in CloudDrive.GetMountedDrives())
    {
        var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);
        mountedDrive.Unmount();
    }

    Read the article

  • JSF 2.2 recent progress - Early Draft

    - by alexismp
    JSF specification lead Ed Burns has an update on the progress of JSF 2.2, another component which should be required as part of the upcoming Java EE 7 standard. This includes a reminder of the scope of this specification, the availability of the early draft, and eight specific features that are being worked on, split into "Mostly Specified Features" and "Not Yet Fully Specified Features" (I think you can read the latter as "at risk"). My favorite is "763-EverythingIsInjectable". Remember that JSF 2.2 is due out in the middle of 2012, which is in time to be integrated in the Java EE 7 platform JSR (currently scheduled for the second half of 2012). In the meantime, JSF 2.2 nightly builds are available.
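    As a rough illustration of what "763-EverythingIsInjectable" is driving at (a speculative sketch based on the feature's name rather than on the draft text): letting CDI injection work inside JSF artifacts such as converters, something JSF 2.0/2.1 did not guarantee.

        import javax.enterprise.context.ApplicationScoped;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.convert.Converter;
        import javax.faces.convert.FacesConverter;
        import javax.inject.Inject;

        // Hypothetical example: a CDI bean injected straight into a converter.
        @FacesConverter("priceConverter")
        public class PriceConverter implements Converter {

            @Inject // under JSF 2.0/2.1 this field would typically stay null
            private CurrencyConfig config;

            @Override
            public Object getAsObject(FacesContext ctx, UIComponent c, String value) {
                return Double.valueOf(value.replace(config.symbol(), "").trim());
            }

            @Override
            public String getAsString(FacesContext ctx, UIComponent c, Object value) {
                return config.symbol() + value;
            }
        }

        // Minimal CDI bean for the sketch; any container-managed bean would do.
        @ApplicationScoped
        class CurrencyConfig {
            String symbol() { return "$"; }
        }

    Both class names here are made up for illustration; the point is only that the converter becomes container-managed enough for @Inject to be honored.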

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    InfoQ: Developer-Driven Threat Modeling
    Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams.
    Managing the Virtual World | Philip J. Gill
    "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC).
    Solaris X86 AESNI OpenSSL Engine | Dan Anderson
    "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor," says Anderson.
    WebLogic Access Management | René van Wijk
    "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk.
    OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen
    OTN Developer Days head for the land of the midnight sun.
    Podcast: Information Integration Part 2/3
    In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewski, and Marc Hebert offer examples of some of the most daunting information integration challenges.
    Measuring the Human Task activity in Oracle BPM | Leon Smiers
    Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with business processes.
    Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14
    Architecture all day. Spend the day with your peers learning from experts in cloud computing, engineered systems, and Oracle Fusion Middleware.
    The Heroes of Java: Michael Hüttermann | Markus Eisele
    Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

  • Summary of Taleo World 2012

    - by Scott Ewart
    Taleo World resulted in lively, positive conversation on social media, with 1,595 references on Twitter. Conversation was driven by users live-tweeting about the keynotes, product sessions/demos, and customers. The Wednesday morning keynote resulted in a spike, and users responded positively to the executives' views on HCM and the innovative Oracle-Taleo product roadmap. For a recap of the Twitter feed conversation, highlighting top tweets and photos as well as supporting materials (including the resulting coverage), please read the Taleo World Storify by clicking here. Five unique articles appeared about Taleo World. The Ventana Research blog and InformationWeek wrote in-depth articles focusing on Mark Hurd's presentation, product strategy, and demonstrations of Oracle Taleo Cloud Service Feature Pack 12B and Oracle Fusion Tap, overall stressing Oracle's commitment to customers and product development. To view the full text of each article, please click on its name below.
    - Oracle Presents a Taleo Future for Human Capital Management - Ventana Research blog
    - Innovation as a Choice - Steve Boese's HR Technology blog
    - Oracle Touts Taleo As HCM Heats Up - InformationWeek
    - With 43% of the Current Workforce Retiring In 10 Years, What's A CEO To Do? - HireVue Digital Distortion blog
    - What's Your Recruitment Metrics Story? - SmashFly Recruitment Marketing Technology blog

    Read the article

  • Access Log Files

    - by Matt Watson
    Some of the simplest things in life make all the difference. For a software developer who is trying to solve an application problem, being able to access log files, the Windows Event Viewer, and other details is priceless. But ironically enough, most developers aren't even given access to them. Developers have to escalate the issue to their manager or a system admin to retrieve the needed information. Some companies create workarounds to solve the problem or use third party solutions.
    Home-grown solutions to access log files
    Some companies roll their own solution to try and solve the problem. These solutions can be great but are not always real time, and don't account for the Windows Event Viewer, config files, server health, and other information that is needed to fix bugs.
    - VPN or FTP access to log file folders
    - Create programs to collect log files and move them to a centralized server
    - Modify code to write log files to a centralized place
    Expensive solutions to access log files
    Some companies buy expensive solutions like Splunk or other log management tools. But in a lot of cases that is overkill when all the developers need is the ability to just look at log files, not do analytics on them.
    There has to be a better solution to access log files
    Stackify recently came up with a perfect solution to the problem. Their software gives developers remote visibility to all the production servers without allowing them to remote desktop in to the machines. They can get real time access to log files, the Windows Event Viewer, config files, and other things that developers need. This allows the entire development team to be more involved in the process of solving application defects. Check out their product to learn more: http://www.Stackify.com

    Read the article

  • Email Alias [email protected] Replaced with New Oracle Certification Support Tool

    - by Paul Sorensen
    All Oracle Certification customer service issues previously sent to [email protected], [email protected], [email protected], or [email protected] should now be submitted as service requests via the new request tool. Support via these email aliases ends today. Managing candidate communications via this tool will enable better issue-tracking capabilities and ensure that all issues are handled quickly and efficiently. The integrated tool will also help us to more easily research historical and related issues to enable improved certification communications and business processes. For now, questions related to a Java, Oracle Solaris (Cluster), MySQL, NetBeans or OpenOffice.org exam or certification will still be sent to [email protected] and resolved via email. Questions related to the status of an Oracle Certification Success Kit will still be sent to [email protected] and resolved via email. We are excited about this new offering and continue to work toward improved customer service for our OCP community. Thank you for your cooperation!
    Quick View of Oracle Certification Customer Support
    - Oracle Certification Support: all issues that previously would have been sent to [email protected]
    - [email protected]: all questions on Java, Oracle Solaris (Cluster), MySQL, NetBeans, and OpenOffice.org exams and certifications
    - [email protected]: all questions on the status of your Oracle Certification Success Kit

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide, and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list and leveraging lower-cost labor arbitrage. (Read "3 Billion Capitalists".) Supply Chain Quarterly reports that India and Brazil received the highest rankings of the logistics markets in developing nations. India tops the list of emerging nations in a new index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the Emerging Market Logistics Index rated 38 developing countries on three factors:
    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.
    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers.
    The top 10 emerging countries:
    1. India
    2. Brazil
    3. Indonesia
    4. Mexico
    5. Russia
    6. Turkey
    7. United Arab Emirates
    8. Egypt
    9. Saudi Arabia
    10. Malaysia
    Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010

    Read the article

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast, where we will discuss how you can cut costs, add flexibility, and benefit from the new performance and scalability enhancements now available in MySQL 5.6 and MySQL Cluster 7.2. We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink-wrapped and SaaS-delivered applications, how industry leaders leverage MySQL products, and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, a question may have come up: how do we consume the various Windows Azure services, such as storage, service bus, and access control? Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark the files as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus, and the service runtime. I will not cover all of them; I will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:
    1. Delete the table named "resource".
    2. Create a new table named "resource". These two steps ensure that we have an empty table.
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.
    In order to use the table service we need the storage account and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we also need to import the azure module, and I created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                        }
                    }
                });
            }
        });
    });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    As you can see, the azure SDK provides its methods in the callback pattern; in fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function that will be performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two operations would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                        for (var i = 0; i < results.rows.length; i++) {
                                            var entity = {
                                                "PartitionKey": results.rows[i][1],
                                                "RowKey": results.rows[i][0],
                                                "Value": results.rows[i][2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("entity inserted");
                                                }
                                            });
                                        }
                                        // send the response
                                        console.log("all done");
                                        res.send(200, "All done!");
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Now we could publish it to the cloud and have a try, but normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key, and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to point at my local database. The code will be changed as below.

    // windows azure sql database
    //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    // sql server
    var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    // windows azure storage
    //var client = azure.createTableService(storageAccountName, storageAccountKey);
    // local storage emulator
    var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code: if we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
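    To make the ordering concrete, here is a stripped-down, hypothetical sketch of the same pattern, with setTimeout standing in for client.insertEntity: the loop only queues the asynchronous work, so the final statement runs before any callback fires.

    // Minimal illustration of the bug pattern (hypothetical example).
    // asyncInsert only *queues* work; the loop finishes immediately,
    // so "All done!" prints before any "inserted" message.
    function asyncInsert(item, callback) {
        setTimeout(function () {            // stands in for client.insertEntity
            console.log("inserted " + item);
            callback(null);
        }, 10);
    }

    var items = ["a", "b", "c"];
    for (var i = 0; i < items.length; i++) {
        asyncInsert(items[i], function () { });
    }
    console.log("All done!");               // runs first: no callback has fired yet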
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send an error message to the browser. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform for each item in the array. The third argument will be invoked once all items have been processed or any error has occurred; here we can send our response to the browser.

    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        async.forEach(results.rows,
                                            // transform the records
                                            function (row, callback) {
                                                var entity = {
                                                    "PartitionKey": row[1],
                                                    "RowKey": row[0],
                                                    "Value": row[2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        callback(error);
                                                    }
                                                    else {
                                                        console.log("entity inserted.");
                                                        callback(null);
                                                    }
                                                });
                                            },
                                            // send response
                                            function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("all done");
                                                    res.send(200, "All done!");
                                                }
                                            }
                                        );
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });

    Run it locally and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

    app.get("/was/:key/:culture", function (req, res) {
        var key = req.params.key;
        var culture = req.params.culture;
        client.queryEntity(tableName, culture, key, function (error, entity) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entity);
            }
        });
    });

    And then tested it on the local emulator.
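    The original post linked out to an example of the richer query syntax. As a hedged sketch of what such a query could look like with this generation of the SDK (assuming the TableQuery helper shipped in the same "azure" module; the "/was/culture/:culture" route is hypothetical), we can filter by partition key and return every entity for one culture:

    // Sketch of a filtered query (assumes azure.TableQuery from the same SDK):
    // return every localization entity whose PartitionKey matches the culture.
    app.get("/was/culture/:culture", function (req, res) {
        var query = azure.TableQuery
            .select()
            .from(tableName)
            .where("PartitionKey eq ?", req.params.culture);
        client.queryEntities(query, function (error, entities) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entities);   // all keys for the requested culture
            }
        });
    });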
    Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account back. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we needed to change the connection string and account information in our code. But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints, and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and have the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local, and we can also surface the endpoint defined in the role environment to our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

    var connectionString = "";
    var storageAccountName = "";
    var storageAccountKey = "";
    var tableName = "";
    var client;

    azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
        if (error) {
            console.log("ERROR: getConfigurationSettings");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(settings));
            connectionString = settings["SqlConnectionString"];
            storageAccountName = settings["StorageAccountName"];
            storageAccountKey = settings["StorageAccountKey"];
            tableName = settings["TableName"];

            console.log("connectionString = %s", connectionString);
            console.log("storageAccountName = %s", storageAccountName);
            console.log("storageAccountKey = %s", storageAccountKey);
            console.log("tableName = %s", tableName);

            client = azure.createTableService(storageAccountName, storageAccountKey);
        }
    });

    In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we will also listen on the port retrieved from the SDK.

    azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
        if (error) {
            console.log("ERROR: getCurrentRoleInstance");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(instance));
            if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                var endpoint = instance["endpoints"]["nodejs"];
                app.listen(endpoint["port"]);
            }
            else {
                app.listen(8080);
            }
        }
    });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that the instance can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. This gives the Node.js application the connection to the service runtime, so it is able to read the configuration. Start Node.js in the local emulator and we can see it retrieved the connection string and storage account for local.
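    The post doesn't reproduce the CSDEF markup itself; below is a minimal sketch of what the Runtime/EntryPoint element looks like, where the role name and command line are assumptions based on earlier posts in this series.

    <!-- Sketch of the CSDEF change (role name and command line assumed):
         making node.exe the program entry point means the runtime's named
         pipe connects to the Node.js process itself. -->
    <WorkerRole name="WorkerRole1" vmsize="Small">
      <Runtime>
        <EntryPoint>
          <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
        </EntryPoint>
      </Runtime>
      <!-- endpoints, configuration settings, etc. remain as generated -->
    </WorkerRole>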
    And if we publish our application to Azure, it works against WASD and the storage service through the cloud configuration.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it's simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL Server and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't yet support SQL parameters, and "azure" doesn't yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >