Search Results

Search found 2040 results on 82 pages for 'platforms'.

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
    Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Silverlight Cream for April 29, 2010 -- #851

    - by Dave Campbell
    In this Issue: Carlos Figueira(-2-), Subodh Pushpak, Gergely Orosz, John Papa, Mike Snow(-2-), Rishi, Tim Heuer, Stefan Olson, and David Anson. Shoutouts: Josh Holmes blogged about a cool app the City of Miami has up: Miami 311: Built on Windows Azure Gergely Orosz reports on the state of a bug he found pre SL4: Silverlight 4 still displays large elements incorrectly Laura Foy and Charlie Kindel discuss WP7 on Channel 9: Windows Phone 7 Developer Tools Refresh Announced Charlie Kindel has an announcement, good instructions, and what's new notes on the Windows Phone Developer Tools CTP Refresh! Tim Heuer mentioned the workaround for this in his post (below), but I thought you might like to read Brandon Watson's debrief of what it's all about: Signed Assemblies Bug in the Windows Phone Tools CTP Refresh Laurent Bugnion posted about interrelations between versions of Blend and WP7 code... read it closely: Be careful when installing the Blend Windows Phone 7 Add-In From SilverlightCream.com: Consuming REST/POX services in Silverlight 4 Carlos Figueira has a pair of posts up about consuming services in Silverlight 4. This first one is about consuming REST/POX services. He provides a Service Contract that can be used with either and the full project code is available as well. Consuming REST/JSON services in Silverlight 4 In the second post, Carlos Figueira provides the code to allow WCF and Silverlight 4 to consume strongly-typed REST/JSON... and again, all the code is available. Silverlight and WCF caching Subodh Pushpak has a post up discussing caching in WCF, and has code demonstrating turning caching on at run-time. Detecting Silverlight Version Installed Gergely Orosz said it right when he said "Detecting the Silverlight version installed on a client machine isn’t entirely straightforward." ... and after reading this post, if you take the link to his ScottLogic blog, you'll get a full break-out of how it's done. Silverlight TV 22: Tim Heuer on Extending the SMF It's Thursday, and that means Silverlight TV! ... this week, John Papa has on Tim Heuer who has always been out there pushing media... and he's talking about SMF, or Silverlight Media Framework for the uninitiated, and also extending it. Silverlight Tip of the Day #7 – Localized Resources Mike Snow has Tip Number 7 up and it's about localization... good end-to-end discussion and demonstration. Just thought I should use that to prove to my daughter that the tattoo she had put on the back of her neck actually reads "Eat More Broccoli" :) Silverlight Tip of the Day #8 – Detecting Alt, Shift, Control, Window & Apple Keys Combinations I just realized Mike Snow's site logo reads "Silverlight Tips of the Day" (bolding mine) ... that answers why I'm seeing more than one -- sorry Mike, couldn't pass it up :) ... Mike's second tip today and number 8 in the series is on detecting all the mouse button and ctl/alt/shift combinations in Silverlight. nRoute: More Wholesomeness, with SL 4 and .NET 4.0 Rishi has a post up announcing a new nRoute release for Silverlight 4 and .NET 4.0. He's tweaked the code to take advantage of enhancements in the new platforms, so check it out. Windows Phone 7 Developer Tools April 2010 Refresh Booya... Tim Heuer announced the release of the next drop in the WP7 tools ... dang wish I was at home today :) ... be sure to read the post for info such as the notes about Authenticode Assemblies and the release notes.
    Updates to Silverlight Multi-binding support Stefan Olson points up a SL4 change to Multi-binding support that he had previously blogged about. He shows the previous non-working example, and what you have to do to make it work now. Using XAML to create a custom wallpaper image for your mobile device David Anson has a solution for those pesky lost devices, and let me go on the record right now saying if anyone finds a WP7 phone laying around, just call me, it's mine :) [think that'd work??] ... ok, David's solution is a WPF app, "MobileDeviceHomeScreenMaker", in which you set the info, and it produces a PNG that you then put on your device. But seriously about that lost phone... Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers and other versions of Oracle WebLogic can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature, only present in Oracle Utilities Application Framework 4 and above, that allows JMX to be used for management and monitoring of the Oracle Utilities web applications. In this case JMX can be used to perform monitoring as well as manage the cache. In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:
    spl.runtime.management.rmi.port=6740
    spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
    jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
    jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
    ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
    ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled
    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Site. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes, I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar in your classpath), you specify the URL in the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try to view the files you will not be able to decipher the credentials visually. There are three MBeans available to you: flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but now are authorization controlled. JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp versions of the JVMInfo utilities are still provided, for backward compatibility, but are now also authorization controlled. JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring.
    We provide our own implementation of the base MBeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This MBean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This MBean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and Thread MXBeans. Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160-compliant JMX browser or JMX console.
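    If you would rather script these checks than click through jconsole, any JSR160 client will do. Below is a minimal Java sketch that connects using the connector URL and credential files described above; the object name oracle.ouaf:name=JVMInfo and the user/password values are illustrative assumptions only - check the names actually registered in your environment with an MBean browser before relying on them.

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanAttributeInfo;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class OuafJmxProbe {
            public static void main(String[] args) throws Exception {
                // Connector URL from spl.runtime.management.connector.url.default (demo values above)
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector");

                // Credentials as configured through the JMX Enablement System User ID/Password options
                Map<String, Object> env = new HashMap<String, Object>();
                env.put(JMXConnector.CREDENTIALS, new String[] {"jmxuser", "jmxpassword"});

                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                try {
                    MBeanServerConnection mbs = connector.getMBeanServerConnection();

                    // Hypothetical object name - verify the real one in your MBean browser
                    ObjectName jvmInfo = new ObjectName("oracle.ouaf:name=JVMInfo");

                    // Print every readable attribute the MBean exposes
                    for (MBeanAttributeInfo attr : mbs.getMBeanInfo(jvmInfo).getAttributes()) {
                        if (attr.isReadable()) {
                            System.out.println(attr.getName() + " = "
                                    + mbs.getAttribute(jvmInfo, attr.getName()));
                        }
                    }
                } finally {
                    connector.close();
                }
            }
        }

    The same connection can invoke the FlushBean cache-management operations instead of reading attributes, via MBeanServerConnection.invoke.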

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version. Topic Editor in Tabs Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements: First, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement that this change brings is improved undo and redo, which allows each action to be undone and redone in the Topic Editor. New Sound Editor The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording--which is great for a Subject Matter Expert (SME) to narrate what they're recording--or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections and topics. Turn off bubbles in Topics Authors may opt to hide bubbles either per frame or for an entire topic, for when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities. For instance, consider recording conceptual information with narration and no bubbles. Presentation Output UPK content can be published as a Presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding. Autosave and Recovery The Developer will automatically save your work as often as you would like. This affords authors the ability to recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to saving open documents every ten minutes. Package Editor Enhancement Files in packages will now open in the associated application when double-clicked. Authors can also choose to "Open with..." from the context menu (AKA the right-click menu). See It! Window See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls including previous frame, next frame, pause, etc. Firefox Enhancements The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser. Player Support for Safari The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms. Keep document checked out Authors may choose to keep a document checked out when performing a check-in. This allows an author to have a new version created on the server and continue editing. Close button on individual tabs A close button has been added to the tabs, making it easier to close a specific tab. Outline Editor Enhancements Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected. This makes it faster to move around in the outline editor. Tell us which feature you're most excited to use in the comments.

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    ++++++++++++++++++++++++++++++++++++++++++++++++++++
    The complete and most recent version of this article can be viewed from the My Oracle Support Knowledge Section: Master Note for Generic Data Warehousing [ID 1269175.1]
    ++++++++++++++++++++++++++++++++++++++++++++++++++++
    In this Document: Purpose | Master Note for Generic Data Warehousing | Components covered | Oracle Database Data Warehousing specific documents for recent versions | Technology Network Product Homes | Master Notes available in My Oracle Support | White Papers | Technical Presentations
    Platforms: 1-914CU; This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.
    Applies to: Oracle Server - Enterprise Edition - Version 9.2.0.1 to 11.2.0.2 - Release 9.2 to 11.2. Information in this document applies to any platform.
    Purpose: Provide a navigation path.
    Master Note for Generic Data Warehousing
    Components covered: Read Only Materialized Views; Query Rewrite; Database Object Partitioning; Parallel Execution and Parallel Query; Database Compression; Transportable Tablespaces; Oracle Online Analytical Processing (OLAP); Oracle Data Mining.
    Oracle Database Data Warehousing specific documents for recent versions: 11g Release 2 (11.2); 11g Release 1 (11.1); 10g Release 2 (10.2); 10g Release 1 (10.1); 9i Release 2 (9.2); 9i Release 1 (9.0).
    Technology Network Product Homes: Oracle Partitioning; Advanced Compression; Oracle Data Mining; Oracle OLAP.
    Master Notes available in My Oracle Support - These technical articles have been written by Oracle Support Engineers to provide proactive and top-level information and knowledge about the components of the database we handle under "Database Data Warehousing":
      Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and Issues
      Note 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis for Materialized View CREATE or REFRESH
      Note 1102801.1 Master Note: How to Get a 10046 trace for a Parallel Query
      Note 1097154.1 Master Note Parallel Execution Wait Events
      Note 1107593.1 Master Note for the Oracle OLAP Option
      Note 1087643.1 Master Note for Oracle Data Mining
      Note 1215173.1 Master Note for Query Rewrite
      Note 1223705.1 Master Note for OLTP Compression
      Note 1269175.1 Master Note for Generic Data Warehousing
    White Papers:
      Transportable Tablespaces: Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1 (February 2009); Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 (August 2008); Database Upgrade using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007); Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)
      Parallel Execution and Parallel Query: Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010); Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (Feb 2010); Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009); Parallel Execution with Oracle Database 10g Release 2 (June 2005)
      Oracle Data Mining: Oracle Data Mining 11g Release 2 (March 2010)
      Partitioning: Partitioning with Oracle Database 11g Release 2 (September 2009); Partitioning in Oracle Database 11g (June 2007)
      Materialized Views and Query Rewrite: Oracle Materialized Views and Query Rewrite (May 2005); Improving Performance using Query Rewrite in Oracle Database 10g (December 2003)
      Database Compression: Advanced Compression with Oracle Database 11g Release 2 (September 2009); Table Compression in Oracle Database 10g Release 2 (May 2005)
      Oracle OLAP: On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009); Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)
      Generic: Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010); Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009); Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009); Best practices for a Data Warehouse on Oracle Database 11g (September 2008)
    Technical Presentations:
      A selection of ObE - Oracle by Example documents:
        Generic: Using Basic Database Functionality for Data Warehousing (10g)
        Partitioning: Manipulating Partitions in Oracle Database (11g Release 1); Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1); Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g)
        Materialized View and Query Rewrite: Using Materialized Views and Query Rewrite Capabilities (10g); Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g)
        Oracle OLAP: Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities); Using Oracle OLAP 11g With Oracle BI Enterprise Edition (Creating OBIEE Metadata for OLAP 11g Cubes and querying those in BI Answers); Building OLAP 11g Cubes; Querying OLAP 11g Cubes; Creating Interactive APEX Reports Over OLAP 11g Cubes
      A selection of presentations from the BIWA website:
        Extreme Data Warehousing With Exadata by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)
        Data Mining Made Easy! Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB)
        Best Practices for Deploying a Data Warehouse on Oracle Database 11g by Maria Colgan (December 2009) (slides 3MB, recording 18MB, white paper 3MB)

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence" that outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes. Recently, Gartner published its 2009 market shares report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers not to Corporate Performance Management, but Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI Software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 Gartner has ranked Oracle as #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 The Analytic Applications & Performance Management subsegment is more fragmented. It has for instance a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic Applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, from the large vendors, again Oracle is the one who is gaining the most share - with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been silo technology platforms and application suites way too long. Management and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive. 
In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open and integrated still accurately describes the Oracle product strategy. frank

    Read the article

  • How to Sync Any Browser’s Bookmarks With Your iPad or iPhone

    - by Chris Hoffman
    Apple makes it easy to synchronize bookmarks between the Safari browser on a Mac and the Safari browser on iOS, but you don’t have to use Safari — or a Mac — to sync your bookmarks back and forth. You can do this with any browser. Whether you’re using Chrome, Firefox, or even Internet Explorer, there’s a way to sync your browser bookmarks so you can access your same bookmarks on your iPad. Safari on a Mac Apple’s iCloud service is the officially supported way to sync data with your iPad or iPhone. It’s included on Macs, but Apple also offers similar iCloud bookmark syncing features for Windows. On a Mac, this should be enabled by default. To check whether it’s enabled, you can launch the System Preferences panel on your Mac, open the iCloud preferences panel, and ensure the Safari option is checked. If you’re using Safari on Windows — well, you shouldn’t be. Apple is no longer updating Safari for Windows. iCloud allows you to synchronize bookmarks between other browsers on your Windows system and Safari on your iOS device, so Safari isn’t necessary. Internet Explorer, Firefox, or Chrome via iCloud To get started, download Apple’s iCloud Control Panel application for Windows and install it. Launch the iCloud Control Panel and log in with the same iCloud account (Apple ID) you use on your iPad or iPhone. You’ll be able to enable Bookmark syncing with Internet Explorer, Firefox, or Chrome. Click the Options button to select the browser you want to synchronize bookmarks with. (Note that bookmarks are called “favorites” in Internet Explorer.) You’ll be able to access your synced bookmarks in the Safari browser on your iPad or iPhone, and they’ll sync back and forth automatically over the Internet. Google Chrome Sync Google Chrome also has its own built-in sync feature and Google provides an official Chrome app for iPad and iPhone. If you’re a Chrome user, you can set up Chrome Sync on your desktop version of Chrome — you should already have this enabled if you have logged into your Chrome browser. You can check if this Chrome Sync is enabled by opening Chrome’s settings screen and seeing whether you’re signed in. Click the Advanced sync settings button and ensure bookmark syncing is enabled. Once you have Chrome Sync set up, you can install the Chrome app from the App Store and sign in with the same Google account. Your bookmarks, as well as other data like your open browser tabs, will automatically sync. This can be a better solution because the Chrome browser is available for so many platforms and you gain the ability to synchronize other browser data, such as your open browser tabs, between your devices. Unfortunately, the Chrome browser is slower than Apple’s own Safari browser on iPad and iPhone because of the way Apple limits third-party browsers, so using it involves a trade-off. Manual Bookmark Sync in iTunes iTunes also allows you to sync bookmarks between your computer and your iPad or iPhone. It does this the old-fashioned way, by initiating a manual sync when your device is plugged in via USB. To access this option, connect your device to your computer, select the device in iTunes, and click the Info tab. This is the more outdated way of synchronizing your bookmarks. This feature may be useful if you want to create a one-time copy of your bookmarks from your PC, but it’s nowhere near ideal for regular syncing. You don’t have to use this feature, just as you really don’t have to use iTunes anymore. In fact, this option is unavailable if you’ve set up iCloud syncing in iTunes. 
After you set up bookmark syncing via iCloud or Chrome Sync, bookmarks will sync immediately after you save, remove, or edit them.     

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15-character strings, which get converted to Java Date objects in their code (Openfire is written in Java). I did some digging around, and, so I don't forget and in case someone else finds this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation. The Java string representation is the number of milliseconds since 1/1/1970. SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, with the portion after the decimal point representing the hours/minutes/seconds/milliseconds as a fractional part of a day. Try this and you will see it is true:
        SELECT CAST(0 AS DATETIME)
    It returns the date 1/1/1900. The difference in days between SQL Server's 0 date of 1/1/1900 and the Java representation's 0 date of 1/1/1970 is found easily using the following SQL:
        SELECT DATEDIFF(D, '1900-01-01', '1970-01-01')
    which returns 25567. There are 25567 days between these dates. So to convert from the Java string to SQL Server's date time, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add the 25567 to change this to the number of days since 1/1/1900. To convert to days, you need to divide the number by 1000 ms/s, then by 60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day. Or simply divide by 1000*60*60*24, or 86400000. So, to summarize, we need to cast this string as a float, divide by 86400000 milliseconds/day, then add 25567 days, and cast the resulting value to a DATETIME. Here is an example:
        DECLARE @tmp as VARCHAR(15)
        SET @tmp = '1268231722123'
        SELECT @tmp as JavaTime, CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime
    To convert from SQL DATETIME back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings; they end up in scientific notation using the CONVERT function or CAST function. But I found a couple of ways around that problem. You can convert a date to the number of seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an Int. If you don't need to worry about the milliseconds, simply cast this integer as a string and then concatenate '000' at the end, essentially multiplying this number by 1000 and making it milliseconds since 1/1/1970. If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, pad zeros on the left to make sure this is three digits, and concatenate these three digits to the number-of-seconds string above. And finally, I discovered that by casting to DECIMAL(15,0) then to VARCHAR(15), I avoid the scientific notation issue. So here are all my examples, pick the one you like best...
    First, here is the simple approach if you don't care about the milliseconds:
        DECLARE @tmp as VARCHAR(15)
        DECLARE @dt as DATETIME
        SET @dt = '2010-03-10 14:35:22.123'
        SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15)) + '000'
        SELECT @tmp as JavaTime, @dt as SQLTime
    If you want to keep the milliseconds:
        DECLARE @tmp as VARCHAR(15)
        DECLARE @dt as DATETIME
        DECLARE @ms as int
        SET @dt = '2010-03-10 14:35:22.123'
        SET @ms = DATEPART(ms, @dt)
        SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
                 + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)
        SELECT @tmp as JavaTime, @dt as SQLTime
    Or, in one fell swoop:
        DECLARE @dt as DATETIME
        SET @dt = '2010-03-10 14:35:22.123'
        SELECT @dt as SQLTime
             , CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
             + RIGHT('000' + CAST(DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime
    And finally, a way to simply reverse the math used when converting from the Java date to the SQL date. Note the parentheses - watch out for operator precedence; you want to subtract, then multiply:
        DECLARE @dt as DATETIME
        SET @dt = '2010-03-10 14:35:22.123'
        SELECT @dt as SQLTime
             , CAST(CAST((CAST(@dt as Float) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime
    Interestingly, I found that converting to SQL DATETIME can lose some accuracy: when I converted the time above to Java time and then converted that back to DATETIME, the number of milliseconds was 120, not 123. As I am not interested in the milliseconds, this is OK for me. But you may want to look into using DATETIME2 in SQL Server 2008 for more accuracy.
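    If you need the same conversion on the Java side of the fence, it is a one-liner in each direction, since the stored string is simply milliseconds since 1/1/1970. Here is a minimal sketch reusing the value from the SQL examples above; it assumes the string is a plain millisecond count with no padding, so check your own Openfire tables before relying on it:

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.TimeZone;

        public class OpenfireTime {
            // Openfire string (ms since 1/1/1970) to java.util.Date
            static Date fromOpenfire(String value) {
                return new Date(Long.parseLong(value.trim()));
            }

            // java.util.Date back to the stored string form
            static String toOpenfire(Date date) {
                return String.valueOf(date.getTime());
            }

            public static void main(String[] args) {
                Date d = fromOpenfire("1268231722123");   // same instant as the SQL examples
                SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
                fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // the SQL examples ignore time zones, so print UTC
                System.out.println(fmt.format(d));            // 2010-03-10 14:35:22.123
                System.out.println(toOpenfire(d));            // round-trips to 1268231722123
            }
        }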

    Read the article

  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Michelle Kimihira
    By Maria Forney Amway is a global leader in the direct sales industry with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional and local analytical research, business planning, performance measurement and assessment. Oracle BI EE is used by 1500 employees across Amway sales, marketing, finance, and supply chain business units as well as Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia. Last week, I spoke with Lead Data Analyst with Amway Global Sales, Dan Arganbright, and IT Manager with Amway BI Competency Center, Mike Olson, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, 2007, Dan and Mike will discuss their experience building Amway’s Distributor Consulting solution, powered by Oracle BI EE. You can find more information here. As background, Amway offers people an opportunity to own their own businesses and consumers exclusive products in health and wellness, beauty and home care.  The Amway internal Sales organization is charged with consulting leadership-level Distributors to help them with data insights and ultimately grow their business. Until recently, this was a resource-intense process of gathering and formatting data. In some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session. Amway began its global BI journey in 2006 and since then the company has migrated from having multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI.  Amway has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike’s organization has led the charge to align disparate BI organizations into a BI Competency Center.  The following diagram highlights the simplicity of the standardized architecture of Amway today. Dubbed Distributor Consulting, Amway has developed a BI solution using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution provides over 40 metrics for Sales staff to provide data-driven insights on the Distributors and organizations they support.  Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization, which demonstrates business empowerment of the technology. Amway is also leveraging the power of BI to drive business growth in all of its markets.  A new set of Distributor Segmentation metrics are enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration and the success of CRM campaigns. In the words of Dan and Mike, the addition of Exadata 11 months ago has been “a game changer.”  Amway has been able to dramatically reduce complexity, improve performance and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to less than 20.  
    Pulling data for the highest level distributors or the largest markets in the company now can be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value business-focused activities. “The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!” - Sales Development Manager in Latin America. Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday October 1 at 12:15pm in Moscone West, 2007. I’ll see you there!

    Read the article

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many applications have a requirement to be located outside of the organization’s internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to their external customers, and not want the customers to have access inside the organization’s firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data and other advantages. Some firms choose to host these web servers internally, others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company. In any case, the design of these applications often resembles the following: In this design, a server (or perhaps more than one) hosts the presentation function (http or https access) to the application, and this same system may hold the computational aspects of the program. Authorization and access are controlled programmatically, or are more open if this is a customer-facing application. Storage is either placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High-Availability within this scenario is often the responsibility of the architects of the application, and is achieved by purchasing more hosting resources which must be built, licensed and configured, and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster Recovery is the responsibility of the system architect as well. Implementation: In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request-level of the Fabric automatically route http and https requests. The fabric also provides High-Availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smart phones, tablets and PC’s, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or Distributed Computing architecture. It is true that you could code up a similar set of functionality in a traditional web-farm, but the difference here is that the components are built into the very design of the architecture. The API’s and DLL’s you call in a Windows Azure code base contain components as first-class citizens.
For instance, if you need storage, it is simply called within the application as an object.  Computation has multiple options and the ability to scale linearly. You also gain another component that you would either have to write or bolt-in to a typical web-farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually. Resources: Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx  Physical Tiers and Deployment  - http://msdn.microsoft.com/en-us/library/ee658120.aspx

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon to be released project at Microsoft. Immediately after the GUID's creation the word spread around the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last ever GUID. All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick look up of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects. What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach for mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series “Metro” interface. If it works, it will help to expand the breadth of mobile app developers. After the GUID: The ScottGuID It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft has designed, implemented, and tested a replacement to the GUID: The ScottGuID. There are several core principles of the ScottGuID: 1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs 2. There must exist a compatibility layer between ScottGuIDs and GUIDs 3. A ScottGuID must be usable in a practical manner in non-computing environments 4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc. 5. ScottGuIDs must never run out ScottGuID use cases One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order to have the ScottGuID be a true universal ID it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped. Log in to Hotmail: Pay for gas: Sign in to Twitter: Dispense cat food: Conclusion I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed. 
With a small number of principles, a team of smart engineers, and a passion for "getting it right" the ScottGuID should last well past our lifetimes. In the coming months expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.

    Read the article

  • A Letter for Your CEO About Social Marketing’s Future

    - by Mike Stiles
    We’ll leave it to you to decide if or how to sneak this in front of them. Dear Chief: This social marketing thing looks serious. It’s gone beyond having a Facebook page and putting our info and a few promotions on it. It’s seriously disrupting how we’ve always done marketing. And its implications reach well beyond marketing. My concern is that we stay positioned ahead of these changes and are prepared to embrace, adapt and capitalize on these new capabilities as opposed to spending valuable time and money trying to shoehorn social into “the way we’ve always done things.” I’m also concerned about what happens if our competition executes on this before we do. The days of being able to impose our ad messaging on the masses to great effect are numbered. The public now has the tech tools and ability to filter out things that are irrelevant to them. And frankly, spending ad dollars to reach unlikely prospects isn’t the most efficient path for us either. Today, our customers have to genuinely love what we do. That starts with a renewed, customer-centric focus on the quality and usability of our product. If their experience with it is bad, they now have very connected, loud voices that will testify against us. We can’t afford that. Next, their customer service experience, before and after the sale, has to be a pleasant surprise. That requires truly knowing our customers and listening to them. Lip service won’t cut it. We have to get and use as much data on the customer as possible, interact with them wherever they want to interact with us, and commit to impressing them. If we do, they’ll get out there and advertise for us. Since peer-to-peer recommendation is the most effective marketing, that’s money in the bank. Social marketing is about forming relationships, same as how individuals use social. We want them to know us, trust us, and get real value from knowing us. That requires honesty and transparency that before now might have been uncomfortable. I propose that if we clearly make everything we do about our customers’ wants and needs, we’ll have nothing to hide. It will solidify customer loyalty, retention, and thus, revenue. These things can’t happen without certain tools and structural changes in the organization. There are social cloud platforms that integrate social management into all of the necessary areas: CRM, customer service, sales, marketing automation, content marketing, ecommerce, etc. This will give us a real-time, complete view of the customer so their every interaction with us is attentive, personalized, accurate, relevant, and satisfying. Without it, we’re just a collage of disjointed systems, each gathering data that informs only its own departmental silo. The customer is voluntarily giving us everything we need to know about them to win them over, but we have to start listening and putting the pieces together. There’s still time. Brands are coming to terms with this transition to the socially enabled enterprise, but so far they aren’t moving very fast. Like us, they’re dealing with long-entrenched technologies and processes. CMO’s and CIO’s have to form new partnerships. Content operations have to be initiated and properly staffed and funded. Various departments must be able to utilize interconnected big data. What will separate the winners from the losers? Well chief, that’s why I’m writing you. It’s in your hands. These initiatives won’t get the kind of priority and seriousness that inspire actual deadlines & action unless they come from your desk.
You have to be the champion of customer centricity. You have to be our change agent. You have to be our innovator. Otherwise, it’s going to be business as usual, and that puts us in a very vulnerable place. Sincerely, Your Team @mikestiles Photo: Gary Scott, stock.xchng

    Read the article

  • Silverlight Cream for June 08, 2010 -- #877

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, Chris Klug, Beau, Christian Schormann(-2-), Dan Wahlin, Pete Brown, Michael S. Scherotter, Philipp Sumi, Andy Wigley, and Phil Middlemiss. Shoutouts: Mark Tucker set about learning Caliburn, and in the process is writing a Caliburn Book: Chapters 1-3. Jesse Liberty has a great link-laden post up about why we should all be learning/using Blend: Why Developers Should, Must, Do Care About The New Expression Blend; be sure to read what he says about WP7 development, however! Charlie Kindel announced an Install problem with the Developer Tools CTP Refresh and the WP7 tools... check this out if you're having problems. John Papa has a good post up on the happenings yesterday: Expression Studio 4 Launch of Blend, SketchFlow, Encoder and More! Erik Mork & Company's latest "This Week in Silverlight" is titled First Drop: Prism v4 – First Drop is Available. From SilverlightCream.com: Animated navigation between Pages Miroslav Miroslavov has Part 8 of his "Silverlight in Action" series up, detailing cool things from the CompleteIT site... this one is on Animated navigation between pages. Subtitling videos Chris Klug got a gig adding subtitles to videos for Microsoft (sweet) ... and no, not *that* kind of subtitles... read how he approached the final solution. Silverlight Watermark TextBox I'm not sure we can have too many Watermark TextBoxes, and neither does Beau, who sent me a link to this one... give it a dance and decide. Blend 4: Collaborative SketchFlow Feedback with SharePoint With the new Blend release, Christian Schormann has a post up describing the lashup to SharePoint for sharing SketchFlow and getting feedback. New Utility, Links, and Tutorials for Path-Based Layout Christian Schormann also has a collection of resources for Path-Based Layouts, including a utility "that lets you apply a whole bunch of position-specific effects without having to write any code"... lots of links to resources here. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application Dan Wahlin draws on his recent experience and lays out some of the fun and pitfalls of building LOB apps in Silverlight... WCF, MVVM, slides, and code included. WPF (and Silverlight): Choose your Fonts and Text Rendering Options Wisely Pete Brown has a great post up on using fonts wisely across multiple platforms... lots of info and good discussion in the comments as well. Ball Watch USA Remember the awesome watch Michael S. Scherotter did in Silverlight 1 and then converted to the Updated Ball Trainmaster Cannonball Watch in Silverlight 2? Well... there's now a contest underfoot and 8 videos to help you get started... all good stuff, and good luck! Michael has a post up about the contest: Enter to Win a Ball Watch by Creating One in Silverlight. Announcing Sketchables – Rapid Mockup Creation with SketchFlow By way of Jesse Liberty (http://jesseliberty.com/2010/06/08/why-developers-should-must-do-care-about-the-new-expression-blend/), this is a cool production by Philipp Sumi about a simple mockup framework he's created. Perst - a database for Windows Phone 7 Silverlight I think one of my first comments to Michael Washington back at the MVP Summit 2010 was that we'd need a database engine, and, too cool, we've got one. Andy Wigley discusses Perst in this post... to save you some time, here's the Perst site. A Chrome and Glass Theme - Part 7 Phil Middlemiss has part 7 of his great theme-building series up... this time he's giving the accordion control a once-over. 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Why Haven’t NFC Payments Taken Off?

    - by David Dorf
    With the EMV 2015 milestone approaching rapidly, there’s been renewed interest in smartcards, those credit cards with an embedded computer chip.  Back in 1996 I was working for a vendor helping Visa introduce a stored-value smartcard to the US.  Visa Cash was debuted at the 1996 Olympics in Atlanta, and I firmly believed it was the beginning of a cashless society.  (I later worked on MasterCard’s system called Mondex, from the UK, which debuted the following year in Manhattan). But since you don’t have a Visa Cash card in your wallet, it’s obvious the project never took off.  It was convenient for consumers, faster for merchants, and more cost-effective for banks, so why did it fail?  All emerging payment systems suffer from the chicken-and-egg dilemma.  Consumers won’t carry the cards if few merchants accept them, and merchants won’t install the terminals if few consumers have cards. Today’s emerging payment providers are in a similar pickle.  There has to be enough value for all three constituents – consumers, merchants, banks – to change the status quo.  And it’s not enough to merely exceed that value; it’s got to be a leap in value, because people generally resist change.  ATMs and transit cards are great examples of this, and airline kiosks and self-checkout systems to a lesser extent. Although Google Wallet and ISIS, the two leading NFC payment platforms in the US, have shown strong commitment, there’s been very little traction.  Yes, I can load my credit card number into my phone then tap to pay, but what was the incremental value over swiping my old card?  For it to be a leap in value, it has to offer more than just payment, which I can do very easily today.  The other two ingredients are thought to be loyalty programs and digital coupons, but neither Google nor ISIS really did them well. Of course a large portion of the mobile phone market doesn’t even support NFC thanks to Apple, and since it’s not in their best interest, that situation is unlikely to change.  Another issue is getting access to the “secure element,” the chip inside the phone where account numbers can be held securely.  Telco providers and handset manufacturers own that area, and they’re not willing to share with banks.  (Host Card Emulation, which has been endorsed by MasterCard and Visa, might be a solution.) Square recently gave up on its wallet, and MCX (the group of retailers trying to create a mobile payment platform) is very slow out of the gate.  That leaves PayPal and a slew of smaller companies trying to introduce easier ways to pay. But is it really so cumbersome to carry and swipe (soon to insert) a credit card?  Aren’t there more important problems to solve in the retail customer experience?  Maybe Apple will come up with some novel way to use iBeacons and fingerprint identification to make payments, but for now I think we need to focus on upgrading to Chip-and-PIN and tightening security.  In the meantime, NFC payments will continue to struggle.

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world. I always like to explore beyond what I know and share what I learn. The weekend is the best time for me to sit around, download random software onto my machine, which I like to call a lab machine (it is a pretty old laptop, hardly lab quality), and experiment with it. There are so many free betas available for download that it’s hard to keep track and even harder to find the time to play with very many of them.  This blog is about one you shouldn’t miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7.  I had already downloaded their Beta 6 and yesterday did the same for 7.   My impression is that they are onto something very, very interesting.  In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world’s first “emergent” database, which they tout as a brand new transactional database that is intended to dramatically change what’s possible with OLTP.  It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting claim for sure, and one that made me explore more. Based on what I’ve seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements versus the traditional relational database management system’s architecture of central control. Here’s my experience with the NuoDB Beta 6 so far: First, they pretty much threw away all the features you’d associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep.  It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user’s perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One of the interesting features is that one can run a transactional node and a storage node on my Windows laptop as well as on other platforms – indeed interesting for sure. It’s quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when/where you need it.  This is unlike what we have traditionally done to scale a database for an application – we replace the hardware with something more powerful (faster CPU and disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we’ve never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner – while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is all hard to believe until you’ve experienced it in action.  I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.  
Before I say anything more, I am going to do more experiments and more tests on this product and compare it with other existing, similar products. For me, it was a weekend well spent learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
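    As a quick illustration of what "SQL compliant with ACID transactions" buys you in practice, here is a minimal Java/JDBC sketch of a transfer-style transaction of the kind you could run against such a database. The driver URL, credentials, and the accounts table are placeholders of my own for this post, not something shipped with the beta; substitute the JDBC settings documented for whichever database you are testing.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class AcidTransferDemo {
        // Placeholder JDBC URL and credentials -- substitute the values documented
        // for your own installation (NuoDB or any other SQL-compliant database).
        private static final String URL  = "jdbc:com.nuodb://localhost/testdb";
        private static final String USER = "dba";
        private static final String PASS = "secret";

        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(URL, USER, PASS)) {
                con.setAutoCommit(false);   // group the two updates into one transaction
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                    debit.setBigDecimal(1, new java.math.BigDecimal("100.00"));
                    debit.setInt(2, 1);
                    debit.executeUpdate();

                    credit.setBigDecimal(1, new java.math.BigDecimal("100.00"));
                    credit.setInt(2, 2);
                    credit.executeUpdate();

                    con.commit();           // both updates become visible atomically
                } catch (SQLException e) {
                    con.rollback();         // on any failure, neither update is applied
                    throw e;
                }
            }
        }
    }

    The interesting part of an elastically scalable engine is that this very ordinary code keeps working unchanged while transactional and storage nodes are added or removed underneath it.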

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0) site. In this post, I’ll lay out our architectural thinking, and in part two, I’ll describe the technical details. Business Case Our challenge was this: we needed an easy way for a small group of our users to upload documents, in batches.  They also needed to quickly set the metadata values, as well as set security on individual files. Using the out-of-the-box uploads just didn’t fit.  The single file upload allows setting the metadata, but our users would be uploading dozens of files.  The multiple upload would allow our users to upload batches of files, but it doesn’t allow them to set the metadata during upload.  Also, neither upload method allows the users to set the permissions on the file. Our Solution We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application.  We also decided to deliver it to the clients via Click Once, so we would have the ability to easily update the application in the future. Lessons Learned After deploying our solution, we’ve learned a few lessons.  First, you’ll need to have the .Net Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem). Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports, named the same as the previous week.  We solved this by using folders within the document library to keep the sets of reports separate from previous weeks. One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow Click Once to work with Firefox.  This is still a Windows-only solution though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash. Summary Our users are happy with the Click Once app.  It allowed them to move all of their content to our SharePoint site in under a couple hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • The New Social Developer Community: a Q&A

    - by Mike Stiles
    In our last blog, we introduced the opportunities that lie ahead for social developers as social applications reach across every aspect and function of the enterprise. Leading the upcoming JavaOne Social Developer Program October 2 at the San Francisco Hilton is Roland Smart, VP of Social Marketing at Oracle. I got to ask Roland a few of the questions an existing or budding social developer might want to know as social extends beyond interacting with friends and marketing and into the enterprise. Why is it smart for developers to specialize as social developers? What opportunities lie in the immediate future that are making this a critical, in-demand position? Social has changed the way we interact with brands and with each other across the web. As we acclimate to a new social paradigm we also look to extend its benefits into new areas of our lives. The workplace is a logical next step, and we're starting to see social interactions more and more in this context. But unlocking the value of social interactions requires technical expertise and knowledge of developing social apps that tap into the social graph. Developers focused on integrating social experiences into enterprise applications must be familiar with popular social APIs and must understand how to build enterprise social graphs of their own. These developers are part of an emerging community of social developers and are key to socially enabling the enterprise. Facebook rebranded their Preferred Developer Consultant Group (PDC) and the Preferred Marketing Developers (PMD) to underscore the fact that developers are required inside marketing organizations to unlock the full potential of their platform. While this trend is starting on the marketing side with marketing developers, this is just an extension of the social developer concept that will ultimately drive social across the enterprise. What are some of the various ways social will be making its way into every area of enterprise organizations? How will it be utilized and what kinds of applications are going to be needed to facilitate and maximize these changes? Check out Oracle’s vision for the social-enabled enterprise. It’s a high-level overview of how social will have an impact across the enterprise. For example:
    - HR can leverage social in recruiting and retention
    - Sales can leverage social as a prospecting tool
    - Marketing can use social to gain market insight
    - Customer support can use social to leverage community support to improve customer satisfaction while reducing service cost
    - Operations can leverage social to improve systems
    That’s only the beginning. Once sleeves get rolled up and social developers and innovators get to work, still more social functions will no doubt emerge. What makes Java one of, if not the most, viable platforms on which to build these new enterprise social applications? Java is certainly one of the best platforms on which to build social experiences because there’s such a large existing community of Java developers. This means you can affordably recruit talent, and it's possible to effectively solicit advice from the community through various means, including our new Social Developer Community. Beyond that, there are already some great proof points that Java is the best platform for creating social experiences at scale. Consider LinkedIn and Twitter. Tell us more about the benefits of collaboration and more about what the Oracle Social Developer Community is. 
What opportunities does that offer up and what are some of the ways developers can actively participate in and benefit from that community? Much has been written about the overall benefits of collaborating with other developers. Those include an opportunity to introduce yourself to the community of social developers, foster a reputation, establish expertise, contribute to the advancement of the space, get feedback, experiment with the latest concepts, and gain inspiration. In short, collaboration is a tool that must be applied properly within a framework to get the most value out of it. The OSDC is a place where social developers can congregate to discuss the opportunities and challenges of building social integrations into their applications. What “needs” will this community have? We don't know yet. But we wanted to create a forum where we can engage and understand what social developers are thinking about, excited about, struggling with, etc. The OSDC can then step in if we can help remove barriers and add value in a serious and committed way so Oracle can help drive practice development.
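    To make the plumbing a little more concrete, here is a minimal Java sketch of the kind of call a social developer writes constantly: fetching part of a social graph over a REST API and bringing the JSON back into the enterprise. The endpoint, token, and response shape are hypothetical stand-ins for whichever social platform or internal graph service you integrate with; this is an illustration, not a specific Oracle or Facebook API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SocialGraphClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical REST endpoint for an enterprise social graph; real social
            // platforms expose similar HTTP + JSON interfaces, but this URL and the
            // bearer token are illustrative only.
            String endpoint = "https://social.example.com/graph/v1/users/12345/connections";
            String token = System.getenv("SOCIAL_API_TOKEN");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                    .header("Authorization", "Bearer " + token)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // A real integration would parse the JSON and merge the connections
            // into the enterprise's own social graph store.
            System.out.println("HTTP " + response.statusCode());
            System.out.println(response.body());
        }
    }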

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering Simplified and High-Performance Solutions for Cloud, Big Data Analytics, and Real-Time Replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends, including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under three key themes:
    - Future-Ready Solutions: Supporting Current and Emerging Initiatives
    - Extreme Performance: Even higher performance than ever before
    - Fast Time-to-Value: Higher IT Productivity and Simplified Solutions
    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.
    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds. 
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • Tips to Make Your Website Cell Phone Friendly

    - by Aditi
    Working on a new website design? Or redesigning your website? There is a lot more to consider nowadays than just user experience, clean code, CSS, etc. One important attribute you must not miss is making it mobile friendly! With the growing use of handhelds and unlimited data plans, people browse on their cellphones, and those phones come in all different sizes. It is tough to make a website that looks great not just on a high-resolution widescreen monitor or LCD, but also equally impressive at the low resolutions of cellphones. Today we are going to discuss the factors that can help you make a website cellphone friendly. Fluid Width Layouts As we start discussing this, most people speak of fluid width layouts as a vital step in making your website mobile friendly. Fluid width allows the width of your website to stretch or shrink depending on the browser size. However, while having a layout which flows with the width of the screen’s resolution is certainly convenient, more often than not the website was originally laid out with a desktop in mind. Compressing a fluid layout to 320 pixels can do some serious damage to the layout. Thus, some people strongly believe it is far better to have a mobile style sheet, lay out the content specifically for that screen, and keep more control over the display. The best thing to do is to detect the type of platform that is connecting to your website and disable or change some tools and effects to make it look better, if not perfect. Keep Your Web Pages Short One must avoid long pages on a website; a lot of scrolling makes it very unfriendly for users, and on mobile devices this is a huge drawback because of the longer time it takes to download the webpage. Everyone likes crisp and concise content, and such pages are easier to load and browse. This makes your website accessible across all platforms. Also try to keep URLs short; if users have to type them, save them that much work, especially since typing on a cellphone with no QWERTY keyboard can be tough. Usable Navigation & Search Unlike on desktops, your website’s navigation won’t work as well on a cellphone. Keep the user experience for cellphone users in mind as you design your navigation. Try to keep your content centered, as they do have difficulty reading the webpage. I always look up to Google and their mobile pages as a great example. Keeping a functional and very visible search bar helps mobile users navigate by searching. Understanding Clean Website Code: Evolved for Mobile Clean code is important when you consider the diversity of handheld devices out there. Some cell phones may only understand WAP. More capable phones may understand WAP2, which allows rendering websites with XHTML and CSS. Most mobiles won’t display tables, floats, frames, JavaScript, and dynamic menus. Most cellphones will not support cookies. Devices at the high end of the mobile market, such as BlackBerry, Palm, or the upcoming iPhone, are highly capable and support nearly as much as a standard computer, but the masses still do not have such phones. You can use specific emulators to test your website on mobile devices. Make sure your color combinations provide good contrast between foreground and background colors, particularly for devices with fewer color options.
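    If you go the route of detecting the platform on the server and swapping in a mobile style sheet, the check itself is simple. Here is a minimal Java servlet sketch of that idea; the user-agent keywords and CSS paths are illustrative assumptions, and a production site would normally lean on a maintained device-detection library or CSS media queries rather than a hand-rolled list like this.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Serves a different style sheet depending on a rough guess at the client
    // platform. Keyword list and CSS paths are illustrative only; requires a
    // servlet container to run.
    public class PlatformAwareServlet extends HttpServlet {

        private static boolean looksLikeMobile(String userAgent) {
            if (userAgent == null) {
                return false;
            }
            String ua = userAgent.toLowerCase();
            return ua.contains("mobile") || ua.contains("blackberry")
                    || ua.contains("palm") || ua.contains("symbian");
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String css = looksLikeMobile(req.getHeader("User-Agent"))
                    ? "/styles/mobile.css"    // simplified, single-column layout
                    : "/styles/desktop.css";  // full fluid-width layout
            resp.setContentType("text/html");
            resp.getWriter().println("<html><head><link rel=\"stylesheet\" href=\""
                    + css + "\"/></head><body>Hello!</body></html>");
        }
    }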

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven’t installed it, I’m guessing this is for one of two reasons. Either you don’t know how it helps you or you don’t know how to install it. I’ll address both of those reasons today. First, let’s take a quick look at how My Oracle Support and Oracle Configuration Manager work together, to gain a good understanding of what their differences and roles are before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle’s customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support’s user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn’t need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don’t see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires:
    - The My Oracle Support credential (Support Identifier, username, and password)
    - Proxy specification (Host IP Address, Port number, username and password)
    Installation Step-by-Step
    1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    2. Unzip the zip file you downloaded from My Oracle Support – this will create a directory named CCR with several subdirectories.
    3. Using the command line, go to “$ORACLE_HOME/CCR/bin” and run the following command: “setupCCR”.
    4. Provide your My Oracle Support credential: login, password, and Support Identifier.
    5. The installer will start deploying the collector application.
    6. You have installed the Collector.
    Post Installation Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation. 
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager. Related documents available on My Oracle Support:
- Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
- Oracle Configuration Manager Prerequisites [ID 728473.5]
- Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
- Oracle Configuration Manager Collection Overview [ID 728985.5]
- Oracle Configuration Manager Security Overview [ID 728982.5]
- Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • Oracle’s New Approach to Cloud-based Applications User Experiences

    - by Oracle OpenWorld Blog Team
    By Misha Vaughan It was an exciting Oracle OpenWorld this year for customers and partners, as they got to see what their input into the Oracle user experience research and development process has produced for cloud-delivered applications. The result of all this engagement and listening is a focus on simplicity, mobility, and extensibility. These were the core themes across Oracle OpenWorld sessions, executive roundtables, and analyst briefings given by Jeremy Ashley, Oracle's vice president of user experience. The highlight of every meeting with a customer featured the new simplified UI for Oracle’s cloud applications.    Attendees at some sessions and events also saw a vision of what is coming next in the Oracle user experience, and they gave direct feedback on whether this would help solve their business problems.  What did attendees think of what they saw this year? Rebecca Wettemann of Nucleus Research was part of  an analyst briefing on next-generation user experiences from Oracle. Here’s what she told CRM Buyer in an interview just after the event:  “Many of the improvements are incremental, which is not surprising, as Oracle regularly updates its application,” Rebecca Wettemann, vice president of Nucleus Research, told CRM Buyer. "Still, there are distinct themes to this latest set of changes. One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually—the intuitive nature of the application and the design work they have done with this goal in mind. The software uses as few buttons and fields as possible," she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is."  What else did we hear? Oracle OpenWorld is a time when we can take a broader pulse of our customers’ and partners’ concerns. This year we heard some common user experience themes on the following: · A desire to continue to simplify widely used self-service tasks · A need to understand how customers or partners could take some of the UX lessons learned on simplicity and mobility into their own custom areas and projects  · The continuing challenge of needing to support bring-your-own-device and corporate-provided mobile devices to end users · A desire to harmonize user experiences across platforms for specific business-use cases  What does this mean for next year? Well, there were a lot of things we could only show to smaller groups of customers in our Oracle OpenWorld usability labs and HQ lab tours, to partners at our Expo, and to analysts under non-disclosure agreements. But we used these events as a way to get some early feedback about where we are focusing for the year ahead. Attendees gave us a positive response: @bkhan Saw some excellent UX innovations at the expo “@usableapps: Great job @mishavaughan and @vinoskey on #oow13 UX partner expo!” @WarnerTim @usableapps @mishavaughan @vinoskey @ultan Thanks for an interesting afternoon definitely liked the UX tool kits for partners. You can expect Oracle to continue pushing themes of simplicity, mobility, and extensibility even more aggressively in the next year.  If you are interested to find out what really goes on in the UX labs, such as what we are doing with smartphones, tablets, heads-up displays, and the AppsLab robots, feel free to reach out to me for more information: Misha Vaughan or on Twitter: @mishavaughan.

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:
    - On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms:
      - Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%.
      - Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.
    - Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with:
      - 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system.
      - 10x more performance per processor than the SPARC T2+ 1.4 GHz server.
      - 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.
    - In replication testing, the two-socket SPARC T4-2 server is over 3x faster than a four-socket SPARC Enterprise T5440 server in both the asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems.
    Performance Landscape
    Mobile Call Processing Test Performance
      System       Processor                 Sockets/Cores/Threads   Tps
      SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128            218,400
      M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32             162,900
      SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512            80,400
    TimesTen Performance Throughput Benchmark (TPTBM) Read-Only
      System       Processor                 Sockets/Cores/Threads   Tps
      SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512            7.9M
      SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128            6.5M
      M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32             3.1M
      T5440        SPARC T2+, 1.4 GHz        4 / 32 / 256            3.1M
    TimesTen Performance Throughput Benchmark (TPTBM) Update-Only
      System       Processor                 Sockets/Cores/Threads   Tps
      SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128            547,800
      M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32             363,800
      SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512            240,500
    TimesTen Replication Tests
      System        Processor                Sockets/Cores/Threads   Asynchronous   2-Safe
      SPARC T4-2    SPARC T4, 2.85 GHz       2 / 16 / 128            38,024         13,701
      SPARC T5440   SPARC T2+, 1.4 GHz       4 / 32 / 256            11,621         4,615
    Configuration Summary
    Hardware Configurations:
    SPARC T4-2 server
      2 x SPARC T4 processors, 2.85 GHz
      256 GB memory
      1 x 8 Gbs FC Qlogic HBA
      1 x 6 Gbs SAS HBA
      4 x 300 GB internal disks
      Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
      1 x Sun Fire X4275 server configured as COMSTAR head
    SPARC T3-4 server
      4 x SPARC T3 processors, 1.6 GHz
      512 GB memory
      1 x 8 Gbs FC Qlogic HBA
      8 x 146 GB internal disks
      1 x Sun Fire X4275 server configured as COMSTAR head
    SPARC Enterprise M4000 server
      4 x SPARC64 VII+ processors, 2.66 GHz
      128 GB memory
      1 x 8 Gbs FC Qlogic HBA
      1 x 6 Gbs SAS HBA
      2 x 146 GB internal disks
      Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
      1 x Sun Fire X4275 server configured as COMSTAR head
    Software Configuration:
      Oracle Solaris 11 11/11
      Oracle TimesTen 11.2.2.4
    Benchmark Descriptions
    TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required. Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. 
The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources. Parallel Replication tests using both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no data-loss or high availability, transactions are replicated between servers immediately emphasizing low latency. For both environments, performance is measured in the number of parallel replication servers and the maximum transactions-per-second for all concurrent processes. See Also SPARC T4-2 Server oracle.com OTN Oracle TimesTen In-Memory Database oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.
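    For readers who want a feel for how this kind of throughput number is produced, the sketch below shows the general shape of a single-threaded read-only measurement loop over JDBC (TimesTen exposes standard JDBC and ODBC interfaces). It is not the TPTBM source; the connection URL, table, key range, and timing window are placeholder assumptions for illustration. A real benchmark run launches many such loops concurrently and aggregates the per-process counts until the system saturates, which is how the peak figures in the tables above are reached.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReadThroughputSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection URL -- substitute your own TimesTen DSN
            // (direct or client/server) or any other JDBC data source.
            String url = "jdbc:timesten:direct:dsn=sampledb";

            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT balance FROM accounts WHERE id = ?")) {

                long deadline = System.nanoTime() + 10_000_000_000L; // run for ~10 seconds
                long reads = 0;

                while (System.nanoTime() < deadline) {
                    ps.setInt(1, (int) (reads % 100_000));  // cycle through 100k keys
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                    }
                    reads++;
                }
                System.out.printf("~%d reads/second (single thread)%n", reads / 10);
            }
        }
    }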

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

  Code   Area(s) of improvement (cache performance, redundant operations, loop structures)   Performance improvement
  1      x x                                                                                 15.5
  2      x                                                                                   2.8
  3      x x                                                                                 2.5
  4      x                                                                                   2.1
  5      x x                                                                                 2.0
  6      x                                                                                   5.0
  7      x                                                                                   5.8
  8      x                                                                                   6.3
  9                                                                                          2.2
  10     x x                                                                                 3.3
  Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing

  do I = 0, 1010, delta_x
    IM = I - delta_x
    IP = I + delta_x
    do J = 5, 995, delta_x
      JM = J - delta_x
      JP = J + delta_x
      T1 = CA1(IP, J) + CA1(I, JP)
      T2 = CA1(IM, J) + CA1(I, JM)
      S1 = T1 + T2 - 4 * CA1(I, J)
      CA(I, J) = CA1(I, J) + D * S1
    end do
  end do

In code 2, the culprit is conditionals

  do I = 1, N
    do J = 1, N
      If (IFLAG(I,J) .EQ. 0) then
        T1 = Value(I, J-1)
        T2 = Value(I-1, J)
        T3 = Value(I, J)
        T4 = Value(I+1, J)
        T5 = Value(I, J+1)
        Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
        Delta = ABS(T3 - Value(I,J))
        If (Delta .GT. MaxDelta) MaxDelta = Delta
      endif
    enddo
  enddo

I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

  L1: for i      L2: for i      L3: for i
        for l          for l          for j
          for k          for j          for k
            for j          for k          for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

  do j = 1, GZ
    do i = 1, GZ
      T1 = CA(i+0, j-1) + CA(i-1, j+0)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
    enddo
  enddo

where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4,

  do J = 1, GZ-2, 2
    do I = 1, GZ-2, 2
      T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
      T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
      T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
      T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
      T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
      T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
      T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
      S1 = T1 + T4 - 4 * CA1(i+0, j+0)
      S2 = T2 + T5 - 4 * CA1(i+1, j+0)
      S3 = T3 + T6 - 4 * CA1(i+0, j+1)
      S4 = T4 + T7 - 4 * CA1(i+1, j+1)
      CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
      CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
      CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
      CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
    enddo
  enddo

The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

  for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
      sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
  }

and after code

  for (k = 0; k < KK - 8; k+=8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
      sum0 += W[y][0][k+0] * delta[y];
      sum1 += W[y][0][k+1] * delta[y];
      sum2 += W[y][0][k+2] * delta[y];
      sum3 += W[y][0][k+3] * delta[y];
      sum4 += W[y][0][k+4] * delta[y];
      sum5 += W[y][0][k+5] * delta[y];
      sum6 += W[y][0][k+6] * delta[y];
      sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0;
    backprop[k+1] = sum1;
    backprop[k+2] = sum2;
    backprop[k+3] = sum3;
    backprop[k+4] = sum4;
    backprop[k+5] = sum5;
    backprop[k+6] = sum6;
    backprop[k+7] = sum7;
  }

for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3

  for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
      for (k = 0; k < NK[u]; k++) {
        dW[y][u][k] += delta[y] * I1[i++];
      }
    }
  }

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

    for (y = 0; y < NY; y++) {
      i = 0;
      Dy = delta[y];
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += Dy * I1[i++];
        }
      }
    }

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + \
        (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] (a short sketch of this rewrite appears below, after the code 4 example).

A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well: there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent of time. Trading space for time, I precomputed the index values prior to entering the time loop and stored them in two arrays. I then replaced the index calculations with reads of those arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N x M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i += 8) {
      for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
          sum0 += A[i+0][k] * B[j][k];
          sum1 += A[i+1][k] * B[j][k];
          sum2 += A[i+2][k] * B[j][k];
          sum3 += A[i+3][k] * B[j][k];
          sum4 += A[i+4][k] * B[j][k];
          sum5 += A[i+5][k] * B[j][k];
          sum6 += A[i+6][k] * B[j][k];
          sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
      }
    }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.
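Here is the promised sketch of the MAT4D rewrite for code 5. The container type and loop bounds are hypothetical, and the rewrite assumes that i is the contiguous dimension, i.e. that strides[3] equals the element size, so plain indexing off a0 touches the same elements as the macro.

    /* Hypothetical container matching the MAT4D macro above (byte offsets in strides[]). */
    typedef struct { char *data; long strides[4]; } Array4D;

    double row_sum(Array4D *a, int q, int j, int k, int ni)
    {
        double sum = 0.0;
        /* Before: sum += *MAT4D(a, q, i, j, k);  the macro is re-evaluated every iteration. */
        double *a0 = MAT4D(a, q, 0, j, k);   /* invariant base address, computed once */
        for (int i = 0; i < ni; i++)
            sum += a0[i];                    /* unit-stride loads, no address arithmetic in the loop */
        return sum;
    }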
In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index.

    for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
      }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
    }

[N.B. the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp5;
      } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp4 * tmp6;
      } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp5;
      } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }
      for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
      }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays, one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old-new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures.
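A minimal sketch of that pointer-swap idiom is shown below. The names are hypothetical, and relax stands for whatever sweep produces the new values from the old ones; the point is that resetting the data structures costs two pointer assignments instead of an N x N copy.

    /* Hypothetical relaxation sweep: reads the old grid, writes the new grid. */
    void relax(double *newg, const double *oldg, int n);

    void iterate(double *grid_a, double *grid_b, int n, int max_iter)
    {
        double *oldg = grid_a;
        double *newg = grid_b;
        for (int iter = 0; iter < max_iter; iter++) {
            relax(newg, oldg, n);
            /* Swap roles for the next iteration; no data is copied. */
            double *tmp = oldg;
            oldg = newg;
            newg = tmp;
        }
    }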
Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x.

Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers. Clearly, we have a problem: personal scientific research codes are highly inefficient and are not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming.

I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience needed to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills.

Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Reminder: Benefits of Virtualization for ISVs - 14/Dec/10, Porto

    - by Paulo Folgado
    This training session addresses the main difficulties that Independent Software Vendors (ISVs) face when they have to choose the platforms on which they will certify, install, and support their applications, and how Oracle VM (and Oracle Enterprise Linux) can help them overcome those difficulties.

    The classic ISV business model - develop an application to solve a particular business problem, analyze the market to determine which operating systems and hardware the customers in the target market use, and decide to support the hardware and software platforms used by 80% of those customers (treating other configurations requested by a few important customers as exceptions) - worked well in the 1980s and early 1990s, when there was far less platform diversity. With the appearance in recent years of multiple operating system versions and Linux "flavours", however, this model has become a nightmare. Each customer has its platform of choice and expects ISVs to support it, which is a drain on ISV resources and costs.

    Oracle's virtualization technologies, by "simulating" a given hardware configuration - making the operating system "think" it is running on a predefined, standardized hardware configuration on which the applications run - are an excellent vehicle for ISVs looking for a simple, easy-to-install, easy-to-support way to deploy their applications, delivering large cost savings in the development, testing, and support of those applications.

    Who should attend? This session is aimed primarily at those who make decisions about the technology platforms an ISV has to support, as well as those who manage the cost structure of its operations and need a view of the costs associated with developing, certifying, installing, and supporting multiple platforms. If you want to learn more about Oracle VM and how it can dramatically reduce your costs, do not miss this session.

    AGENDA:
    09:00 Welcome & Introduction
    ISV Partner View... Why Use Virtualization?
    The ISV Deployment Dilemma: The Problem of Supporting Multiple Platforms
    How can Virtualization Help?
    The use of Templates
    What is a Template?
    How are Templates Created?
    Customer's Point of View
    Assembly Builder
    Weblogic Virtual Edition
    Managing Oracle VM
    Best Practices for Virtualizing Oracle Database 11g
    Managing Virtual Environments
    Coffee Break
    Oracle Complete and Integrated Virtualization Portfolio
    From Datacenter to Desktop
    The Next Generation Virtualization
    Private Cloud with Middleware Virtualization
    Benefits of Using Oracle VM (and Oracle Enterprise Linux)
    Support Advantages
    Production Ready Virtual Machines
    Licensing Terms
    Partner Resources and OPN Benefits
    12:45 Q&A and Wrap-up

    Date: 14 December, 09:00 - 13:00
    Location: Oracle Portugal, Av. da Boavista, 1837 - Edifício Burgo - Escritório 13.4, 4100-133 PORTO
    Audience: Development, Technology, and Services leads at Oracle ISV partners
    Training delivered by Altimate

    Read the article
