Search Results

Search found 6001 results on 241 pages for 'requires'.


  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted “Microsoft Shows Off Radical New UI, Could Be Used In Windows 8” on Slashdot. I took such grave exception to his post that I found it necessary to my senses to write this blog.

    We need to go back many years to the days of hand-cranked calculators and early mainframe computers. These devices had singular purposes – they were “number crunchers” used to make accounting easier. The front-facing display in early mainframes was “blinken lights.” The calculators did provide printing – in the form of paper tape – and the mainframes used line printers to generate reports as needed. We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a typesetting machine. The originals go back to 1867, and the keyboard layout has remained much the same to this day. In the earlier years, Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I helped manufacture, used a keyboard very similar to the keyboards in use today. It also produced the punched paper tape that we used to program this computer in machine language. Everything considered, this computer, which dates back to the late 1960s, had a keyboard for input and a roll of paper as output. So in a very rudimentary fashion little has changed. Oh – we didn’t have a mouse!

    The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore, and onward to our modern machines changed the hardware that we interfaced to, but changed little in the way we input, view, and output the results of our computing effort. The mouse made some changes, and the advent of windowed interfaces such as Windows and Apple made things somewhat easier for the user. My 4-year-old granddaughter plays her Dora games on our computer. She knows how to start programs, use the mouse, and play the game, and is quite adept, so we have come some distance in making computers usable.

    One of my chief bitches is the constant harangues leveled at Microsoft. Yup – they are a money making organization. You like Apple? No problem for me. I don’t use Apple, mostly because I’m comfortable in the Windows environment, but probably more because I don’t like Apple’s “holier than thou” attitude. Some think they do superior things, and that’s also fine with me. Obviously the iPhone has not done badly, and other Apple products have fared well. But they are expensive. I just built a new machine with 4 terabytes of storage, an Intel Core i7 950 processor, and 12 GB of RAM. It cost me – with dual monitors – less than 2000 dollars.

    Now to the chief reason for this blog. I’m going to continue developing software for as long as I’m able. For that reason I don’t see my keyboard, mouse, and displays changing much for many years. I also don’t think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here: http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn’t it be great if all they needed was hand gestures? Although not mentioned, it would also be nice if computers responded intelligently to a user’s voice.
    There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to become a cost-effective reality, but it is certainly coming. I just don’t believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • Few events I’m speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I’ll be glad you do! These are the events, with dates and locations, that I know I’ll be speaking at so far.

    February 16th: SQL Saturday #198 - Vancouver, Canada

    The session I’ll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we’ll dive into the internals of data pages, transaction log records and page restores.

    March 8th-9th: SQL Saturday #194 - Exeter, UK

    In Exeter I’ll be presenting twice. On the first day I’ll have a full day precon titled From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead and details it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, thus giving us information SQL Trace just couldn't. They were designed and built with performance in mind, and it shows. Mastering Extended Events requires learning at least one new skill: XML querying.

    The second session I’ll have on Saturday is titled SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn’t realize what could be done. (See the short parameterization sketch at the end of this post.)

    April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only)

    This is a conference with history. This year marks its 18th year running. It’s a relatively large IT conference that focuses on various Microsoft technologies like .NET, Azure, SQL Server, Exchange, Security, etc. The main language of the sessions is Slovenian, but this is slowly changing, so it’s becoming more interesting for foreign attendees. This year it’s happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn’t you agree? And this year there are quite a few well known speakers present! My session title isn’t known yet.

    May 2nd-4th: SQL Bits XI – Nottingham, UK

    SQL Bits is the largest SQL Server conference in Europe. It’s a 3 day conference with top speakers and content all dedicated to SQL Server. The session I’ll present here is an hour long version of the precon I’ll give in Exeter.
    From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we'll focus more on how Extended Events work, with only a brief overview of the old SQL Trace architecture.
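
    As a minimal illustration of the “use SQL parameters” advice from the injection session above (a Python/sqlite3 sketch, not from the original post; the table and input are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        user_input = "'; DROP TABLE users; --"

        # Vulnerable pattern: attacker-controlled text is pasted into the SQL string.
        # query = "SELECT role FROM users WHERE name = '%s'" % user_input

        # Parameterized: the driver passes the value separately, so it is
        # always treated as data, never as SQL.
        rows = conn.execute("SELECT role FROM users WHERE name = ?",
                            (user_input,)).fetchall()

    Parameters stop the classic string-concatenation attack, but as the session asks: is that enough on its own?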

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors Downloads here include:

    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation

    The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.

    - Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    - Integrated Software and Big Data Connectors User's Guide HTML PDF

    Oracle Data Integrator (ODI) Application Adapter for Hadoop

    Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational, and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:

    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop

    Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop

    Oracle R Connector for Hadoop is a collection of R packages that provide:

    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files

    You install and load this package as you would any other R package.
    Using simple R functions, you can perform tasks such as:

    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System

    Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:

    - Data Pump files in HDFS
    - Delimited text files in HDFS
    - Hive tables

    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation

    - Cloudera's Distribution Including Apache Hadoop Library HTML
    - Oracle R Enterprise HTML
    - Oracle NoSQL Database HTML

    Recent Blog Posts

    - Big Data Appliance vs. DIY Price Comparison
    - Big Data: Architecture Overview
    - Big Data: Achieve the Impossible in Real-Time
    - Big Data: Vertical Behavioral Analytics
    - Big Data: In-Memory MapReduce
    - Flume and Hive for Log Analytics
    - Building Workflows in Oozie

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding, and streaming audio and video. It includes libavcodec, the leading audio/video codec library. FFmpeg hosting gets its name from a set of server side programs (modules) called FFmpeg. There are a number of applications and web scripts available which allow webmasters to create their own video sharing websites. Video hosting typically requires:

    - PHP 4.3 and above (including support of CLI)
    - Mencoder and also Mplayer
    - FFmpeg-PHP
    - MySQL database server
    - LAME MP3 Encoder
    - Libogg + Libvorbis
    - GD Library 2 or higher
    - CGI-BIN

    There are a number of web service providers who provide FFmpeg hosting service. Following is a list of some of the best FFmpeg hosting providers for both Linux and Windows Server.

    Dream Host
    Dreamhost provides web based email access, mail filtering, spam filtering, unlimited email ids, vacation autoresponder, python support, full CGI access, and many more services. Price: $7.95

    Micfo
    It offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, with many more services. All in all, one of the best options to consider. Price: $5

    Host Upon
    HostUpon offers FFmpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin / PHPMotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their FFmpeg. Their FFmpeg hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95

    DownTown Host
    DownTown Host provides full and exceptional support by live chat and telephone. It has high-power, modern servers and the finest web server technology. It offers free search engine submission and continuous data backup protection with free email forwarding and site move. There are many more services too.

    Site5
    This FFmpeg service provider offers an uptime guarantee, real time stats on each server, and many more attractive services. Price: $4.95

    Cirtex Hosting
    Cirtex Hosting allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49

    FLV Hosting
    FLV Hosting supplies RTMP server streaming for large size video streaming and server side recording. It is flexible and costs less. They customize to the client's requirements. Price: $9.95

    AptHost
    This hosting service provides 24x7x365 premium support and fully FFmpeg enabled services. Price: $4.95

    HostMDS
    Great support, priced low. It provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer, and spam filtering. It offers instant account setup, low latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95
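
    As a minimal illustration of the transcoding step these hosts automate, here is a sketch that shells out to the ffmpeg binary from Python (the filenames are hypothetical, and it assumes ffmpeg is on the PATH):

        import subprocess

        # Convert an uploaded AVI to a web-friendly MP4; ffmpeg picks
        # default codecs from the output extension when none are given.
        subprocess.run(["ffmpeg", "-i", "upload.avi", "upload.mp4"], check=True)

        # Grab a thumbnail one second in: seek with -ss, write one frame.
        subprocess.run(["ffmpeg", "-i", "upload.avi", "-ss", "1",
                        "-vframes", "1", "thumb.jpg"], check=True)

    Packages like FFmpeg-PHP wrap the same kind of invocation so video-sharing scripts can call it per upload.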

    Read the article

  • RTL (Arabic and Hebrew) Support for Windows Phone 7

    - by Daniel Moth
    Problem and Background

    Currently there is no support for right-to-left rendering in Windows Phone 7 when developing with Silverlight (itself built on .NET Compact Framework). When I encountered that limitation, I had a flashback to 2005, when I complained about the lack of RTL on NETCF. Unfortunately, the partial solution I proposed back then requires PInvoke, and there is no such support on Windows Phone today. Fortunately, my RTL requirements this time were more modest: all I wanted to do was display correctly a translation (of Hebrew or Arabic) in my FREE WP7 translator app.

    For v1.0 of the app, the code received a string from the service and just put it up on the screen as the translated text. In Arabic and Hebrew, that string (incorrectly) appeared reversed. I knew that, but decided that since it is a platform limitation, I could live with it and so could the users. Yuval P, a colleague at Microsoft, pushed me to offer support for Hebrew (something I wasn't motivated to pursue, if I'm honest). After many back and forths, we landed on some code that works. It is certainly not the most efficient code (quite the opposite), but it works and met the bar of minimum effort for v1.1. Thanks Yuval for insisting and contributing most of the code!

    After Hebrew support was there, I thought the same solution would work for Arabic. Apparently, reversing the Arabic text is not enough: Arabic characters render differently depending on what precedes/follows them(!). So I needed some kind of utility that takes a reversed Arabic string and returns the same string but with the relevant characters "fixed". Luckily, another MS colleague has put out such a library (thanks Bashar): http://arabic4wp7.codeplex.com/.

    RTL Solution

    So you have a reversed RTL string and want to make it "right" before displaying it on the screen. This is what worked for me (ymmv):

    1. Split the string into "lines". Not doing this, and just reversing the string and sticking it in a wrapping text control, means that the user not only has to read right to left, they also have to read bottom up.
    2. The previous step must take into account a line length that works for both portrait and landscape modes and, of course, must not break words in the middle, i.e. it must find natural breaks.
    3. For each line, break it up into words and reverse the order of the words and the order of the letters within each word.
    4. In the previous step, do not reverse words that should be preserved, e.g. Windows and other such English words that are mixed in with the Arabic or Hebrew words. The same exclusion from reversal applies to numbers.
    5. Specifically for Arabic, once a word is reversed, also change its characters.
    6. For some code paths, the above has to take into account whether the translation is "from" an RTL language or "to" an RTL language.

    (A minimal sketch of steps 3 and 4 appears at the end of this post.)

    I packaged the solution in a single code file containing a static class (see the "Background" section above for… background and credits). Download RTL.cs for your Windows Phone app (to see its usage in action, download for FREE "The best translator app").

    Enjoy, and if you decide to improve on the code, feel free to share back with me… Comments about this post welcome at the original blog.
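
    To make steps 3 and 4 concrete, here is a minimal sketch in Python (the post's actual implementation is the C# RTL.cs above; the regex and function name here are illustrative only, and Arabic contextual shaping is left to a library such as the one linked above):

        import re

        # Words made purely of Latin letters or digits keep their internal
        # order (step 4); everything else is treated as RTL text.
        PRESERVE = re.compile(r"^[A-Za-z0-9]+$")

        def fix_rtl_line(line):
            """Reverse the order of the words and the letters within each
            word (step 3), leaving English words and numbers intact."""
            out = []
            for word in reversed(line.split(" ")):
                out.append(word if PRESERVE.match(word) else word[::-1])
            return " ".join(out)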

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How

    If you have heard of Oracle Configuration Manager (OCM), but haven't installed it, I'm guessing this is for one of two reasons. Either you don't know how it helps you or you don't know how to install it. I'll address both of those reasons today. First, let's take a quick look at how My Oracle Support and the Oracle Configuration Manager work together, to gain a good understanding of their differences and roles before we tackle the install.

    Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle's customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support.

    The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support's user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn't need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too.

    Quick Installation Process Overview

    Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don't see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires:

    - Your My Oracle Support credential (Support Identifier, username, and password)
    - Proxy specification (Host IP Address, Port number, username and password)

    Installation Step-by-Step

    1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    2. Unzip the file you downloaded from My Oracle Support; this will create a directory named CCR with several subdirectories.
    3. Using the command line, go to $ORACLE_HOME/CCR/bin and run the command "setupCCR".
    4. Provide your My Oracle Support credential: login, password, and Support Identifier.
    5. The installer will start deploying the collector application.

    You have installed the Collector.

    Post Installation

    Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation.
    If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details in the Installation and Administration Guide for My Oracle Support Configuration Manager.

    Related documents available on My Oracle Support:

    - Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
    - Oracle Configuration Manager Prerequisites [ID 728473.5]
    - Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
    - Oracle Configuration Manager Collection Overview [ID 728985.5]
    - Oracle Configuration Manager Security Overview [ID 728982.5]
    - Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when Puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install:

        root@statsd:/opt/graphite# pip install carbon
        Downloading/unpacking carbon
          Downloading carbon-0.9.9.tar.gz
          Running setup.py egg_info for package carbon
            package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
        Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
        Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
        Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
        Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
        Installing collected packages: carbon
          Running setup.py install for carbon
            package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
            changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
            changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
            changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
            changing mode of /opt/graphite/bin/carbon-cache.py to 775
            changing mode of /opt/graphite/bin/carbon-relay.py to 775
            changing mode of /opt/graphite/bin/carbon-client.py to 775
        Successfully installed carbon
        Cleaning up...
        root@statsd:/opt/graphite# pip freeze | grep carbon
        root@statsd:

    Here is the verbose version of the install:

        root@statsd:/opt/graphite# pip install carbon -v
        Downloading/unpacking carbon
          Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5)
          Downloading carbon-0.9.9.tar.gz
          Running setup.py egg_info for package carbon
            running egg_info
            creating pip-egg-info/carbon.egg-info
            writing requirements to pip-egg-info/carbon.egg-info/requires.txt
            writing pip-egg-info/carbon.egg-info/PKG-INFO
            writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt
            writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt
            writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
            warning: manifest_maker: standard file '-c' not found
            package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
            reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
            writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
        Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
        Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
        Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
        Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
        Installing collected packages: carbon
          Running setup.py install for carbon
            running install
            running build
            running build_py
            creating build
            creating build/lib.linux-i686-2.7
            creating build/lib.linux-i686-2.7/carbon
            copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon
            copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon
            creating build/lib.linux-i686-2.7/carbon/aggregator
            copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator
            copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator
            copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator
            copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator
            package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
            creating build/lib.linux-i686-2.7/twisted
            creating build/lib.linux-i686-2.7/twisted/plugins
            copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
            copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
            copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
            copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon
            running build_scripts
            creating build/scripts-2.7
            copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7
            copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7
            copying and adjusting bin/carbon-cache.py -> build/scripts-2.7
            copying and adjusting bin/carbon-relay.py -> build/scripts-2.7
            copying and adjusting bin/carbon-client.py -> build/scripts-2.7
            changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
            changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
            running install_lib
            copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator
            copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator
            copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator
            copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator
            copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon
            copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins
            copying build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins
            copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins
            byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc
            byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc
            byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc
            byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc
            byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc
            byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc
            byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc
            byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc
            byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc
            byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc
            byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc
            byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc
            byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc
            byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc
            byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc
            byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc
            byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc
            byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc
            byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc
            byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc
            byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc
            byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc
            byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc
            byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc
            byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc
            byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc
            byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc
            byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc
            running install_data
            copying conf/storage-schemas.conf.example -> /opt/graphite/conf
            copying conf/rewrite-rules.conf.example -> /opt/graphite/conf
            copying conf/relay-rules.conf.example -> /opt/graphite/conf
            copying conf/carbon.amqp.conf.example -> /opt/graphite/conf
            copying conf/aggregation-rules.conf.example -> /opt/graphite/conf
            copying conf/carbon.conf.example -> /opt/graphite/conf
            running install_egg_info
            running egg_info
            creating lib/carbon.egg-info
            writing requirements to lib/carbon.egg-info/requires.txt
            writing lib/carbon.egg-info/PKG-INFO
            writing top-level names to lib/carbon.egg-info/top_level.txt
            writing dependency_links to lib/carbon.egg-info/dependency_links.txt
            writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
            warning: manifest_maker: standard file '-c' not found
            reading manifest file 'lib/carbon.egg-info/SOURCES.txt'
            writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
            removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it)
            Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info
            running install_scripts
            copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin
            copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin
            copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin
            copying build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin
            copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin
            changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
            changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
            changing mode of /opt/graphite/bin/carbon-cache.py to 775
            changing mode of /opt/graphite/bin/carbon-relay.py to 775
            changing mode of /opt/graphite/bin/carbon-client.py to 775
            writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt'
        Successfully installed carbon
        Cleaning up...
          Removing temporary dir /opt/graphite/build...
        root@statsd:/opt/graphite#

    For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
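
    One way to probe this (a diagnostic sketch, assuming setuptools/pkg_resources is available; note the verbose log above shows carbon's egg-info being copied to /opt/graphite/lib, which is typically not on sys.path):

        import sys
        import pkg_resources  # provided by setuptools/distribute

        # Distributions visible to Python's packaging machinery; if carbon's
        # egg-info lives under /opt/graphite/lib rather than a sys.path
        # entry, it will be missing here and from `pip freeze`.
        print([str(d) for d in pkg_resources.working_set
               if d.project_name == "carbon"])
        print(sys.path)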

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

         #   area(s) of improvement   performance improvement
         1   x x                      15.5
         2   x                         2.8
         3   x x                       2.5
         4   x                         2.1
         5   x x                       2.0
         6   x                         5.0
         7   x                         5.8
         8   x                         6.3
         9                             2.2
        10   x x                       3.3

    Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

    When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
    6 out of the 10 codes studied here benefited from such high level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i          L2: for i          L3: for i
              for l              for l              for j
                for k              for j              for k
                  for j              for k              for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

        for (k = 0; k < NK[u]; k++) {
          sum = 0.0;
          for (y = 0; y < NY; y++) {
            sum += W[y][u][k] * delta[y];
          }
          backprop[i++] = sum;
        }

    and here is the after, for one of the loops unrolled 8 times:

        for (k = 0; k < KK - 8; k += 8) {
          sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
          sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
          for (y = 0; y < NY; y++) {
            sum0 += W[y][0][k+0] * delta[y];
            sum1 += W[y][0][k+1] * delta[y];
            sum2 += W[y][0][k+2] * delta[y];
            sum3 += W[y][0][k+3] * delta[y];
            sum4 += W[y][0][k+4] * delta[y];
            sum5 += W[y][0][k+5] * delta[y];
            sum6 += W[y][0][k+6] * delta[y];
            sum7 += W[y][0][k+7] * delta[y];
          }
          backprop[k+0] = sum0;
          backprop[k+1] = sum1;
          backprop[k+2] = sum2;
          backprop[k+3] = sum3;
          backprop[k+4] = sum4;
          backprop[k+5] = sum5;
          backprop[k+6] = sum6;
          backprop[k+7] = sum7;
        }

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

    The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
          i = 0;
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += delta[y] * I1[i++];
            }
          }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
    In reality, dW and delta do not overlap in memory, so I rewrote the loop as

        for (y = 0; y < NY; y++) {
          i = 0;
          Dy = delta[y];
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += Dy * I1[i++];
            }
          }
        }

    Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

        #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

    The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

        a0 = MAT4D(a,q,0,j,k)

    before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

    A similar problem appears in code 6, a Fortran program. The key loop in this program is

        do n1 = 1, nh
          nx1 = (n1 - 1) / nz + 1
          nz1 = n1 - nz * (nx1 - 1)
          do n2 = 1, nh
            nx2 = (n2 - 1) / nz + 1
            nz2 = n2 - nz * (nx2 - 1)
            ndx = nx2 - nx1
            ndy = nz2 - nz1
            gxx = grn(1,ndx,ndy)
            gyy = grn(2,ndx,ndy)
            gxy = grn(3,ndx,ndy)
            balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
            balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
          end do
        end do

    The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

    Data operations

    Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

        for (i = 0; i < N; i += 8) {
          for (j = 0; j < M; j++) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (k = 0; k < K; k++) {
              sum0 += A[i+0][k] * B[j][k];
              sum1 += A[i+1][k] * B[j][k];
              sum2 += A[i+2][k] * B[j][k];
              sum3 += A[i+3][k] * B[j][k];
              sum4 += A[i+4][k] * B[j][k];
              sum5 += A[i+5][k] * B[j][k];
              sum6 += A[i+6][k] * B[j][k];
              sum7 += A[i+7][k] * B[j][k];
            }
            C[i+0][j] = sum0;
            C[i+1][j] = sum1;
            C[i+2][j] = sum2;
            C[i+3][j] = sum3;
            C[i+4][j] = sum4;
            C[i+5][j] = sum5;
            C[i+6][j] = sum6;
            C[i+7][j] = sum7;
          }
        }

    This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

    In code 5, we have the data version of the index optimization in code 6.
Here, a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
    } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
    } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
    } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

MaxDelta = 0.0
do J = 1, N
    do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
    enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a logical and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
    do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
    enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute more slowly than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

do i = 1, n
    do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
    end do
end do

into two disjoint loops

do i = 1, n
    do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    end do
end do

do i = 1, n
    do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
    end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books and manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
Panelists talk social marketing at the Oracle Social Media Summit

On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group) among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues who have been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event, and a few key topics have emerged:

David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point.

Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy consists of nothing more than cutting and pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture.

[Photo: Walter Benjamin, perplexed by auraless Twitter messages]

Not to get too far afield, but 20th century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork--the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the 'sphere of authenticity is outside the technical' so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the 'aura' of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them.
Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction--that editor who reviews the analytics and responds to user feedback--interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day.)

At the end of the day, experimentation is key. Just like trying to find the right joke to tell at the beginning of your presentation or that good opening line at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

sda1 ext3 /boot
sda2 ext4 / (current version)
sda3 ext4 / (old version)
sda4 swap /swap
sda5 ntfs (contains folders symbolically linked to /home on /)

So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have:

downloaded the install.iso
created a folder in /distro
copied the install.iso into /distro
extracted vmlinuz and initrd.lz into /distro

Then I modified /boot/grub/menu.lst with the following entry:

title Install Linux
root (hd0,1)
kernel /distro/vmlinuz
initrd /distro/initrd.lz

vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

unlzma < initrd.lz > initrd.img

and updated the menu.lst file to match; but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?

Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed to execute to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would've taken to use it.

Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.

Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity)

sudo mkdir /mnt/ubuntu
sudo mount /dev/sda3 /mnt/ubuntu

Step Three: Get debootstrap

sudo apt-get install debootstrap

Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk)

sudo mkdir /media/cdrom
sudo mount -o loop ~/ubuntu.iso /media/cdrom

Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture)

sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

The settings here vary.
While I loaded debootstrap using an install iso, you can also have debootstrap automatically download and install it from a repository link (while most of these repositories contain Debian versions, I'm still not clear on whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

Here's the link that I primarily used to figure this out.

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and larger. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads.

Netapp's Fundamental Problem

The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper.

Netapp has demonstrated this lack of large-block performance in at least two different ways. They have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECSFS, both recently. In 2011 they purchased Engenio to try to fill this gap in their portfolio.

Block Size Matters

So why does block size matter anyway? Many applications use large-block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics and Microsoft SQL; Hadoop HDFS is even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will have to again do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MS SQL database on a ZS3, you can set the specific LUNs for this database to 64k, and then when you do an application read/write it requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays will have some fancy prefetch algorithm and some nice cache, and maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly you will see why it matters. Here are a couple of READ examples using SAS and MS SQL. Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms.

[Chart: example with 128k blocks; note the number of drives required on the Netapp]
[Chart: example with 64k blocks]

You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU well before you get to these drive numbers, so you'd be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.

ZS3 World Record Price/Performance in the SPC-2 benchmark:
ZS3-2 is #1 in price/performance at $12.08
ZS3-2 is #3 in overall performance at 16,212 MBPS

Note: The number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price/performance of $29.79. A customer could purchase 2 x ZS3-2 systems in the benchmark with relatively the same performance and walk away with $600,000 in their pocket.
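The "simple math" mentioned above is easy to sketch in code as well. Below is a back-of-the-envelope version in C; the application IOPS, block sizes, and per-drive IOPS figures are illustrative assumptions, not measurements or benchmark results.

#include <stdio.h>

/* Back-of-the-envelope estimate: an application block larger than the
 * array's backend block is split into (app_block / backend_block)
 * disk operations per application IO. All inputs are assumptions. */
int main(void)
{
    const double app_iops      = 50000.0; /* application read IOPS (assumed) */
    const double app_block     = 65536.0; /* 64k application block */
    const double backend_block = 4096.0;  /* 4k backend block, as described above */
    const double drive_iops    = 200.0;   /* per-drive IOPS (assumed) */

    double amplification = app_block / backend_block; /* 16x for 64k over 4k */
    double disk_iops     = app_iops * amplification;  /* backend IOPS needed */

    printf("amplification: %.0fx\n", amplification);
    printf("drives at 4k backend blocks: %.0f\n", disk_iops / drive_iops);
    printf("drives at native 64k blocks: %.0f\n", app_iops / drive_iops);
    return 0;
}

With these assumed inputs, the 4k backend needs 16x the disk IOPS, and therefore roughly 16x the drives, to serve the same application read rate.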

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
04 JUNE 2014 | Oracle EMEA Partners Stop Press

STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners
Share an unforgettable experience at the Teatro Alla Scala in Milan

Dear valued Partner,

We are pleased to launch a partner contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest, giving the winners the opportunity to attend a roundtable chaired by Senior Oracle Executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014.

Criteria of Eligibility
A minimum deal value of $30k is required for qualification
Partners who are wholly or partially owned by a public sector organization are not eligible for participation

Winners
The winning VARs will be:
The highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1)
The highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region
The winning VADs (3) will be:
The highest ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA
The highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA

(1) Two VAR winners for each EMEA region – Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel – as per the criteria outlined above
(2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or Sparc servers sold to the same customer by the same partner within the contest timelines.
(3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership. If you are not a member please register here.

The Prize
Winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle on the evening of September 8th, 2014 at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms, and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Good luck!!

For more information, please contact Sasan Moaveni.

Regards,
Olivier Tordo
Senior Director - Systems Business Development
Oracle EMEA Alliances & Channels

Resources: EMEA Hardware Partner Community | EMEA Oracle Partner Days | Find Partner Events | EMEA Partner News Blog | EMEA Partner Enablement Blog | Oracle PartnerNetwork

Copyright © 2014, Oracle and/or its affiliates. All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. SQL Server Performance Tuning is a very challenging subject that requires expertise in Database Administration and Database Development. I have always enjoyed talking about the SQL Server Performance Tuning subject. Just like doctors, I like to call every attempt of mine to improve the performance of SQL Server queries and the database server a practice, too. I have been working with SQL Server for more than 8 years, and I believe that I have mastered many of the performance tuning concepts. However, performance tuning is not a simple subject. There are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see it as a learning opportunity. I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something I have spent years learning. This time, I am going to present on parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases where limiting CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working on various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.

Session Details
Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
Abstract: "More CPU More Performance" – A very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of CXPACKET wait types, MAXDOP, the parallelism threshold and various other concepts.
Date and Time: March 23, 2012, 12:15 to 13:15
Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

Please submit your questions in the comments area and I will be sure to discuss them during my session. If I pick your question to discuss during my session, here is your gift, which I commit right now – the SQL Server Interview Questions and Answers book.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment, and reduce the overall cost. Since Oracle Data Mining provides models as first class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package.

Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account.

grant create any directory to dmuser;

Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a linux machine, we have the directory /scratch/oracle.

CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle';

If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query:

select model_name from user_mining_models;

There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls:

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp','dmdir','name =''MY_DT_MODEL''');
END;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp','dmdir',
             'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')');
END;

BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp','dmdir');
END;

A .dmp file can be imported into another schema or database using the following procedure call, for example:

BEGIN
  DBMS_DATA_MINING.IMPORT_MODEL('MY_MODELS.dmp', 'dmdir');
END;

As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.

    Read the article

  • Where should I start with debugging my exchange server?

    - by joadha
    I'm (foolishly?) attempting to install Exchange on top of Windows Server 2008 (64-bit) over Virtual Box running on Mac OSX Lion. Everything went smoothly until I got to the "Readiness Checks" tab. Readiness failed spectacularly on Hub Transport Role and Mailbox Role prereqs. Before I go too far down the rabbit hole in attempting to remedy this, I was hoping I could get some input on where to start in all of this. I already set up the following Active Directory roles, but it didn't seem to help: Active Directory Domain Services Active Directory Lightweight Directory Services I also enabled an Application Server role. Those three are the only roles I've set up-- I cannot locate within Server Manager the Organization Manager role or any of the other roles referenced in the list of borkedness below. Is this a typical experience in Exchange installation? Is there a tutorial created by somebody outside of Microsoft? Here's the output from Readiness Checks: Summary: 5 item(s). 2 succeeded, 3 failed. Elapsed time: 00:00:53 Configuring Prerequisites Completed Elapsed Time: 00:00:00 Languages Prerequisites Completed Elapsed Time: 00:00:09 Hub Transport Role Prerequisites Failed Error: Active Directory does not exist or cannot be contacted. Click here for help... http://go.microsoft.com/fwlink/? linkid=30939&l=en&v=ExBPA.14&id=51e5500d-8b18-4eee-bb8e-925d063b60a1 Error: You must be a member of the 'Organization Management' role group or 'Enterprise Admins' group to continue. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=1d750594-9222-44d7-8f80-45e522e889e6 Error: Setup encountered a problem while validating the state of Active Directory: Could not find information about the local domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: You must be logged on as an Exchange organization administrator to install or upgrade the first Hub Transport server role in the topology. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=e58f51fd-2c66-4a4b-914a-628dccf9a09f Error: The 'IIS 6 Metabase Compatibility' component is not installed. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=0a71c4f6-68de-40f7-94cf-74b73cbda37b Error: The 'IIS 7 Basic Authentication' component required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Windows Authentication' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 .NET Extensibility' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=5f29a861-f472-4f11-a23a-04155373f5ed Error: This computer is not part of a Windows domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: The user is not logged on to a Windows domain Click here for help... 
http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Warning: This computer requires the Microsoft Office 2010 Filter Packs. Please install the software from http://go.microsoft.com/fwlink/?LinkID=191548 Warning: The 'IIS 6 Management Console' component is recommended as it allows for the administration of all server roles. Install the component via Server Manager. Warning: Setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server 10.1.10.1. Elapsed Time: 00:00:15 Client Access Role Prerequisites Failed Error: Active Directory does not exist or cannot be contacted. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=51e5500d-8b18-4eee-bb8e-925d063b60a1 Error: Unable to read data from the Metabase. Ensure that Microsoft Internet Information Services is installed. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=a4a4d339-4009-4fb7-b842-ca2ba79f13f0 Error: The World Wide Web (W3SVC) service is either disabled or not installed on this computer. You must exit Setup, install the required component, then restart the Setup process. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=9eeaa77f-4d46-4d9a-9c36-f262a075392b Error: You must be a member of the 'Organization Management' role group or 'Enterprise Admins' group to continue. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=1d750594-9222-44d7-8f80-45e522e889e6 Error: Setup encountered a problem while validating the state of Active Directory: Could not find information about the local domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: You must be logged on as an Exchange organization administrator to install or upgrade the first Client Access server role in the topology. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=e58f51fd-2c66-4a4b-914a-628dccf9a09f Error: The 'IIS 6 Metabase Compatibility' component is not installed. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=0a71c4f6-68de-40f7-94cf-74b73cbda37b Error: The 'IIS 6 Management Console' component is not installed. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=0a71c4f6-68de-40f7-94cf-74b73cbda37b Error: The 'IIS 7 Dynamic Content Compression' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/? linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Static Content Compression' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Basic Authentication' component required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Windows Authentication' component is required. Install the component via Server Manager. Click here for help... 
http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Digest Authentication' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 .NET Extensibility' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=5f29a861-f472-4f11-a23a-04155373f5ed Error: This computer is not part of a Windows domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: The user is not logged on to a Windows domain Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Warning: Setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server 10.1.10.1. Elapsed Time: 00:00:14 Mailbox Role Prerequisites Failed Error: Active Directory does not exist or cannot be contacted. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=51e5500d-8b18-4eee-bb8e-925d063b60a1 Error: Unable to read data from the Metabase. Ensure that Microsoft Internet Information Services is installed. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=a4a4d339-4009-4fb7-b842-ca2ba79f13f0 Error: The World Wide Web (W3SVC) service is either disabled or not installed on this computer. You must exit Setup, install the required component, then restart the Setup process. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=9eeaa77f-4d46-4d9a-9c36-f262a075392b Error: You must be a member of the 'Organization Management' role group or 'Enterprise Admins' group to continue. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=1d750594-9222-44d7-8f80-45e522e889e6 Error: Setup encountered a problem while validating the state of Active Directory: Could not find information about the local domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: You must be logged on as an Exchange organization administrator to install or upgrade the first Mailbox server role in the topology. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=e58f51fd-2c66-4a4b-914a-628dccf9a09f Error: The 'IIS 6 Metabase Compatibility' component is not installed. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=0a71c4f6-68de-40f7-94cf-74b73cbda37b Error: The 'IIS 6 Management Console' component is not installed. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=0a71c4f6-68de-40f7-94cf-74b73cbda37b Error: The 'IIS 7 Basic Authentication' component required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 Windows Authentication' component is required. Install the component via Server Manager. Click here for help... 
http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=41a25c5e-0d39-4e55-a1f0-7be885982236 Error: The 'IIS 7 .NET Extensibility' component is required. Install the component via Server Manager. Click here for help... http://go.microsoft.com/fwlink/?linkid=30939&l=en&v=ExBPA.14&id=5f29a861-f472-4f11-a23a-04155373f5ed Error: This computer is not part of a Windows domain. Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Error: The user is not logged on to a Windows domain Click here for help... http://technet.microsoft.com/en-US/library/ms.exch.err.default(EXCHG.141).aspx?v=14.1.218.11&e=ms.exch.err.Ex28883C&l=0&cl=cp Warning: This computer requires the Microsoft Office 2010 Filter Packs. Please install the software from http://go.microsoft.com/fwlink/?LinkID=191548 Warning: Setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server 10.1.10.1. Elapsed Time: 00:00:14

    Read the article

  • Brainless Backups

    - by Jesse
I'm a software developer by trade, which means to my friends and family I'm just a "computer guy." It's assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I can also do all of this blindly over the phone or after hearing a five-to-ten-word description of the problem over dinner ;-) In my position as CIO of my friends and family, I've been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It's not a great situation for anyone, and it's always at these times that the importance of backups becomes so clear.

Several months back a friend of mine found himself in this situation. The hard drive on his 8-year-old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn't enough… you actually have to copy your stuff over every once in a while!

The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you're talking about users at home, however, the need for these backups is often understood far too late. Most typical users can't be expected to remember to back up their data regularly, and they also don't always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here's why I like them:

They're affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite's unlimited storage plan for $50-$60 per year is the best value around.

They're easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I've never used Mozy, but I imagine it's similarly painless to get up and running.

Backups are automatically "off-site": A backup that is sitting on an external hard drive right next to your computer is great, but it might not protect against flood damage, a power surge, or other disasters in that single location. These services exist "in the cloud," so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the 3rd party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right.

Backups are "brainless": What I like most about services like these is that they work "automagically" in the background, watching for files to be updated and automatically backing up those changes. There's no need to remember to plug in that external drive and copy your data over.
Since starting to recommend these services to my friends and family, I find myself wearing my "data recovery" hat far less often. The only way backups are effective for the standard computer user is if they're completely automatic. Backups need to be brainless, or they just won't work.

    Read the article

  • A View from the Top – Jan Ackerman (VP APAC Recruiting)

    - by user769227
This week, Headhunt Magazine in Singapore took the opportunity to publish an interview with Jan Ackerman, who is Vice President for Recruitment for Asia Pacific here at Oracle. The link to the online interview can be found here. Below is the interview in full as it was published in Headhunt Magazine.

A View from the Top – Jan Ackerman
Written by HeadHunt on August 16, 2012
By Susheela Menon

Jan Ackerman is the Vice President for Recruiting in Asia Pacific and Japan at Oracle.

Which particular personal trait do you attribute your professional success to?
Perseverance has been the most important trait that has attributed to my professional success. Endurance and perseverance combined to win in the end has always been a great credo. I find that this trait carries through in my professional as well as my personal life. I enjoy sport fishing and find that perseverance with a great deal of patience in this hobby is critical to the overall enjoyment and success in this sporting activity. In the same way, this doggedness – steadfastness with persistence – and tenacity toward an unyielding course of action has served me well in reaching goals and thus greater success.

What's the biggest challenge you have faced in your career so far?
I have to constantly keep pace with ever-changing technology in my career. The industry changes rapidly and requires me to stay on top of the latest trends and advancements. Outside of work, I like to develop software as a hobby, and in order to ensure that what I am developing will meet what the business needs, I have to continually innovate and stay current on the latest trends in the industry to deliver a solution that will delight the end-user.

Best career advice you have ever received.
Always be forthright and honest with your customers and peers; mixed with a "Can Do" attitude, a great and fulfilling career can be yours to have and hold.

What makes Oracle a great place to be in?
The freedom to innovate and pave new avenues of success is one of the greatest things about working here at Oracle. We are always looking to grow and improve our business for our customers and we are always adapting to present and future industry demands. This means we are always looking to change, to perform better and to do things differently. All these create a culture and spirit of innovation and success.

What motivates you to be in the HR sector?
I really like working with and helping people. HR is all about "the people" in the organisation, and staying focused every day on making things better for the Oracle team gives me a great deal of happiness.

Describe your leadership style.
I am very direct and goal-oriented. I provide ideas and guidance and then give the team all the freedom they need to reach a successful outcome. I can also be a very "roll up your sleeves" kind of manager when the task needs a bit of a push.

What's the biggest business challenge you see in your industry right now?
The ability to keep pace with all the convergence in the industry and to continue to stay focused on delivering top talent to serve Oracle's customers well. Our unique Recruiting Model has served us well in meeting these needs. We are well placed in this goal and look forward to maintaining Oracle's leadership role in the industry.

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts.

There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.

The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.

Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one?

All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.

    Read the article

  • A Forming Repository of Script Samples for Automating Windows Server 2012 and Windows 8

    - by Jialiang
Compared with Windows Server 2008/R2, which provides about 230 cmdlets, Windows Server 2012 beats that by a factor of over 10, shipping ~2,430 cmdlets. You can automate almost every aspect of the server. The new PowerShell 3.0, like Windows Server 2012, has a ton of new features. In this move toward script-centric automation, Microsoft All-In-One Script Framework (AIOSF) is ready to support IT Pros with many new services and offerings coming this year. We sincerely hope that the IT community will benefit from the effort.

Here is the first one among our new services and offerings: The team is preparing a large set of Windows 8 / Windows Server 2012 script samples based on frequently asked IT tasks that we collect in TechNet forums and support calls to Microsoft. Because the script topics come from frequently asked IT tasks, we hope that these script samples can be helpful to many IT Pros worldwide. With the General Availability of Windows Server 2012, we release the first three Windows Server 2012 / Windows 8 script samples today.

Get Network Adapter Properties in Windows Server 2012 and Windows 8 (PowerShell)
http://gallery.technet.microsoft.com/scriptcenter/Get-Network-Adapter-37c5a913
Description: This script can be used to get network adapter properties and advanced properties in Windows Server 2012 and Windows 8. It combines the outputs of Get-NetAdapter and Get-NetAdapterAdvancedProperty. It can generate a report of network adapter configuration settings.
Use Scenarios: In the real world, IT administrators are required to check the configuration of network adapters after the deployment of new servers. One typical example is the duplex setting of network adapters. IT administrators also need to maintain a server list that contains network adapter configuration settings on a regular basis. Before Windows Server 2012, IT administrators often found these tasks difficult to handle.
Acknowledgement: Thanks Greg Gu from AIOSF for collecting this script topic and writing the script sample. Thanks James Adams (Microsoft Premier Field Engineer) for reviewing the script sample and ensuring its quality.

How to batch create virtual machines in Windows Server 2012 (PowerShell)
http://gallery.technet.microsoft.com/scriptcenter/How-to-batch-create-9efd1811
Description: This PowerShell script illustrates how to batch-create multiple virtual machines based on a comma-delimited file by using PowerShell 3.0 in Windows Server 2012.
Use Scenarios: IT admins need to batch-create virtual machines in Windows Server 2012, but may know only a few commands due to a lack of programming knowledge. Although Hyper-V provides a set of cmdlets within Windows PowerShell, IT admins are reluctant to use them beyond the few simple commands that are widely used.
Acknowledgement: Thanks Anders Wang from AIOSF for collecting this script topic and writing the script sample. Thanks Christopher Norris for reviewing the script sample and ensuring its quality before publishing.

Remove Windows Store Apps in Windows 8 (PowerShell)
http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4
Description: This script can be used to remove multiple Windows Store Apps from a user account in Windows 8. It provides a list of installed Windows Store applications. You can specify the application IDs, and remove them all at once.
Use Scenarios: 1. In Windows 8, you can remove a single Windows Store App by right-clicking the tile in the Start menu and choosing the uninstall command.
Remove Windows Store Apps in Windows 8 (PowerShell)
http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4
Description: This script removes multiple Windows Store apps from a user account in Windows 8. It provides a list of installed Windows Store applications; you can specify the application IDs and remove them all at once (a minimal sketch of the underlying cmdlets follows).
Use Scenarios:
1. In Windows 8, you can remove a single Windows Store app by right-clicking its tile in the Start menu and choosing the uninstall command. However, no command is provided for removing multiple Windows Store apps all at once. If you want to do so, you can use this script sample.
2. Sometimes a Windows Store app may crash in Windows 8. Even if you can successfully uninstall and reinstall it, the app may still crash after the reinstallation. In this situation, you can use this script to remove such apps cleanly.
Acknowledgement: Thanks to Edward Qi from AIOSF for collecting the script idea and composing the script sample, and to James Adams (Microsoft Premier Field Engineer) for reviewing the script sample and ensuring its quality.
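For a sense of the cmdlets the removal sample builds on, here is a minimal sketch, not the published sample itself; the wildcard pattern is only an invented example:

    # List the Windows Store apps installed for the current user
    Get-AppxPackage | Select-Object Name, PackageFullName

    # Remove every installed package whose name matches the (example) pattern
    Get-AppxPackage *BingNews* | Remove-AppxPackage

The published sample is more complete; see the gallery link above for the full script.

This is just the beginning, and more and more script samples are coming. You can follow our blog (http://blogs.technet.com/b/onescript) to get the latest customer-driven script samples for Windows Server 2012 and Windows 8.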

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed, which requires Page_Load to fire, which means the page can't be cached at that point.

A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if(!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page:

    if (!IsPostBack)
    {
        ...
        SetCacheSettings();
        ...
    }

    protected void SetCacheSettings()
    {
        // Revalidate every cached copy of this page on each request
        Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
        Response.Cache.SetExpires(DateTime.Now.AddHours(1));
        Response.Cache.SetSlidingExpiration(true);
        Response.Cache.SetValidUntilExpires(true);
        // Cache on the server only; never on the browser or proxies
        Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
    }

AddValidationCallback sets up an HttpCacheValidateHandler method called Validate, which runs whenever a cached copy of the page is requested. The Validate method signature is standard for this delegate type:

    public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
    {
        string visited = context.Request.QueryString["v"];
        if (visited != null && "1".Equals(visited))
        {
            status = HttpValidationStatus.IgnoreThisRequest; // force a page load
        }
        else
        {
            status = HttpValidationStatus.Valid; // load from cache
        }
    }

I am using the HttpValidationStatus values IgnoreThisRequest and Valid, which respectively force the Page_Load event method to run or allow the page to load from cache. Which one is set depends on a value in the querystring. That value is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not:

    bool hasData = HasPageBeenVisited(url);
    if (hasData)
    {
        url += VISITED;
    }
    Response.Redirect(url);

The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent; a purely hypothetical sketch follows.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.
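Since the real check is system-dependent, the following is only a hypothetical sketch of what HasPageBeenVisited might look like; the session-key helper it calls is invented for illustration:

    // Hypothetical sketch only -- the real implementation is system-dependent.
    // Lives in the base page class, so the Session property is available.
    private bool HasPageBeenVisited(string url)
    {
        // Map the destination url to the Session key of one of its required
        // data fields (GetRequiredFieldKey is an invented helper).
        string key = GetRequiredFieldKey(url);
        return Session[key] != null;
    }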
This logic lives in the "Next" and "Previous" button click event methods for two reasons: 1) the Validate method is static, which prevents it from accessing non-static data such as the data fields of a particular page, and 2) at the time the Validate method runs, the data has either not yet been deserialized from Session state or is simply not available (a different AppDomain, perhaps?): every time I accessed Session state from the Validate method, it was empty.

    Read the article

  • JRE not working on Firefox

    - by user1488595
I am attempting to get JRE 7 to run in Firefox on Ubuntu 12.04, 32-bit. I've tried to follow this article: www.liberiangeek.net/2012/04/install-oracle-java-runtime-jre-7-in-ubuntu-12-04-precise-pangolin/. I've also tried this repository: www.webupd8.org/2012/06/how-to-install-oracle-java-7-in-debian.html, as well as installing the JDK, which contains the JRE, by following this article: www.liberiangeek.net/2012/04/install-oracle-java-jdk-7-in-ubuntu-12-04-precise-pangolin/.

With every installation method above, I get the following error in the Firefox console when I run an applet:

java.io.IOException: Cannot run program "/usr/lib/jvm/jre1.7.0/bin/java": error=13, Permission denied
    at java.lang.ProcessBuilder.start(Unknown Source)
    at sun.plugin2.jvm.JVMLauncher.start(Unknown Source)
    at sun.plugin2.main.server.JVMInstance.startImpl(Unknown Source)
    at sun.plugin2.main.server.JVMInstance.start(Unknown Source)
    at sun.plugin2.main.server.JVMManager.getOrCreateBestJVMInstance(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startAppletImpl(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.maybeStartApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.setWindow(Unknown Source)
Caused by: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(Unknown Source)
    at java.lang.ProcessImpl.start(Unknown Source)
    ... 10 more
java.io.IOException
    at sun.plugin2.main.server.JVMInstance.startImpl(Unknown Source)
    at sun.plugin2.main.server.JVMInstance.start(Unknown Source)
    at sun.plugin2.main.server.JVMManager.getOrCreateBestJVMInstance(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startAppletImpl(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.maybeStartApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.setWindow(Unknown Source)
Caused by: java.io.IOException: Cannot run program "/usr/lib/jvm/jre1.7.0/bin/java": error=13, Permission denied
    at java.lang.ProcessBuilder.start(Unknown Source)
    at sun.plugin2.jvm.JVMLauncher.start(Unknown Source)
    ... 8 more
Caused by: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(Unknown Source)
    at java.lang.ProcessImpl.start(Unknown Source)
    ... 10 more
Exception in thread "main" java.lang.RuntimeException: java.io.IOException
    at sun.plugin2.main.server.JVMManager.getOrCreateBestJVMInstance(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startAppletImpl(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.JVMManager.startApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.maybeStartApplet(Unknown Source)
    at sun.plugin2.main.server.MozillaPlugin.setWindow(Unknown Source)
Caused by: java.io.IOException
    at sun.plugin2.main.server.JVMInstance.startImpl(Unknown Source)
    at sun.plugin2.main.server.JVMInstance.start(Unknown Source)
    ... 6 more
Caused by: java.io.IOException: Cannot run program "/usr/lib/jvm/jre1.7.0/bin/java": error=13, Permission denied
    at java.lang.ProcessBuilder.start(Unknown Source)
    at sun.plugin2.jvm.JVMLauncher.start(Unknown Source)
    ... 8 more
Caused by: java.io.IOException: error=13, Permission denied
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(Unknown Source)
    at java.lang.ProcessImpl.start(Unknown Source)
    ... 10 more

I've tried:

    sudo chmod 777 /usr/lib/jvm/jre1.7.0/bin/java

It did not work. I also tried to run Eclipse, which requires the JRE. It did not work originally (it works now), returning the following error:

java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
    no swt-gtk-3740 in java.library.path
    no swt-gtk in java.library.path
    Can't load library: /home/username/.swt/lib/linux/x86/libswt-gtk-3740.so
    Can't load library: /home/usename/.swt/lib/linux/x86/libswt-gtk.so
    at org.eclipse.swt.internal.Library.loadLibrary(Library.java:285)

By running "ln -s /usr/lib/jni/libswt-* ~/.swt/lib/linux/x86/" (thanks, stackoverflow.com/questions/10970754/cant-open-eclipse-in-ubuntu-12-04-java-lang-unsatisfiedlinkerror-could-not-l), Eclipse works again.

I have been googling this for days without luck. Any response would be appreciated.

    Read the article

  • Attend Onsite Product Usability Testing or Tour Oracle HQ Usability Labs during Oracle OpenWorld 2014

    - by gaamoth-Oracle
    By Gozel Aamoth, Oracle Applications User Experience

Oracle OpenWorld is the world's largest business and technology event, featuring thousands of sessions, including keynotes, technical sessions, demos, and hands-on labs. Hundreds of exhibitors will be sharing what they're bringing to Oracle technology at this year's conference, held in downtown San Francisco from Sept. 29-Oct. 2. If you are an Oracle customer or partner planning to attend this annual event, there are several ways to meet face-to-face with members of the Oracle Applications User Experience (UX) team. We'd like to invite you to sign up for a usability feedback session, or hop on one of our special chartered buses to tour Oracle HQ's usability labs. Here's more information about these exclusive events.

Onsite product usability testing: Give us your feedback!
(Photo: Product usability testing in progress at Oracle OpenWorld 2013.)
The Oracle Applications User Experience team will host an onsite usability lab at Oracle OpenWorld 2014, where Oracle customers and partners can participate in a usability feedback session. Usability experts, product managers, and user interface designers have teamed up to provide Oracle customers and partners with the opportunity to contribute to and influence application design and direction while test-driving Oracle's next-generation applications. Your feedback will affect the existing and future usability of Oracle applications, and help us develop applications that are intuitive and easy to use.

What will we test? Participants will get a preview of proposed Oracle product designs for Oracle Human Capital Management Cloud and Oracle Sales Cloud, Oracle Fusion applications for Procurement and Supply Chain, Oracle E-Business Suite, PeopleSoft applications, Social Relationship Management, BI applications, Fusion Middleware, and more.

Who can participate*? Regardless of your current job title, we have a session that might interest you. These one-on-one feedback sessions are popular, and space is very limited, so contact us today to learn more.
Dates: Sept. 29 - Oct. 1, 2014
Location: InterContinental Hotel, San Francisco, CA
Time: Advance sign-up is required for this event. RSVP now.
If you have questions about this event, please contact Angela Johnston.

Take a tour of the Oracle HQ Usability Lab during OpenWorld 2014
(Photo: Members of the Applications UX team lead Oracle OpenWorld lab tour attendees to the usability labs at Oracle headquarters in Redwood City, CA.)
The Applications User Experience team will be offering a limited number of usability lab tours at Oracle headquarters in Redwood City, Calif., during Oracle OpenWorld 2014. Come take a look behind the scenes of Oracle's research and development work on Thursday, Oct. 2, or Friday, Oct. 3. Receive an exclusive look into how Oracle tests application designs, and see the direction Oracle's enterprise applications are heading, including demos of designs for devices such as tablets and smartphones. Round-trip transportation will be provided; pick-up and drop-off is at the InterContinental Hotel in San Francisco, next to Moscone West. Spots are limited, so sign up today!

How to reserve your spot: To RSVP, sign up here. For additional questions, send an e-mail to Jeannette Chadwick. To learn more about our team's presence at Oracle OpenWorld this year, please visit our website, UsableApps.

*Participation requires that your company or organization has a Customer Participation Confidentiality Agreement (CPCA) on file. 
If your company or organization does not have a CPCA on file, we will start this process.

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
A recent Simple-talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate about which techniques should, and should not, be deemed acceptable in your search for fast SQL code.

Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update":

    SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft

This form of the UPDATE statement, @variable = column = expression, is documented, and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated by this technique because, in relational theory, a table doesn't have a natural order to its rows, and the UPDATE statement has no means of specifying one. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique relied on the fact that the UPDATE statement, without a WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn't one.
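For readers who have not seen the pattern, here is a minimal sketch of the clustered-index form. This is not Peter's actual entry; the table and data are invented to match the snippet above:

    -- Quirky-update running total: a sketch, not Peter Larsson's entry
    CREATE TABLE #MonthlyStats
    (
        StatsMonth   date NOT NULL PRIMARY KEY CLUSTERED, -- the order the technique relies on
        PeopleJoined int  NOT NULL,
        PeopleLeft   int  NOT NULL,
        Subscribers  int  NULL
    );

    INSERT INTO #MonthlyStats (StatsMonth, PeopleJoined, PeopleLeft)
    VALUES ('2010-01-01', 100, 10),
           ('2010-02-01',  80, 25),
           ('2010-03-01',  60, 15);

    DECLARE @Subscribers int;
    SET @Subscribers = 0;

    -- No WHERE clause, and no way to specify ORDER BY: the running total is
    -- correct only if rows are updated in clustered-index order, which is
    -- observed behavior rather than a documented guarantee.
    UPDATE #MonthlyStats
    SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft;

    SELECT StatsMonth, Subscribers FROM #MonthlyStats ORDER BY StatsMonth;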
Peter wasn't satisfied with this, and so used the ingenious device of assuring the order of the UPDATE through an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism or partitioning.

Many argue, with validity, that this reliance on an order that can never be guaranteed is an abuse of basic relational principles, and so is bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances.

Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that banning it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations.

Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers for as long as Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. After all, a table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request that the database engine perform the update in the conventional order. Then perhaps everyone would be satisfied.

Cheers, Tony.

    Read the article

  • Three Key Tenets of Optimal Social Collaboration

    - by kellsey.ruppel
Today's blog post comes to us from John Bruswick! This post is an abridged version of John's white paper, in which he discusses three principles for optimizing social collaboration within an enterprise.

By [email protected], Oracle Principal Sales Consultant

Effective social collaboration is actionable, deeply contextual, and inherently derives its value from business entities outside of itself. How does an organization begin the journey from traditional, siloed collaboration to natural, business-entity-based social collaboration? Successful enablement of enterprise social collaboration requires that organizations embrace the following tenets and understand that traditional collaborative functionality has inherent limits; it is innovation and integration in accordance with these tenets that will provide net-new efficiency benefits.

Key Tenets of Optimal Social Collaboration

Leverage a Ubiquitous Social Fabric - Collaborative activities should be supported through a ubiquitous social fabric that provides a personalized experience, broadcasts key business events, and connects people and business processes. This supports the education of participants working in and around a specific business entity, who will benefit from an implicit capture of tacit knowledge, and provides continuity between participants. In the absence of this ubiquitous platform, activities can still occur, but they are essentially siloed, causing frequent duplication of effort across similar tasks, with critical tacit knowledge eluding capture.

Supply Continuous Context to Support Decision Making and Problem Solving - People generally engage in collaborative behavior to obtain a decision or the resolution of a specific issue. The time to achieve resolution is referred to as "solve time". Users have traditionally been forced to switch or "alt-tab" between business systems and synthesize their own context across disparate systems and processes. The constant loss of context forces end users to exert a large amount of effort that could be spent on higher-value problem solving.

Extend the Collaborative Lifecycle into the Back Office - Beyond the solve time of decision-making efforts, additional time is spent formalizing, in a system of record, the resolution that collaboration produced. Extending collaboration to capture an explicit decision maximizes efficiency, creating a closed circuit for a particular thread. This type of structured action may exist today within your organization's customer support system, around opening, solving, and closing support issues, but it generally does not extend to sales-focused collaborative activities.

Excelling in the Unstructured Future

We will always have to deal with unstructured collaborative processes within our organizations. Regardless of the participants and the nature of the collaborative process, two things are certain: the origination and end points are generally known and relate to a business entity, perhaps a customer, opportunity, order, shipping location, or product. Imagine the benefits if an organization's key business systems supported a social fabric, provided continuous context, and extended the lifecycle of collaborative decision making to include output into back-office systems of record. The technical hurdle to embracing optimal social collaboration would fall away, leaving the company with an opportunity to focus on and refine how processes are approached. 
Time and resources previously required could then be reallocated to innovation that supports competitive differentiation unique to your business.

How can you achieve optimal social collaboration? Oracle Social Network enables business users to collaborate with each other using a broad range of collaboration styles, and it integrates data from a variety of sources and business applications, allowing you to achieve optimal social collaboration.

Looking to learn more? Read John's white paper, where he discusses in further detail the three principles for optimizing social collaboration within an enterprise. 

    Read the article

  • Java in Flux: Utopia or Deuteranopia?

    - by Tori Wieldt
What a difference a year makes, indeed. Steve Harris, Senior VP, App Server Dev, Oracle, and Adam Messinger, VP, Fusion Middleware Group, Oracle, presented an informative keynote at the TheServerSide Java Symposium today. With a title like "Java in Flux: Utopia or Deuteranopia?" you know things are going to be interesting (see Aeon Flux if you don't get the title reference).

What a Year
They started with a little background, explaining that reactions to Oracle's acquisition of Sun (and therefore Java) one year ago varied greatly, from "Freak Out!" to "Don't Panic." From the Oracle perspective, being the steward of and a key contributor to Java requires a lot of sausage making. They admitted to Oracle's fair share of Homer Simpson-esque "D'oh" moments in the past year, which were complicated by Oracle's communication style. "Oracle has a tradition of saying a few things and sticking by them, in contrast to Sun, who was much more open," Adam explained. "We laid out the Java roadmap and are executing on it, and we hope that speaks to our commitment."

Java SE
Adam talked about taking a long-term perspective on the Java language (20+ years): letting ideas mature in more experimental languages, then bringing them into Java. Current priorities include JVM convergence (getting the best features of JRockit into HotSpot), support for parallel/multi-core programming, and, of course, all the improvements in JDK7. The JDK7 Developer Preview is underway (please download now and report bugs!). The Oracle development team is also working on Lambda and modularity (Jigsaw) for SE 8. Less certain, but also under discussion, are improvements for Java SE 9. Adam is thinking of it as a "back to basics" release; he mentioned reworking JNI, improving data integration, and improved device support.

Java EE
To provide context about Java EE, Steve said Java EE was great at getting businesses onto the internet. The success of Java EE resulted in an incredible expansion of the middleware marketplace for developers and vendors. But with success came more: Java EE kept piling on capabilities, and that created excess baggage. Doing simple things was no longer so simple. That's where the Java community is so valuable: "When Java EE was too complex and heavyweight, many people were happy to tell us what we were doing wrong and popularize solutions," Steve explained. Because of that feedback, the Java EE teams focused on making things simple again: POJOs and annotations, and leveraging changes in Java SE. Steve said that "innovation doesn't happen in expert groups, it happens on the ground where developers are solving problems," and platform stewards need to pay attention and take advantage of the changes that are taking place.

Enter the Cloud
"Developers are restless; they want cloud functionality from their own IT dept," Steve explained. With the cloud, the scope of the problem has expanded to include the data center itself, with multiple tenants. To move forward, existing APIs in Java EE need to be updated to be tenant-aware and service-enabled, and EE needs to support various styles of deployment. The goal is to get all that done in Java EE 8.

Adam questioned Steve about timing and schedule. "Yes, the schedule is aggressive, but it'll work," Steve said. Then Adam asked about modularization: if Java SE 8 comes out at the end of 2012, when can Java EE deliver modularization? Steve suggested that key stakeholders could come up with some pre-SE 8 agreement on how to expose the metadata about modules. 
He then alluded to Mark Reinhold and John Duimovich's keynote at EclipseCON next week. Stay tuned.

Evil Master Plan
In conclusion, Adam finally admitted to Oracle's Evil Master Plan: 1) invest in and improve Java SE and EE, 2) collaborate with the community, and 3) broaden the marketplace for Java development. Bwaaaaaaaaahahaha! <rubs hands together>

Key Links
JDK7 Developer Preview: http://jdk7.java.net/preview/
Oracle Technology Network: http://www.oracle.com/technetwork/java/index.html
TheServerSide Java Symposium: http://javasymposium.techtarget.com/
"Utopia or Deuteranopia?": http://en.wikipedia.org/wiki/Aeon_Flux

    Read the article

< Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >