Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.


  • Principles of Big Data by Jules J Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It deserves a place, if it does not yet have one, among the fundamentals of Big Data as a field of science, and I highly recommend it to anyone in Big Data practice. I confess this book is one of my best reads this year, for a number of reasons. The book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, sometimes die. More importantly, it is filled with valuable advice and an accurate, even overwhelming (in a good way) amount of references, and the author does not even stop there: there are numerous technical excerpts, links and examples that let you quickly accomplish many daunting tasks, or at least make you aware of what you need to do as a data practitioner (excuse my use of the word practitioner; I just did not find a better term for everyone who faces Big Data). Be aware that Jules Berman's background is in medicine, so naturally the book discusses that subject a lot; I believe it is very dear to the author's heart. This does not make the book any less significant; quite the opposite. If there is one area of science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science: let's make cancer history! On a personal note, as a database and BI professional, I found the book helped me better understand the motives behind Big Data initiatives, and the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered. This made me very curious and tempted to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go deeper into the most popular of them. My favorite parts of the book? Well, all of them, actually, but especially chapter 9, Analysis, which is very close to my heart; the real reason is that it let me see what I do with data from a different angle. Next comes "Special Considerations"; the two are logical companions. The writing is accessible at all levels, and I had no technical problem reading it in ebook format on my 8" tablet or a large-screen monitor. If I were asked to say at least something negative, I would note that the book's first part initially reads like academic material, though the style relaxes as the book progresses. I am impressed with Jules' ability to use several programming languages and OSS tools: bravo! And I agree, it is not too hard to grasp at least the principles of a modern programming language, which seems to be becoming a de facto knowledge standard for any modern human being. So grab a copy of this book and read it end to end; it will shield you from mistakes at any stage of your Big Data initiative, and it will also help you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • 6 Reasons Why You Can’t Move Your Cell Phone To Any Carrier You Want

    - by Chris Hoffman
    You can buy a laptop or Wi-Fi tablet and use it on Wi-Fi anywhere in the world, so why are cell phones and devices with mobile data not portable between different cellular networks in the same country? Unlike with Wi-Fi, there are many different competing cellular network standards, both around the world and within countries. Cellular carriers also like locking you to their specific network and making it difficult to move. That’s what contracts are for.

    Phone Locking

    Many phones are sold locked to a specific network. When you buy a phone from a cellular carrier, they often lock that phone to their network so you can’t take it to a competitor’s network. That’s why you’ll often need to unlock a phone before you can move it to a different cellular provider or take it to a different country and use it on a local provider instead of roaming. Cellular carriers will generally unlock your phone for you as long as you’re no longer in a contract with them. However, unlocking a cell phone you’ve paid for without your carrier’s permission is currently a crime in the USA.

    GSM vs. CDMA

    Some cellular networks use the GSM (Global System for Mobile Communications) standard, while some use CDMA (Code Division Multiple Access). Worldwide, most cellular networks use GSM. In the USA, both GSM and CDMA are popular: Verizon, Sprint, and other carriers that use their networks use CDMA, while AT&T, T-Mobile, and other carriers that use their networks use GSM. These are two competing standards and are not interoperable. This means you can’t simply take a phone from Verizon to T-Mobile, or from AT&T to Sprint. These carriers have incompatible phones.

    CDMA Restrictions

    CDMA is more restricted than GSM. GSM phones have SIM cards: simply open the phone, pop out the SIM card, and pop in a new SIM card to switch carriers. (In reality, it’s more complicated thanks to phone locking and the other factors covered here.) CDMA phones don’t have removable modules like this. All CDMA phones ship locked to a specific network, and you’d have to get both your old carrier and your new carrier to cooperate to switch a phone between them. In reality, many people just consider CDMA phones eternally locked to a specific carrier.

    Frequencies

    Different cellular networks throughout the USA and the rest of the world use different frequencies. These radio frequencies have to be supported by your phone’s hardware, or your phone simply can’t work on a network using those frequencies. Many GSM phones support three or four frequency bands: 900/1800/1900 MHz, 850/1800/1900 MHz, or 850/900/1800/1900 MHz. These are sometimes called “world phones” because they allow easier roaming. Supporting many bands allows the manufacturer to produce a phone that will work on most GSM networks in the world and allows their customers to travel with those phones. If your phone doesn’t support the appropriate frequencies, it won’t work on certain networks.

    LTE Bands

    When it comes to newer, faster LTE networks, different frequencies are still a concern. LTE frequencies are generally known as “LTE bands.” To use a smartphone on a certain LTE network, that smartphone has to support that LTE network’s frequency. Different models of phones are often created to work on different LTE networks around the world. However, phones are supporting more and more LTE networks and becoming more interoperable over time.

    SIM Card Sizes

    The SIM cards used in GSM phones come in different sizes. Newer phones use smaller SIM cards to save space and be more compact. This isn’t a big obstacle, as the different sizes of SIM cards (full-size SIM, mini-SIM, micro-SIM, and nano-SIM) are actually compatible. The only difference between them is the size of the plastic card surrounding the SIM’s chip; the actual chip is the same size across all the SIM cards. This means you can take an old SIM card and cut the plastic off until it becomes a smaller-size SIM card that fits in a modern phone. Or, you can take a smaller-size SIM card and insert it into a tray so that it becomes a larger-size SIM card that fits in an older phone. Be aware that it’s very possible to damage your SIM card and make it not work properly by cutting it to the wrong dimensions. Your cellular carrier will often be able to cut your SIM card for you, or give you a new one, if you want to use an old SIM card in a new phone. Hopefully they won’t overcharge you for this service.

    Be sure to check what types of networks, frequencies, and LTE bands your phone supports before trying to move it between networks. You may have to buy a new phone when moving between certain cellular carriers.

    Image Credit: Morgan on Flickr, 22n on Flickr

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: for an exclusive Oracle discount, enter promo code DJMXZ370. Early-Bird Registration is now open with special pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops, all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Big Data with CEP and SOA - September 25, 2012 - 14:15
    Speakers: Markus Zirn (Oracle) and Baz Kuthi (Avocent)

    The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, distributed caching, event-driven networks, SOA composites, Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, as well as distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture.

    Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of "The BPEL Cookbook" (rated best book on Service-Oriented Architecture in 2007) as well as "Fusion Middleware Patterns". Previously, he was a management consultant with Booz Allen & Hamilton's High Tech practice in Duesseldorf as well as San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Master's in Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France.

    KEYNOTES & SPEAKERS

    More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS

    • Cloud Computing Architecture & Patterns
    • New SOA & Service-Orientation Practices & Models
    • Emerging Service Technology Innovation
    • Service Modeling & Analysis Techniques
    • Service Infrastructure & Virtualization
    • Cloud-based Enterprise Architecture
    • Business Planning for Cloud Computing Projects
    • Real World Case Studies
    • Semantic Web Technologies (with & without the Cloud)
    • Governance Frameworks for SOA and/or Cloud Computing Projects
    • Service Engineering & Service Programming Techniques
    • Interactive Services & the Human Factor
    • New REST & Web Services Tools & Techniques

    Oracle Specialized SOA & BPM Partners

    Oracle Specialized partners have proven their skills by certifications and customer references.
    To find a local Specialized partner, please visit http://solutions.oracle.com.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: Markus Zirn,SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called Can Google break the computer language barrier? In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. He says: "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models." The Translation Guy has a further discussion on this, called Google Translate is Finished. He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks." Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or its detractors either for that matter. Perhaps it's the season of goodwill? What is clear to me, however, is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they already have, models that will also scale effectively where others didn't. So too will they harness human input, which really is the way to go to train MT in the quality direction. But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge human-translated corpus drove the gains for Google Translate, and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely. In the enterprise applications space, where we have some control over content, this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user-generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment, in the DRAM of the server, and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology.

    As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized. This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory; the more, the better. With 3 TB and 6 TB total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more of your database, if not all of it.

    Figure: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth for the server dictates the rate at which data can be stored in and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 also provides extreme memory bandwidth in an even more compact form factor, with over half a terabyte per second, giving customers scalability and choice depending on the size of the database.

    Beyond the memory subsystem, Oracle's Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash, while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle's new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized.

    Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
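    To make the "select which parts" idea concrete, here is a minimal sketch of the kind of DDL involved (the table and column names are hypothetical, and exact options vary by release):

        -- Populate a fact table into the in-memory column store,
        -- loading it eagerly after instance startup
        ALTER TABLE sales INMEMORY PRIORITY HIGH;

        -- Keep a rarely-queried column out of the in-memory copy
        ALTER TABLE sales INMEMORY NO INMEMORY (prod_notes);

        -- Check what has actually been populated
        SELECT segment_name, populate_status, inmemory_size
        FROM v$im_segments;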
To learn more about Sun Server X4-4 and Sun Server X4-8, you can find more details including data sheets and white papers here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software.  He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers. 

    Read the article

  • OWB 11gR2 – OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single, integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost), which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This now means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of the logical model at deployment time. As a result, data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize them either relationally or as an Oracle database OLAP implementation; both 10g and 11g formats are supported, based on a configuration option. The wizard provides a good first-cut definition, and the objects can be further refined in the editor. Both wizards let you choose the implementation; to deploy to OLAP in the database, select MOLAP: multidimensional storage. You will then be asked which levels and attributes are to be defined; by default the wizard creates a level-based hierarchy, while parent-child hierarchies can be defined in the editor. Once the dimension or cube has been designed, there are special mapping operators that make it easy to load data into the objects; below we load a constant value for the total level and the other levels from a source table. Again, when the cube is defined using the wizard, we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example, one of the measures is the percentage difference from a year ago, which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, from selecting existing template types such as moving averages to defining true custom expressions. The 11g OLAP option now supports percentage-based summarization (the amount of data to precompute and store); this is available from the 'cost based aggregation' option in the cube's configuration. Ensure all measure-dimension level-based aggregation is switched off (on the cube-dimension panel); previously, level-based aggregation was the only option. The 11g-generated code now uses the new unified API, as you see below. To generate the code, OWB needs a valid connection to a real schema; this was not needed before 11gR2 and is a new requirement, since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed, then we get to the fun stuff! How can we analyze the data?
    One option, which is powerful and at many users' fingertips, is using Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel at your fingertips, such as the conditional formatting features, for example. That's a very quick run-through of OWB 11gR2 with respect to Oracle 11g OLAP integration and reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.
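    As a side note (not covered above, so treat it as a hedged sketch), you don't have to go through Excel at all: Oracle 11g also exposes deployed cubes relationally through the CUBE_TABLE function, so plain SQL can reach the same analytics. The cube name here is hypothetical:

        -- Query a deployed OLAP cube as a relational row source
        SELECT *
        FROM TABLE(CUBE_TABLE('SALESWH.SALES_CUBE'))
        WHERE ROWNUM <= 20;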

    Read the article

  • Oracle Database 12c Training and Certification: What’s in it for Me?

    - by KJones
    Oracle Database 12c has officially launched! Through Oracle University, our expert instructors can introduce you to the features and functions of this new Oracle Database 12c product. Through training courses and certification exam prep seminars, you can build up your database knowledge and apply this knowledge to advance your career.

    Already an Oracle Database Expert? Why Oracle Database 12c Training is still a Good Idea

    Oracle is the industry leader for database technology and takes the release of new products very seriously. We continue to listen to customer needs and add features and functionality to address those needs. Oracle Database 12c is no exception. The following areas have been greatly enhanced and should be considered for your additional training or certification:

    • Database for Cloud Computing
    • Compression and Information Lifecycle Management (ILM)
    • Improved Performance & Scalability
    • Extreme Availability
    • Security Defense in Depth
    • Manageability

    Oracle Certified Database Administrators Reap Career Rewards

    Becoming an expert user of database technology through Oracle University's certification program widens your skill set to demonstrate your expertise implementing the most advanced database technology available. By doing so, you'll make yourself a more marketable employee and candidate in the job market. Reasons to become an Oracle Certified Database Administrator of Oracle Database 12c:

    • The new Oracle Database 12c certifications emphasize more advanced skills that align with industry standards, resulting in an even more valuable credential for customers and partners.
    • The Oracle Certified Associate (OCA) for Oracle Database 12c centers upon certification objectives that measure IT professionals' day-to-day skills, along with your ability to manage challenges.
    • Building upon all of the competencies incorporated into Oracle's Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators.
    • The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts.
    • Oracle offers 12c upgrade paths for existing Oracle Certified Professionals (OCP) and Oracle Certified Masters (OCM).

    Database 12c Training and Certification: Built with Your Input

    When creating Oracle Database 12c training courses and certifications, we wanted to know which tasks are most important in a DBA's day-to-day work. Instead of assuming what those tasks might be, we decided to develop a job task analysis survey for DBAs. The response rate from DBAs from around the world was overwhelming! The survey focused on the following key database areas:

    • DBA Core Essentials
    • Database Storage
    • High Availability
    • Scalability
    • Networking
    • Security
    • Very Large Database Administration
    • Distributed Databases

    After conducting this survey, we took this specific feedback and used it to help mold the new Oracle Database 12c training and certification curriculum. The benefit to you? You now have access to Oracle Database 12c courses and certification exams that were created with your specific on-the-job tasks in mind.

    Explore Oracle Database 12c Training & Certification Today

    Investing in Oracle Database 12c training courses and certifications will help you develop a great deal of knowledge, experience and expertise.
    Explore our portfolio of offerings to determine which skills you need as a DBA to get up to speed on Oracle Database 12c technology. Questions or comments about the new Oracle Database 12c offerings? Let us know in the comments below. - Diana Gray, Principal Curriculum Product Manager, and Raza Siddiqui, Senior Principal Curriculum Product Manager

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from Phil's list. He has been putting together. Datatype mis-matches in predicates that rely on implicit conversion.(Plamen Ratchev) This is a great example poking holes in the whole theory of "If it works it's not broken" Queries will this probably will generally work and give the correct response. In fact, without careful analysis, you probably may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example: CREATE TABLE [dbo].[Page](     [PageId] [int] IDENTITY(1,1) NOT NULL,     [Title] [varchar](75) NOT NULL,     [Sequence] [int] NOT NULL,     [ThemeId] [int] NOT NULL,     [CustomCss] [text] NOT NULL,     [CustomScript] [text] NOT NULL,     [PageGroupId] [int] NOT NULL;  CREATE PROCEDURE PageSelectBySequence ( @sequenceMin smallint , @sequenceMax smallint ) AS BEGIN SELECT [PageId] , [Title] , [Sequence] , [ThemeId] , [CustomCss] , [CustomScript] , [PageGroupId] FROM [CMS].[dbo].[Page] WHERE Sequence BETWEEN @sequenceMin AND @SequenceMax END  Note that the Sequence column is defined as int while the sequence parameter is defined as a small int. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place. Using Correlated subqueries instead of a join   (Dave_Levy/ Plamen Ratchev) There are two main problems here. The first is a little subjective, since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic. You are taking much of the control away from the optimizer. Written properly, such a query may well out perform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundmental problem with any hint. Consider a query like this:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParams.Name , PageEffectParams.Value , ( SELECT EffectName FROM dbo.Effect WHERE EffectId = dbo.PageEffects.EffectId ) AS EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId  This can and should be written as:  SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript , PageEffectParams.Name , PageEffectParams.Value , EffectName FROM Page INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId INNER JOIN dbo.Effect ON dbo.Effects.EffectId = dbo.PageEffects.EffectId  The correlated query may just as easily show up in the where clause. It's not a good idea in the select clause or the where clause. Few or No comments. This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. 
If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments may explain why the sql is needed are good. Comments that explain where the sql is used are good. Comments that explain how tables are related should not be needed if the sql is well written. If they are needed, you need to consider reworking the sql or simplify your data model. Use of functions in a WHERE clause. (Anil Das) Calling a function in the where clause will often negate the indexing strategy. The function will be called for every record considered. This will often a force a full table scan on the tables affected. Calling a function will not guarantee that there is a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
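    For example, one remedy (a sketch with invented table and column names) is to precompute the function into a persisted computed column and index that, so the WHERE clause can seek instead of scan:

        -- Store the function result once, at write time
        ALTER TABLE dbo.Customer
            ADD EmailLower AS LOWER(Email) PERSISTED;

        CREATE INDEX IX_Customer_EmailLower
            ON dbo.Customer (EmailLower);

        -- Queries now filter on the indexed column directly
        SELECT CustomerId, Email
        FROM dbo.Customer
        WHERE EmailLower = 'someone@example.com';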

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get you familiar with how it works and which important areas of NuoDB you should learn. As we have already installed NuoDB, we will quickly start with two of its important areas: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen. On this screen you will find very important information about Domain and Database Settings. It is a common habit not to read what is written on the screen and to keep clicking Continue without reading; while we are familiar with most wizards, that way we can miss very important messages on the screen. Please note the following Domain Settings and Database Settings before clicking on Create Database.

    Domain Settings
    User: quickstart
    Password: quickstart

    Database Settings
    User: dba
    Password: goalie
    Database: test
    Schema: HOCKEY

    Once you click on the Create Database button, it will immediately start creating the sample database. First it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On success of creating the sample database, it will show the following screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password. Alternatively, you can enter "quickstart" twice, for both username and password; that works too. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:

    • Create a view of the entire domain
    • Add and remove databases
    • Start and stop NuoDB Transaction Engines and Storage Managers
    • Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out information about the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see various processes, CPU usage and other information. In the Processes section you can see that there are two different types of processes. The first process (where you can see the floppy drive icon) represents a running Storage Manager process, and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out its various administrative activities. I am personally impressed with their dashboard and its various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1’s keynote was all about Windows Phone 7 (WP7).  MIX day 2’s was a reminder that Microsoft has much more going on than a new mobile platform.  Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn’t overlook.  These included the forthcoming IE9, its new JavaScript compiling engine and support for HTML 5 that takes full advantage of the local PC resources, including the Graphics Processing Unit.  The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology) including almost-obsessive jQuery support.  That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so.  Then Scott Guthrie told us that Microsoft would be contributing code to the open source jQuery project. This is not your father’s Microsoft, it would seem. But to me, the crown jewels in today’s keynote were the numerous announcements around the Open Data Protocol (OData).  OData is nothing more than the protocol side of “Astoria” (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard.  The 2009 Professional Developers Conference (PDC) was Microsoft’s vehicle for first announcing OData, as well as project “Dallas,” an Azure-based cloud platform for publishing commercial OData feeds.  And we had already known about “bridges” for Astoria (and thus OData) for PHP and Java.  We also knew that PowerPivot, Microsoft’s forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data.  And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well. So what was left to announce?  How about OData clients for Palm webOS and Apple iPhone/Objective-C?  How about the release to open source of .NET’s OData client?  Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment?  Maybe even a Silverlight tool (code-named “Houston”) to create SQL Azure databases (and then publish them as OData) right in the browser?  And what if you could get at Netflix’s entire catalog in OData format?  You can – just go to http://odata.netflix.com/Catalog/ and see for yourself.  Douglas Purdy, who made these announcements, said “we want OData to work on as many devices and platforms as possible.”  After all the cross-platform OData announcements made in about a half year’s time, it’s hard to dispute this. When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company’s heritage.  I’ll be discussing OData at length in my April Redmond Review column.  I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more.  And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing.  The phone, if it’s successful, will be so because it’s a good developer platform play.  And developer platforms (as well as their creators) are most successful when they have a good data strategy.  OData is very Silverlight-friendly, and that means it’s WP7-friendly too.  Phone plus service-oriented data is a one-two punch.  A phone platform without data would have been a phone with no signal.
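    As a rough illustration of what consuming OData looks like in practice (a sketch from memory: the Titles entity set and ReleaseYear property are illustrative, while $filter and $top are standard OData query options), any feed can be filtered and paged with nothing more than a URL:

        http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear ge 2009&$top=10

    Any client that can issue an HTTP GET, from a phone to a spreadsheet, gets back a structured feed it can parse.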

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    A User-centered Research and Design Process

    The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high-fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

    What Is a CIF Usability Test?

    CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

    Why Do We Perform CIF Testing?

    The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

    CIF Testing for Fusion Applications

    Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole. Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

    Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs

    Fusion Supplier Portal CIF Test

    The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

    • Manage payments – view payments
    • Manage invoices – view invoice status and create invoices
    • Manage account information – create new contact, review bank account information
    • Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
    • Manage purchase orders (PO) – view history of PO, request change to PO, find orders
    • Manage negotiations – respond to request for a quote, check the status of a negotiation response

    These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

    Figure 2. Fusion Supplier Portal

    Next Studies

    We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050: emissions must be cut to 80-95% below 1990 levels. But what can the retail industry do to keep up with this? There are three key aspects of the retail industry where carbon emissions can be cut: manufacturing, transport and IT.

    Manufacturing

    Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution. Many retailers of all sizes will use third-party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently: better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third-party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge. The John Lewis Partnership's website provides a large amount of information on its responsibilities towards the environment.

    Transport

    Similarly to manufacturing, tightening up logistical planning for stock distribution will make savings on carbon emissions from haulage. More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution. UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometres travelled by the fleet. Morrisons measures route-planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%. See Morrisons' Corporate Responsibility report for more information.

    IT

    IT infrastructure is often initially overlooked by businesses when considering environmental efficiency. Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat. Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacentres to cloud-based services. Using centralised datacentres reduces the power usage at smaller offices, while using cloud-based services means the datacentres can be based in a more environmentally friendly location. For example, Facebook is opening a massive datacentre in Sweden, close to the Arctic Circle, to reduce the need for artificial cooling methods. In addition, moving to a cloud-based solution makes IT services more easily scalable, reducing redundant IT systems that would still use energy. In store, the UK's Carbon Trust reports that on average, lighting accounts for 25% of a retailer's electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units. On a smaller scale, retailers can invest in greener technologies in store and in their offices.
    The report concludes that the widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766. I'd be interested to hear your thoughts on the report.

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article's comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin interface, we were able to work out that a few days previously the article had been subject to a spam attack, and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively.

    However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn't keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn't alter the database, as it is a bought-in package, and we had neither the resources nor the inclination to build in dedicated application monitoring. Possibly, we could have investigated a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor's single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server wait stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics: the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage.

    You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days). It's nascent, and we're still working on it, but it's already given us more confidence that we'll quickly spot trends, bugs, or bursts of 'abnormal' activity. If there is a sudden rise in comments, we get an alert, and if it's due to a spam attack, we can moderate or ban the perpetrator very quickly. We've often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we've rather appreciated being able to make best use of what's there anyway for a slightly different purpose. Is this a good or common practice? What do you think?

    Cheers,
    Tony.
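    P.S. For the curious: a custom metric in SQL Monitor is simply a query that returns a single integer for the tool to poll and plot, so each of ours boils down to something like the following (the table names here are invented for illustration; our actual schema is Community Server's):

        -- Total comments ever made on articles (one integer per poll)
        SELECT COUNT(*) FROM dbo.ArticleComment;

        -- Total votes ever cast
        SELECT COUNT(*) FROM dbo.ArticleVote;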

    Read the article

  • OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

    - by JuergenKress
    This September, during Oracle OpenWorld 2013, the weather in San Francisco was exceptionally sunny, as you can see from the photo. The dramatic final few days of the America's Cup sailing competition, held every day in the bay, coincided with the conference and meant that there was almost a holiday feel to the whole event. Here's my annual round-up of what I think was most interesting at OpenWorld 2013 for Fusion Middleware architects and administrators; I hope you find it useful, and if you think I've missed something please add a comment!

    WebLogic and Cloud Application Foundation (CAF)

    The big WebLogic release of the year has already happened a few months ago with 12.1.2, so I won't duplicate that here. Will Lyons discussed the WebLogic and Coherence roadmap, which essentially is that 12.1.3 will probably be released to coincide with SOA 12c next year and that 12.1.4, the next feature-rich WebLogic release, is more likely to be in 2015. This latter release will probably include full Java EE 7 support, have enhancements for multi-tenancy, and add further auto-scaling features to support increased density (i.e. more WebLogic usage for the same amount of hardware). There's a new Oracle Virtual Assembly Builder (OVAB) out already, and an Oracle Traffic Director (OTD) 12c release is round the corner too. Also of relevance to administrators is that Oracle has increased the support lifetime for Fusion Middleware 11g (e.g. WebLogic 10.3.6), so that Premier Support will now run to the end of 2018 and Extended Support until 2021; this should remove any Oracle-driven pressure to upgrade, at least.

    Java Mission Control

    Java Mission Control (JMC) is the HotSpot Java 7 version of JRockit 6 Mission Control, a very nice performance monitoring tool from Oracle's BEA acquisition. Flight Recorder is a feature built into the JVM which records diagnostic events into, typically, a circular buffer, which can then be used for historical analysis, particularly in the case of a JVM crash or hang. It's been available separately for WebLogic only for perhaps a year now but, more significantly, it now includes JVM events and was bundled with JDK7 Update 40 a few weeks ago. I attended a couple of interesting JavaOne sessions on JMC/Flight Recorder and have to say it's looking really good; it has all the previous JRMC features except for the memory leak detector, plus some enhancements around operative sets and ECID filtering, I think. Marcus also showed how you could add your own events into Flight Recorder by building your own event class; they are then available for graphing alongside all the other events in JMC. This currently uses an unsupported/undocumented API, but it's also the same one that WebLogic uses for WLDF events, so I imagine it is stable. I'm not sure whether this would be useful to custom applications, as opposed to infrastructure services or ISV packaged applications, but it was a very nice demonstration. I've been testing JMC/FR enabling on several environments recently and my confidence is growing; it feels robust and I think it could very soon be part of my standard builds. Read the full article here.

    WebLogic Partner Community

    For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the spring equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are stretched further and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. The CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT.

Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to stay on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systemic stress brought on by natural disasters. It was pointed out that the public often judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all - especially the end customer - would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage.

One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources, and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video.

John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three:

1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position- or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control.

2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have a unified close calendar that looks like a regular calendar and gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks, as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an Accountant, we used ad hoc tools, such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe Accountants won't have to revolve their lives around the month-end close anymore with a tool like this. Those periodic fire drills can become predictable, well-managed processes.

3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel, and XBRL instance documents can then be produced with embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time.

In terms of other enhancements, there is ERP Integrator, which provides integration and drill-downs from Hyperion products to source systems such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning, making it much easier for end users to submit plans and budgets via the web or via MS Excel. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process.
Here's the press release from April 7, 2010. Here's the pre-recorded web cast where you can see the demos - just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and may be implemented when a company is having financial problems or to prevent them. ROI is defined as the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today's market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes it in fact saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems.

In my personal opinion, these drivers would not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether it is to increase annual donations or invest in the lives of others.

Success or failure of a project can be determined by one or more of these drivers, based on the scope of a project and the company's priorities associated with each of the drivers. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, the project might still be considered a success due to how close it came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort it because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that software development process stages include:

- Requirements Analysis and Definition
- System Design
- Program Design
- Program Implementation
- Unit Testing
- Integration Testing
- System Delivery
- Maintenance

Each evaluation at every stage should consider all the business drivers included in the scope of the project and gauge how close each is expected to come to meeting expectations. In addition, minimum requirements of acceptance should be included in the scope of the project and should be re-evaluated as the project progresses, to ensure that it still makes good economic sense to continue. If the project falls below these benchmarks then it should be put on hold until it does make sense, or aborted because it does not meet the business driver requirements.

References

Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program
Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html
PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html
Pfleeger, S. & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall
Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy's Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it's fair to say that in its first year it has proved to be a very useful resource, however Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted:

"Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago" http://twitter.com/#!/Kendra_Little/status/15538234184441856

This highlights an inherent problem with Twitter's search capability - it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive tweets that contain the #sqlhelp hashtag.

The Archivist
http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter's search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for "sqlhelp" which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: there is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!!

Datasift
The Archivist has its uses, though for our purposes it has a couple of downsides. For starters you cannot search through an archive (which is what Kendra was after), nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provides them through an object called a Datasift stream. That sounds very similar to normal Twitter search, though it has one distinct advantage that other Twitter search tools do not - Datasift has access to Twitter's Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features:

- It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool)
- It has a query language (called Filtered Stream Definition Language - FSDL for short)
- A Datasift stream can consume (and filter) other Datasift streams
- Datasift can (and does) consume services other than Twitter

If I refer to Datasift as "ETL for tweets" then you may get some sort of idea what it is all about. Just as I did with The Archivist I have set up a publicly available Datasift stream for "sqlhelp" at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data:

twitter.text contains "sqlhelp"

Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on.
The real value though is the ability to consume the output of a stream via Datasift's RESTful API. Observe:

http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX

(Note that an api_key is required during the alpha period so, given that I'm not supplying my api_key, this URI will not work for you.) Just to prove that a Datasift stream can indeed consume data from another stream, I have set up a second stream that further filters the first one for tweets containing "SSIS". That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query:

rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis")

When Datasift moves beyond alpha I'll re-assess how useful this is going to be and post a follow-up blog. @Jamiet
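P.S. As a rough sketch of what calling the stream.xml endpoint above might look like from .NET: the stream_identifier and username are the ones shown in the URI, the api_key value is a placeholder you would substitute with your own, and WebClient is just one way to make the request.

    using System;
    using System.Net;

    class DatasiftStreamReader
    {
        static void Main()
        {
            // Endpoint format taken from the URI above; YOUR_API_KEY is a placeholder.
            string uri = "http://api.datasift.net/stream.xml" +
                         "?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d" +
                         "&username=jamiet" +
                         "&api_key=YOUR_API_KEY";

            using (var client = new WebClient())
            {
                // Downloads the stream output as raw XML, ready for offline analysis.
                string xml = client.DownloadString(uri);
                Console.WriteLine(xml);
            }
        }
    }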

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: you can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes.

I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is: a measure of activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful.

Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server.

One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week. And now we begin to see where we might have an issue: memory on this system is getting flushed every 1/2 hour or so. I'm going to check another metric, scans. Whoa!

I'm going back to the system real quick to look at some disk information again for rg-sql02: the average disk queue length on the server, and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results. OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
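As a quick cross-check outside of SQL Monitor, PLE can also be read straight from the DMVs. Here is a minimal sketch of one way to do that from .NET; the connection string is a placeholder, and this snippet is an illustration rather than part of the original post:

    using System;
    using System.Data.SqlClient;

    class PleCheck
    {
        static void Main()
        {
            // Placeholder connection string - point it at the server under investigation.
            string connStr = "Server=rg-sql02;Integrated Security=true";

            // 'Page life expectancy' lives in sys.dm_os_performance_counters;
            // the LIKE handles both default and named instances.
            string sql = @"SELECT cntr_value
                           FROM sys.dm_os_performance_counters
                           WHERE counter_name = 'Page life expectancy'
                             AND object_name LIKE '%Buffer Manager%'";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                long pleSeconds = (long)cmd.ExecuteScalar();
                Console.WriteLine("Page Life Expectancy: {0} seconds", pleSeconds);
            }
        }
    }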
What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down that it is) and then start seeing what's up with these queries.

Part 1 of the Series | Part 2 of the Series

    Read the article

  • Q&A: Oracle's Paul Needham on How to Defend Against Insider Attacks

    - by Troy Kitch
    Source: Database Insider Newsletter: The threat from insider attacks continues to grow. In fact, just since January 1, 2014, insider breaches have been reported by a major consumer bank, a major healthcare organization, and a range of state and local agencies, according to the Privacy Rights Clearinghouse. We asked Paul Needham, Oracle senior director, product management, to shed light on the nature of these pernicious risks - and how organizations can best defend themselves against the threat from insiders.

Q. First, can you please define the term "insider" in this context?
A. According to the CERT Insider Threat Center, a malicious insider is a current or former employee, contractor, or business partner who "has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."

Q. What has changed with regard to insider risks?
A. We are actually seeing the risk of privileged insiders growing. In the latest Independent Oracle Users Group Data Security Survey, the number of organizations that had not taken steps to prevent privileged user access to sensitive information had grown from 37 percent to 42 percent. Additionally, 63 percent of respondents say that insider attacks represent a medium-to-high risk - higher than any other category except human error (by an insider, I might add).

Q. What are the dangers of this type of risk?
A. Insiders tend to have special insight into, and access to, the kinds of data that are especially sensitive. Breaches can result in long-term legal issues and financial penalties. They can also damage an organization's brand in a way that directly impacts its bottom line. Finally, there is the potential loss of intellectual property, which can have serious long-term consequences because of the loss of market advantage.

Q. How can organizations protect themselves against abuse of privileged access?
A. Every organization has privileged users, and that will always be the case. The questions are how much access those users should have to application data stored in the database, and how that default access can be controlled. Oracle Database Vault was designed specifically for this purpose and helps protect application data against unauthorized access. Oracle Database Vault can be used to block default privileged user access from inside the database, as well as increase security controls on the application itself. Attacks can and do come from inside the organization, and attempts to exploit a privileged account are just as likely to come from outside. Using Oracle Database Vault protection, boundaries can be placed around database schemas, objects, and roles, preventing privileged account access from being exploited by hackers and insiders. A new Oracle Database Vault capability called privilege analysis identifies privileges and roles used at runtime, which can then be audited or revoked by the security administrators to reduce the attack surface and increase the security of applications overall.

For a more comprehensive look at controlling data access and restricting privileged data in Oracle Database, download Needham's new e-book, Securing Oracle Database 12c: A Technical Primer.

    Read the article

  • Willy Rotstein on Supply Chain Planning

    - by sarah.taylor(at)oracle.com
    Each time a merchandiser, buyer or planner in Retail makes a business decision around assortment, inventory, pricing and promotions there is an opportunity to improve both Profitability and Customer Service. Improving decision making, however, has always been a tricky business for retailers. I have worked in this space for more than 15 years. I began my career as an academic, at Imperial College London, and then broadened this interest with retailers, aiming to optimize their merchandising and supply chain decisions.

Planning the business and optimizing profit is a complex process. The complexity arises from the variety of people involved, the large number of decisions to take across all business processes, the uncertainty intrinsic to the retail environment, as well as the volume of data available for analysis. Things are not getting any easier either. The advent of multi-channel, social media and mobile is taking these complexities to a new level and presenting additional opportunities for those willing to exploit them. I guess it is due to the complexities of the decision-making process that, over the last couple of years working with Oracle Retail, I have witnessed a clear trend around the deployment of planning systems. Retailers are aiming to simplify their decision-making processes. They want to use one joined-up planning platform across the business and enhance it with "actionable" data mining and optimization techniques.

At Oracle Retail, we have a vibrant community of international retailers who regularly come together to discuss the big issues in retail planning. It is a combination of fashion, grocery and speciality retailers, all sharing their best practice vision for planning and optimizing merchandise decisions. As part of the Retail Exchange program, at the recent National Retail Federation event in New York, I jointly hosted a Planning dinner with Peter Fitzgerald from Google UK, Retail Division. Those retailers from our international planning community who were in New York for the annual NRF event were able to attend. The group comprised some of Europe's great international retail brands. All sectors were represented by organisations like Mango, LVMH, Ahold, Morrisons, Shop Direct and River Island. They confirmed the current importance of engaging with planning and optimization issues. In particular the impact of the internet was a key topic.

We had a great debate about new retail initiatives. Peter highlighted how mobility is changing retail - in particular with the new "local availability search" initiative. We also had an exciting discussion around the opportunities to improve merchandising using the new data that is becoming available from search, social media and ecommerce sites. It will be our focus to continue to help retailers translate this data into better results while keeping their business operations simple. New developments in "actionable" analytics and computing capacity make this a very exciting area today. Watch this space for my contributions on these topics, which will be made available through this blog.

Oracle Retail has a strong Planning community. If you are a category manager, a planner, a buyer, a merchandiser, a retail supplier or any retail executive with a keen interest in planning then you would be very welcome to join Oracle Retail's Planning Community.
As part of our community you will be able to join our in-person and virtual events, and download topical white papers and best practice information specifically tailored to your area of interest. If anyone would like to register their interest in joining our community of retailers discussing planning then please contact me at [email protected]

Willy Rotstein, Oracle Retail

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you've noticed my blogging activity has reduced in frequency and technical content lately, it's primarily due to all of the conferences I've been attending, speaking at, or planning in the past few months. This past Friday myself and six other dedicated individuals put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work. For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft's MIX conference and end the day with a private showing of the then just-released Star Trek movie. This year's conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600. Even more amazing was the fact that we had 592 people show up on the day of the event, for the lowest drop-off percentage of any conference I've been to before.

Nerd Dinner and Swag Bags
The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance. Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits. After the nerd dinner, 20+ volunteers headed to the conference location and helped us stuff swag bags. This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts. A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night. My sleep schedule also thanks you.

Morning of Stir Trek
After getting a decent amount of sleep I arrived at Marcus Crosswoods theater at 6am to begin setting up for the day. Myself and Jody Morgan were in charge of registration, so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking in attendees. Despite having 600+ people, registration went fairly smoothly and got the day off to a great start. I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop. For any of you that know me, you'll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.

Conference Starts
Once registration was completed the day kicked off with Molly Holzschlag keynoting. Unfortunately Molly suffered from an ear infection and wasn't able to fly, so she gave a virtual keynote and a session later in the day. I was working behind the scenes on various tasks, so I was only able to drop in very briefly on the keynote and the rest of the morning sessions. Throughout the day I tried to grab at least 1 or 2 pics of each presenter; see my album below for the full set of pics.

For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses). Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job on the day of the event. I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch. We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year, and I'm glad everything worked out so we could use them again.

During the afternoon I was able to attend a few sessions and hear some great content from various speakers.
It was also nice to just sit down and get off my feet for a bit. After the last sessions the day concluded with a raffle. There were a few logistical and technical issues that hampered our ability to conduct the raffle smoothly. To those of you that agree the raffle wasn't the smoothest experience, I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year. We are also accepting feedback (both positive and negative) at the following link: click here. If you don't wish to use the Joind In site you can also email me directly and I'll be sure to pass along the feedback.

Iron Man 2 Movie
Last but not least, what Stir Trek event would be complete without the feature movie? This year's movie was Iron Man 2. The theater had some really cool props and promotions (see pic below) for the movie. I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel's plans for future movies to understand some of the plot elements that come up. Also make sure you stay through to the end of the movie credits to see a sneak peek of something special; that's all I'll say.

Conclusion
Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great. Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz. I am grateful for everything I learned while helping plan this event and look forward to being involved again next year. For those interested, we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012. These are tentative based on release dates that could shift as we get closer, but for now look solid.

Photos
Pics on Facebook (includes tagging): Stir Trek: Iron Man Edition photos on Facebook
Pics on Live site (higher res): View Full Album

-Frog Out

    Read the article

  • Thinktecture.IdentityModel: Comparing Strings without leaking Timing Information

    - by Your DisplayName here!
    Paul Hill commented on a recent post where I was comparing HMACSHA256 signatures. In a nutshell his complaint was that I am leaking timing information while doing so - or in other words, my code returned faster with wrong (or partially wrong) signatures than with the correct signature. This can potentially be used for timing attacks like this one. I think he's got a point here, especially in the era of cloud computing where you can potentially run attack code on the same physical machine as your target to do high-resolution timing analysis (see here for an example). It turns out that it is not that easy to write a time-constant string comparer due to all sorts of (unexpected) clever optimization mechanisms in the CLR. With the help and feedback of Paul and Shawn I came up with this:

- Structure the code in a way that the CLR will not try to optimize it
- In addition, turn off optimization (just in case a future version will come up with new optimization methods)
- Add a random sleep when the comparison fails (using Shawn's and Stephen's nice Random wrapper for RNGCryptoServiceProvider)

You can find the full code in the Thinktecture.IdentityModel download.

    [MethodImpl(MethodImplOptions.NoOptimization)]
    public static bool IsEqual(string s1, string s2)
    {
        if (s1 == null && s2 == null)
        {
            return true;
        }

        if (s1 == null || s2 == null)
        {
            return false;
        }

        if (s1.Length != s2.Length)
        {
            return false;
        }

        var s1chars = s1.ToCharArray();
        var s2chars = s2.ToCharArray();

        int hits = 0;
        for (int i = 0; i < s1.Length; i++)
        {
            if (s1chars[i].Equals(s2chars[i]))
            {
                hits += 2;
            }
            else
            {
                hits += 1;
            }
        }

        bool same = (hits == s1.Length * 2);

        if (!same)
        {
            var rnd = new CryptoRandom();
            Thread.Sleep(rnd.Next(0, 10));
        }

        return same;
    }
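For illustration, here is a minimal usage sketch. It assumes the IsEqual method above has been placed in a static class called SafeCompare (a stand-in name; the real class lives in the Thinktecture.IdentityModel download, along with CryptoRandom), and the signing key and payloads are placeholders:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class Program
    {
        static void Main()
        {
            // Placeholder signing key and payloads - for illustration only.
            var key = Encoding.UTF8.GetBytes("my-signing-key");
            string expected = Sign("some payload", key);
            string received = Sign("some payload", key);
            string tampered = Sign("other payload", key);

            // Compare Base64-encoded signatures without leaking timing information.
            Console.WriteLine(SafeCompare.IsEqual(expected, received)); // True

            // A mismatch still takes (roughly) constant time, plus a short random sleep.
            Console.WriteLine(SafeCompare.IsEqual(expected, tampered)); // False
        }

        static string Sign(string data, byte[] key)
        {
            using (var hmac = new HMACSHA256(key))
            {
                return Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.UTF8.GetBytes(data)));
            }
        }
    }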

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip
Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the Tech Lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price/performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. We just announced our next-generation processor, the M7. This has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). With 20 nm technology this is our most advanced processor, and it has more cores than anything else in the industry today. Since the Sun acquisition Oracle has released 5 processors in 4 years, and the M7 will be the 6th.

The S4 Core
The M7 is built on the foundation of the S4 core, the next generation of core technology. Like its predecessor, the S4 has 8 dynamic threads, and it increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250-nanosecond granularity. Each core also includes Software in Silicon features for application acceleration, features to improve Application Data Integrity with almost no performance loss, and support for using part of the virtual address to store meta-data. User-level synchronization instructions are also part of the S4 core. Each core has 16 KB Instruction and 16 KB Data L1 caches.

The Core Clusters
The cores on the M7 chip are organized in 4-core clusters that share L2 cache. All four cores in the cluster share 256 KB of 4-way set-associative L2 instruction cache, with over 1/2 TB/s of throughput. Two cores share 256 KB of 8-way set-associative L2 data cache, with over 1/2 TB/s of throughput. With this innovative core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

The Chip
Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks; each chunk is an 8-way set-associative cache. The aggregate bandwidth for the L3 cache on the chip is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly, and up to 32 chips can be connected in an SMP configuration. A potential system with 32 chips would have 1,024 cores, 8,192 threads and 64 TB of RAM.

Software in Silicon
The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means that applications can work directly on compressed data in memory, increasing data access rates. The VA Masking feature allows part of the virtual address to be used to store meta-data.

Realtime Application Data Integrity
The Realtime Application Data Integrity feature helps applications safeguard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines.
When a pointer accesses memory, the hardware checks to make sure the two versions match; a SEGV signal is raised when there is a mismatch. This feature can be used by the database, by applications and by the OS.

M7 Database In-Memory Query Accelerator
The M7 chip also includes in-silicon query engines that accelerate tasks operating on in-memory columnar vectors. The Oracle Database In-Memory option stores data in column format, and the M7 query engine can speed up in-memory format conversion, value and range comparisons, and set-membership lookups. The engine can work on compressed data, which means not only are we accelerating query performance but also increasing the effective memory bandwidth for queries.

SPARC Accelerated Program
At the Hot Chips conference we also introduced the SPARC Accelerated Program to provide our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.

    Read the article

  • Unclaimed user group prizes, Live meeting on Monday, Next week's UG, SQLRelay and more prizes

    - by Testas
    Hi Everyone

Firstly I want to let you know that I finally found the LINQ book prize winners, and the list of people at the bottom of this email are owed a LINQ book. These will be given out at next week's UG meeting.

Live meeting with Carolyn Chau, Program Manager at Microsoft, on Monday!
It is very rare that we get the opportunity to have a live meeting with a Program Manager in Redmond. Carolyn Chau will be presenting PowerView next Monday at 8pm. Live meeting details can be found on http://sqlserverfaq.com/events/388/Live-Meeting-on-SQL-Server-2012-PowerView-with-Carolyn-Chau-Principal-Program-Manager-in-the-Reporting-Services-in-association-with-SQLPASS-SQLServerFAQ-and-SQLBits.aspx

Next week's UG!!
We welcome Mark Broadbent to Manchester next week, where he will be presenting his session on SQL Server 2012 on Windows Core. We will also hand out the unclaimed prizes. Register at http://sqlserverfaq.com/events/369/Thursday-night-meeting-at-BSS-with-Chris-TestaONeill-and-Mark-Broadbent.aspx

Chris Webb is in Manchester!!!
Chris Webb will be speaking at the Manchester SQL Server UG on 4th July. He will also be running his Real World Cube Design and Performance Tuning with Analysis Services course between the 3rd and 5th July. If you want to attend then you can sign up at the link below:
http://www.technitrain.com/coursedetail.php?c=13&trackingcode=FAQ

SQLRelay, a special prize, and Jamie Thomson comes to Manchester!!!!
SQLRelay takes place in Manchester on the 22nd. We have a special guest: after years of asking, Jamie Thomson is coming to Manchester. The SSIS Junkie will be gracing us with his presence with a talk on SSIS 2012. We also have a prize. Know a friend or colleague who would benefit from SQLRelay? Get them to register at www.sqlserverfaq.com and then register for the event: http://sqlserverfaq.com/events/373/ALL-DAY-TUESDAY-EVENT-12-hours-of-SQL-Server-2012-at-the-SQLRelay-meeting-at-the-COOP-Manchester.aspx
Then send an email to [email protected] with the subject of SQLFriend and the name of your friend. If you are both at the SQLRelay event on the day and your names are pulled out of the hat, you will win a PASS 2011 DVD and your friend will win the "Best of PASS DVD 2011" worth $1000, courtesy of SQLPASS. The draw will take place between 4.30pm and 5pm on the day.

SQLBits feedback!!!!!
Attended SQLBits? We really need to know your opinion, so please fill out the survey for the days you attended.
If you attended any of the days at SQLBits please fill out the following survey: http://www.sqlbits.com/SQLBitsX
If you attended the Thursday Training day then please fill out the following survey: http://www.sqlbits.com/SQLBitsXThursday
If you attended the Friday Deep Dives day then please fill out the following survey: http://www.sqlbits.com/SQLBitsXFriday
If you attended the Saturday Community day then please fill out the following survey: http://www.sqlbits.com/SQLBitsXSaturday

Thanks

Chris and Martin

LINQ book winners:
Andrew Birds, Chris Kennedy, Dave Carpenter, David Forrester, Ian Ringrose, James Cullen, James Simpson, Kevan Riley, Kirsty Hunter, Martin Bell, Martin Croft, Michael Docherty, Naga Anand Ram Mangipudi, Neal Atkinson, Nick Colebourn, Pavel Nefyodov, Ralph Baines, Rick Hibbert, saad saleh, Simon Enion, Stan Venn, Steve Powell, Stuart Quinn

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >