Search Results

Search found 1622 results on 65 pages for 'aman deep gautam'.


  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done, and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it to get a better view), which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below. Anyone want to help get this plugin further? What's needed: Much deeper Dart editing support: right now only very basic syntax coloring is provided, via an ANTLR lexer integrated into the NetBeans syntax coloring infrastructure. Parsing, error checking, code completion, and some small code templates are needed. A new panel is needed in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (i.e., similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application is immediately deployable. Whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case. Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from that, i.e., from Dart projects outside the IDE, is also needed. I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • What is the start point in game development? Where to start?

    - by Dragon
    I understand I'm not unique with this question; there are a lot of questions like this one. But I hope you'll take a minute and maybe give me a piece of advice. I have an idea to develop games, but I don't know where the starting point in game development is. The learning curve isn't as straightforward as when learning a programming language, but I want to give it a try. I have some experience with OOP and programming in general. I know (not too deeply) C# and Java. I searched for info on where to start and read a lot of blogs, forums and so on. At one point I decided "stop wandering around, just start developing a game" and I started. At the moment I have a console version of a very simple game (RPS - rock-paper-scissors) developed in C#. It has different modes: "player vs cpu" and "player vs player". Some time later I looked at the code and decided that it should be refactored or even redeveloped from scratch. And I thought at the time, "GUI is what I need. I can add logic later." And now I'm here. I've already decided to make RPS with a GUI, then make it multiplayer and so on. I'm not thinking about 3D now, 2D is enough. It doesn't matter which language to use, C# or Java; I found frameworks for both - XNA (C#) and Slick (Java). Both are good for 2D game development. But I know nothing about sprites, how to bind objects to the screen and so on. You can say "you don't need that for such a simple game as RPS", but RPS is just the beginning; I have ideas like a "Tower Defense" game... you know, everybody has ideas and wishes... and this knowledge is useful and in some ways obligatory. So what is the starting point for achieving my plans, ideas and wishes? Where do I start? Is it possible to make the game development learning curve a little less steep? Or are there approaches that amateurs and game development beginners have been using for years? Thank you in advance for your answers and advice. P.S. Sorry that this post turned out to be an essay, but I tried to express my wish to start acting. I hope I managed to do it.

    Read the article

  • Game timings and formats

    - by topright
    There are more or less standardized TV-show/movie formats and recommended timings: 1. By the early 1960s, television companies commonly presented half-hour long "comedy" series, or one hour long "dramas." Half-hour series were mostly restricted to situation comedy or family comedy, and were usually aired with either a live or artificial laugh track. One hour dramas included genre series such as police and detective series, westerns, science fiction, and, later, serialized prime time soap operas. Programs today still overwhelmingly conform to these half-hour and one hour guidelines. Source 2. In the United States, most medical dramas are one hour long. Source 3. Traditionally serials were broadcast as fifteen minute installments each weekday in daytime slots. In 1956 As the World Turns debuted as the first half-hour soap opera. All soap operas broadcast half-hour episodes by the end of the 1960s. With increased popularity in the 1970s most soap operas expanded to an hour (Another World even expanded to ninety minutes for a short time). More than half of the serials had expanded to one hour episodes by 1980. As of 2010, six of the seven US serials air one hour episodes each weekday. Source Interesting. Are there any timing standards in game development? Well, 5-20 minute casual games, of course. There is even a "5-minutes-game" site. And a 1-hour-gamer site. Are there 1-week, 1-year, 1-eternity game formats? Chess and Go are deep games that you can study all your life, but they are played in an hour or over several days (pro games). Addictive long-term online role-playing games (without a win condition) are played for months and, possibly, years. Replayability is an important factor to consider. It's good when a game design document contains a line: "This game is designed to be solved in X hours". How can that be measured before there is any prototype or demo? When you know your game format, you know your audience (and vice versa). It is a practical question. Is there psychological research on the dynamics of gaming interest and involvement? And is there a correlation between game format and game genre?

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-22

    - by Bob Rhubart
    Guide to integration architecture | Stephanie Mann "The landscape of integration architecture is shifting as service-oriented and cloud-based architecture take the fore," says Stephanie Mann. "To ensure success, enterprise architects and developers are turning to lighter-weight infrastructure to support more complex integration projects." FY13 Oracle PartnerNetwork Kickoff - Tues June 26, 2012 Join us for a one-hour live online event hosted by the Oracle PartnerNetwork team as we kick off FY13. Other dates/times for EMEA/LAD/JAPAN/APAC. Click the link for details. Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6? | Ricardo Ferreira Okay, you would expect an Oracle guy to make this argument. But Ferreira takes a very deep, very detailed technical dive into the issue. So hear the man out, will ya? Hibernate4 and Coherence | Rene van Wijk According to Oracle ACE Rene van Wijk, "there are two ways to integrate Hibernate and Coherence." In this post he illustrates one of them. Simple Made Easy | Rich Hickey Rich Hickey discusses simplicity, why it is important, how to achieve it in design and how to recognize its absence in the tools, language constructs and libraries in this presentation from QCon London 2012. Starting a cluster | Mark Nelson Fusion Middleware A-Team blogger Mark Nelson looks at Oracle SOA Suite, Oracle BPM, and Oracle Coherence, three products that are "commonly clustered, and which have somewhat different requirements." Why building SaaS well means giving up your servers | GigaOM The biggest benefit to PaaS, reports GigaOM's Derrick Harris, "might be a better product because the company is able to focus on building the app rather than managing servers." Personas - what, why & how | Mascha van Oosterhout "To be able to create a successful, user-friendly website or application," says Mascha van Oosterhout, "every decision you take, whether you are part of the marketing team, the design team or the development team, should be based on what you know about the user." Thought for the Day "Machines take me by surprise with great frequency." — Alan Turing (June 23, 1912 - June 7, 1954) Source: Brainy Quote

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 13-19, 2013

    - by OTN ArchBeat
    The list below represents the Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 13-19, 2013, as determined by the clicks, likes, and other activities among the 4,425 fans of that page. Going Mobile with ADF – Implementing Data Caching and Syncing for Working Offline | Steven Davelaar Oracle Fusion Middleware A-Team solution architect Steven Davelaar takes you on a deep dive into how to use ADF Mobile to create an on-device application that supports working in offline mode. OOW 2013 Summary for Fusion Middleware Architects & Administrators | Simon Haslam Oracle ACE Director Simon Haslam shares a very thorough and detailed summary of the most interesting news coming out of Oracle OpenWorld 2013 for Fusion Middleware architects and administrators. Coherence Special Interest Group (SIG) – Sydney, October 24th If you're in the neighborhood... The Coherence Special Interest Group (SIG) in Sydney, Australia will be held on Thursday October 24th at the Park Hyatt Sydney, in The Rocks, between 9am and 5pm. The event will include presentations from customers, partners, and Coherence engineering team members and product managers. Click the link for more info. Free eBook: Oracle Multitenant for Dummies Oracle Multitenant for Dummies is a new e-book that provides a clear overview of the Oracle Database 12c multitenant architecture. It's free (registration required). Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 1: Introduction | Michael Rainey Michael Rainey launches a series of posts that guide you through "the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1." Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema Another detailed technical post from the always prolific Oracle ACE Director Lucas Jellema. Webgate Reverse Proxy Farm | Vinay Kalra Vinay Kalra's blog post discusses architecture and recommendations for centralizing Webgate deployments onto a server farm. Free Poster: Adaptive Case Management in Practice Thanks to Masons of SOA member Danilo Schmiedel for providing a hi-res copy of the Adaptive Case Management poster, now available for download from the OTN ArchBeat Blog. Should your team use a framework? | Sten Vesterli "Some developers have an aversion to frameworks, feeling that it will be faster to just write everything themselves," observes Oracle ACE Director Sten Vesterli. He explains why that's a very bad idea in this short post. Integrating Custom BPM Worklist into WebCenter Portal | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis shares a sample application configured to run a custom BPM Worklist, along with steps describing how to configure and access it from the WebCenter Portal. Thought for the Day "Morning comes whether you set the alarm or not." — Ursula K. Le Guin (Born October 21, 1929) Source: brainyquote.com

    Read the article

  • The Social Business Thought Leaders

    - by kellsey.ruppel
    Enterprise Gamification, Big Data, Social Support, Total Customer Experience, Pull Organizations, Social Business. Are these purely the latest buzzwords to enter the market, or significant trends that companies should keep an eye on? Oracle recently sponsored and presented at the 5th Social Business Forum, one of the largest European events on the use of social media as a business tool and accelerator. Through the participation of dozens of practitioners, experts and customer success stories, the conference demonstrated how a perfect storm of technology, management and cultural change is pushing peer-to-peer conversations deep into business processes. It is clear that Social Business is serving as a new propellant of agility, efficiency and reactivity. According to Deloitte and MIT, what we have learned to call Social Business is considered important over the next 3 years by 86% of managers (see Social Business: What Are Companies Really Doing?, MIT Sloan Management Review and Deloitte). McKinsey further estimates the value that can be unlocked in terms of knowledge-worker productivity, consumer insights, product co-creation, improved sales, marketing and customer service at up to $1,300B (see The social economy: Unlocking value and productivity through social technologies, McKinsey Global Institute). This impacts any industry, with the strongest effects seen in Media & Entertainment, Technology, Telcos and Education. For those not able to attend the Social Business Forum, and also for the many friends who joined us in Milan, we decided to keep the conversation going by extracting some golden nuggets from the perspective of five of the most well-known thought leaders in this space. Starting this week you will have the chance to view: John Hagel (Author of the Power of Pull and Co-Chairman Center for the Edge at Deloitte & Touche) Christian Finn (Senior Director, WebCenter Evangelist at Oracle) Steve Denning (Author of The Radical Management and Independent Management Consulting Professional) Esteban Kolsky (Principal & Founder at ThinkJar) Ray Wang (Principal Analyst & CEO at Constellation Research) Stay tuned to hear: How pull organizations are addressing some of the deepest challenges impacting the market. How to integrate social into existing infrastructure and processes. How to apply radical management to become more agile and profitable. About the importance of gamification as an engagement lever. The first interview with John Hagel will be published tomorrow. Don't miss it or the rest of the series!

    Read the article

  • Detect, Analyze, Act – Fast!

    - by Ajay Khanna
    In a fast-changing business environment, it becomes crucial to identify business opportunities and business issues as soon as possible. If issues are identified at the right time, business managers can address them before they escalate into serious problems, and can take advantage of new opportunities before the competition does. Moreover, they have to do this efficiently, at the right cost. Success depends on how responsive an organization is to emerging events and a changing environment. These events can be customer issues, competition moves, changes in regulations, or changes in company policies. In order to be responsive in such situations, organizations need to first identify and track these situations. They can do that via business activity monitoring (BAM) and complex event processing (CEP). A unified monitoring dashboard helps put together a comprehensive picture of the situation at hand and provides deep insight for taking proper actions. With CEP, businesses can connect all the relevant events, detect event patterns and take immediate action using a Business Process Management system. So to be responsive we need: Real-Time Visibility with Business Activity Monitoring You can use BAM technology to monitor progress, track performance, meet service-level agreements (SLAs), manage exceptions, and issue alerts to an employee or application when a process is not functioning properly—all in real time. A unified monitoring dashboard helps you maintain a complete picture of each situation so you can take action effectively. BAM works hand in hand with BPM software to discover the significant activities that drive business success. Real-Time Sense and Respond An event-driven BPM solution enables each step in a business process to be informed not only by the previous step, but also by any other step, data, and pattern of behavior deemed relevant to that step. This gives the company the ability to "sense and respond." You can describe interesting event patterns and event correlations and monitor the business in real time. Whenever a pre-defined pattern emerges you can take actions like raising alerts, sending notifications, or kicking off another business process. This synergy, made possible by integrating activity monitoring, event processing, and BPM, allows managers to keep a finger on the pulse of their business. Business managers can now respond to customers faster, respond to competition faster, reduce fraud and do more cross-selling. Read more about being responsive in the whitepaper "The Instantly Responsive Enterprise: Integrating BPM and Complex Event Processing" in the BPM Resource Kit.

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are developing a site that currently has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions: Is the rate of indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR will get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)? Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially. Besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking. Noteworthy qualities we have in place: page download speed is pretty good (250-500 ms); no errors (no 404 or 500 errors when getting spidered); we use Google Webmaster Tools and log in daily; friendly URLs are in place. I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video). Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page. Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages. We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide," which is what is advised.

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit; we call these "quantities" (big duh). Quantities and units are unique to a dimension. You can't attach kilogram to a length, for example. Math on quantities does automatic unit conversion to SI and the type is dimension safe (you can't assign a weight to a pressure, for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system. There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions... in other words, when using these units, converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio. I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit... BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement these special overrides for our UI elements and such. That of course is oftentimes forgotten, and worse... after a couple of months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing, and disabling the unit. I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the number of times someone has said, "Why is this being done this way, it makes no sense at all," and the team goes off the deep end trying to figure it out. What would you do? Currently I'm still attempting to convince them that even if they use terms interchangeably, we at the least can't do that within the product discussion. I don't have high hopes, though.
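
    For illustration only, here is a minimal sketch of the kind of dimension-safe quantity type described above, in C++ (the codebase described is C/C++); the names, the tag-type approach and the factor-based conversion are assumptions, not the actual product code:

        #include <iostream>

        // Sketch: a dimension-safe quantity type. Each dimension gets its own tag
        // type, so assigning a Mass to a Length fails to compile. The stored value
        // is always SI; constructors take the factor that converts the input unit to SI.
        template <typename DimensionTag>
        class Quantity {
        public:
            Quantity(double value, double factor_to_si) : si_value_(value * factor_to_si) {}

            double InSi() const { return si_value_; }
            double In(double factor_to_si) const { return si_value_ / factor_to_si; }

            Quantity& operator+=(const Quantity& rhs) {  // same dimension only
                si_value_ += rhs.si_value_;
                return *this;
            }

        private:
            double si_value_;
        };

        struct LengthDim {};
        struct MassDim {};
        typedef Quantity<LengthDim> Length;
        typedef Quantity<MassDim>   Mass;

        int main() {
            const double kFoot = 0.3048;   // metres per foot
            Length span(10.0, kFoot);      // 10 ft, stored internally as metres
            Length extra(1.0, 1.0);        // 1 m
            span += extra;
            std::cout << span.In(kFoot) << " ft\n";
            // Mass m(1.0, 1.0); span += m;  // would not compile: wrong dimension
            return 0;
        }

    The point of the sketch is that the dimension lives in the type, so mixing dimensions fails at compile time, while unit conversion stays a plain numeric factor; a ratio would just be a quantity with a dimensionless tag and a factor of 1.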

    Read the article

  • Simulating smooth movement along a line after calculating a collision containing a restitution of zero in 2D

    - by Casey
    [for tl;dr see after listing]

        //...Code to determine shapes types involved in collision here...
        //...Rectangle-Line collision detected.
        if(_rbTest->GetCollisionShape()->Intersects(*_ground->GetCollisionShape()))
        {
            //Convert incoming shape to a line.
            a2de::Line l(*dynamic_cast<a2de::Line*>(_ground->GetCollisionShape()));

            //Get line's normal.
            a2de::Vector2D normal_vector(l.GetSlope().GetY(), -l.GetSlope().GetX());
            a2de::Vector2D::Normalize(normal_vector);

            //Accumulate forces involved.
            a2de::Vector2D intermediate_forces;
            a2de::Vector2D normal_force = normal_vector * _rbTest->GetMass() * _world->GetGravityHandler()->GetGravityValue();
            intermediate_forces += normal_force;

            //Calculate final velocity: See [1]
            double Ma = _rbTest->GetMass();
            a2de::Vector2D Ua = _rbTest->GetVelocity();
            double Mb = _ground->GetMass();
            a2de::Vector2D Ub = _ground->GetVelocity();
            double mCr = Mb * _ground->GetRestitution();
            a2de::Vector2D collision_velocity( ((Ma * Ua) + (Mb * Ub) + ((mCr * Ub) - (mCr * Ua))) / (Ma + Mb));

            //Calculate reflection vector: See [2]
            a2de::Vector2D reflect_velocity( -collision_velocity + 2 * (a2de::Vector2D::DotProduct(collision_velocity, normal_vector)) * normal_vector );

            //Affect velocity to account for restitution of colliding bodies.
            reflect_velocity *= (_ground->GetRestitution() * _rbTest->GetRestitution());
            _rbTest->SetVelocity(reflect_velocity);

            //THE ULTIMATE ISSUE STEMS FROM THE FOLLOWING LINE:
            //Move object away from collision one pixel to prevent constant collision.
            _rbTest->SetPosition(_rbTest->GetPosition() + normal_vector);

            _rbTest->ApplyImpulse(intermediate_forces);
        }

    Sources: (1) Wikipedia: Coefficient of Restitution: Speeds after impact (2) Wikipedia: Specular Reflection: Direction of reflection

    First, I have a system in place to account for friction (that is, a coefficient of friction), but it is not used right now (in addition, it is zero, which should not affect the math anyway). I'll deal with that after I get this working. Anyway, when the restitution of either object involved in the collision is zero, the object stops as required, but if movement along the same direction as the line is attempted (again, irrespective of the friction value that isn't used), the object moves as if slogging through knee-deep snow. If I remove the line of code in question, so the object is not pushed away one pixel, the object barely moves at all. All because the object collides, is stopped, is pushed up, collides, is stopped... etc. OR collides, is stopped, collides, is stopped, etc.

    TL;DR: How do I account for a collision only ONCE for restitution purposes (BONUS: but CONTINUALLY for frictional purposes, to be implemented later)?
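
    Not an answer from the engine's authors, but a common way to attack the "resolved every frame" problem is to treat slow contacts as resting contacts: apply restitution only on a genuine impact, otherwise just cancel the velocity component into the surface, and correct penetration positionally instead of pushing the body out a fixed pixel. A rough sketch with hypothetical types (not the a2de API):

        // Sketch only: resting-contact handling, illustrative names and threshold.
        struct Vec2 { double x, y; };

        static double Dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

        void ResolveContact(Vec2& velocity, Vec2& position,
                            const Vec2& normal, double penetration,
                            double restitution) {
            const double kRestingThreshold = 0.5;  // units/s, tune to taste
            double vn = Dot(velocity, normal);     // negative = moving into the surface

            if (vn < -kRestingThreshold) {
                // Real impact: reflect the normal component, scaled by restitution.
                velocity.x -= (1.0 + restitution) * vn * normal.x;
                velocity.y -= (1.0 + restitution) * vn * normal.y;
            } else if (vn < 0.0) {
                // Resting contact: kill the normal component only, keep tangential motion
                // (so later on only friction resists sliding along the line).
                velocity.x -= vn * normal.x;
                velocity.y -= vn * normal.y;
            }

            // Positional correction proportional to the measured penetration, rather than
            // a fixed one-pixel push that re-triggers the collision next frame.
            position.x += normal.x * penetration;
            position.y += normal.y * penetration;
        }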

    Read the article

  • MSCC: Global Windows Azure Bootcamp - 29th March 2014

    The Mauritius Software Craftsmanship Community proudly presents the Global Windows Azure Bootcamp 2014 in Mauritius. Global Windows Azure Bootcamp 2014 in Mauritius - MSCC together with Microsoft, Ceridian and Emtel We are very happy and excited about our participation in this global event and would like to draw your attention to the official invitation letter below. Please sign up and RSVP on the official website of the MSCC. Participation is free! Call for action Please create more awareness of this event in Mauritius and use the hashtag #gwabmru as well as the shortened link: http://aka.ms/gwabmru And remember: Sharing is Caring! Official invitation letter to the GWAB 2014 in Mauritius With over 130 confirmed locations around the globe, the Global Windows Azure Bootcamp is going to be a truly memorable event - and now here's your chance to take part! In April of 2013 we held the first Global Windows Azure Bootcamp at more than 90 locations around the globe! This year we want to again offer a one-day deep-dive class to help thousands of people get up to speed on discovering Cloud Computing Applications for Windows Azure. In addition to this great learning opportunity, the hands-on labs will feature pooling a huge global compute farm to perform diabetes research! In Mauritius, the event will be organised by Microsoft Indian Ocean Islands & French Pacific in partnership with The Mauritius Software Craftsmanship Community (MSCC) and sponsored by Microsoft, Ceridian and Emtel. What do I need to bring? You will need to bring your own computer which can run Visual Studio 2012 or 2013 (i.e. Windows, OSX, Ubuntu with virtualization, etc.) and have it preloaded with the following: Visual Studio 2012 or 2013; The Windows Azure SDK - http://www.windowsazure.com/en-us/develop/net/ Optionally (or if you will not be doing just .NET labs), the following can also be installed: Node.js SDK - http://www.windowsazure.com/en-us/develop/nodejs/ JAVA SDK - http://www.windowsazure.com/en-us/develop/java/ Doing mobile? Android? iOS? Windows Phone or Windows 8? - http://www.windowsazure.com/en-us/develop/mobile/ PHP - http://www.windowsazure.com/en-us/develop/php/ More info here: http://www.windowsazure.com/en-us/documentation Important: Please do the installation upfront as there will be very little time to troubleshoot installations during the day.

    Read the article

  • 2D water with dynamic waves

    - by user1103457
    New Super Mario Bros has really cool 2D water that I'd like to learn how to create. Here's a video showing it. When something hits the water, it creates a wave. There are also constant "background" waves. You can get a good look at the constant waves just after 00:50 when the camera isn't moving. I assume the splashes in NSMB work as in the first part of this tutorial. But in NSMB the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, if you create a splash, it first creates a deep "hole" in the water at the origin of the splash. In New Super Mario Bros this hole is absent or much smaller. I am referring to the splashes that the player creates when jumping in and out of the water. How do they create the constant waves and the splashes? I am especially interested in the splashes, and how they work together with the constant waves. I am programming in XNA. I've tried this myself, but couldn't really get it all to work well together. Bonus questions: How do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I try to create water like this. EDIT: I assume the constant waves are created using a sine function. The splashes are probably created in a way similar to the tutorial. (But they are not the same, so I am still interested in how to make this kind of splash.) But I have a lot of trouble combining those things. I know I can use the sine function to set the height of a specific water column, but the splashes use the speed to determine the new height. I can't figure out how to combine those. Note that I am not asking exactly how the developers of New Super Mario Bros did this. I am just interested in ways to recreate an effect like it. This week I have exams, so I don't have time to work on the code. After this week I will spend a lot of time on it. But I am constantly thinking about it, so that's why I will be checking comments etc. I just won't be looking at the code since it might be too time-consuming.
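
    For what it's worth, one way to combine the two effects (not necessarily how New Super Mario Bros does it) is to run the spring-based columns from the tutorial for the splashes and add the sine-based background wave only at render time, so the constant waves never feed back into the spring simulation. A rough sketch in C++ (the question is XNA/C#, but the idea carries over directly); all constants are made up and would need tuning:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // Spring-based water columns for splashes, plus a purely cosmetic sine term
        // for the constant background waves, added only when the surface is drawn.
        struct WaterColumn {
            float height   = 0.0f;  // displacement from rest, driven by splashes
            float velocity = 0.0f;
        };

        class WaterSurface {
        public:
            explicit WaterSurface(std::size_t columns) : cols_(columns) {}

            void Splash(std::size_t index, float speed) {
                if (index < cols_.size()) cols_[index].velocity += speed;
            }

            void Update(float dt) {
                const float k = 40.0f, damping = 2.0f, spread = 20.0f;
                for (auto& c : cols_) {  // each column springs back toward rest height
                    float accel = -k * c.height - damping * c.velocity;
                    c.velocity += accel * dt;
                    c.height   += c.velocity * dt;
                }
                for (std::size_t i = 1; i + 1 < cols_.size(); ++i) {  // spread to neighbours
                    cols_[i].velocity += spread * dt *
                        (cols_[i - 1].height + cols_[i + 1].height - 2.0f * cols_[i].height);
                }
                time_ += dt;
            }

            // Height actually drawn: spring height plus the constant background wave.
            float RenderHeight(std::size_t i) const {
                const float amplitude = 3.0f, frequency = 0.35f, speed = 2.0f;
                return cols_[i].height +
                       amplitude * std::sin(frequency * static_cast<float>(i) + speed * time_);
            }

        private:
            std::vector<WaterColumn> cols_;
            float time_ = 0.0f;
        };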

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data's influence on the outcome of the US Presidential election is worth a good look, because a) it's a harbinger of things to come, and b) it's an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social. Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, and in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from who, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate's focus on appearing on comedy and entertainment shows, and local radio morning shows, that's where the data sent them to reach the voters most likely to turn out for them. And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, campaign experience, all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, "We ran the election 66,000 times every night." Which brings us to your organization. If you're starting to feel like the battle cry of "but this is the way we've always done it" is putting you in an extremely vulnerable position, you're right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, "What happened?" @mikestiles Photo: stock.xchng

    Read the article

  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning's keynote.  Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code.  I'm not sure if IT Pros are that easily confused or why you would need such a disclaimer.  Developers do real work, IT Pros just play with toys (just kidding). The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets.  Specifically I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features.  Even though they have been available on other platforms before, I think Microsoft really got them right. Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key.  My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW!  I also can't wait to get rid of dual booting just to run Hyper-V images when developing. The morning continued with a session on Metro XAML development with Tim Heuer.  While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012.  Finding out that Blend is now integrated with VS2012, after working with them as separate applications, was an encouraging start. Moving on to Metro, he introduced the nugget that WinRT is async everywhere.  How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform.  Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of plumbing that has been required in the past for asynchronous transactions. Tim also related that since the Metro framework is relatively small and most apps will use a significant amount of it, the entire surface is referenced by default.  This is a contrast to adding namespaces and assemblies one after another as we normally do. This was such a power-packed session that I can't detail it all here, so here is the teaser list: new icons in VS2012 for extension methods; emulator/simulator testing features for gestures; portable class libraries; XAML no longer managed code; and so much more…   del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class's constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it, and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this? I could keep the current approach where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class's constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. The Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, will find its location in that graph and check if the user requested an override to any of the constructor arguments. Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user. Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
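
    Since the question is noted to be largely language-independent, here is a rough sketch of the second option in C++: constructors receive their dependencies already built, and a small controller/builder assembles the graph bottom-up, consulting a user-override table keyed by a path into the graph before falling back to defaults. All names and the override format are illustrative only:

        #include <map>
        #include <memory>
        #include <string>

        // Hypothetical three-level graph: Outer contains Middle, Middle contains Inner.
        struct Inner {
            explicit Inner(int levels) : levels(levels) {}
            int levels;
        };

        struct Middle {
            Middle(std::unique_ptr<Inner> inner, double scale)
                : inner(std::move(inner)), scale(scale) {}
            std::unique_ptr<Inner> inner;
            double scale;
        };

        struct Outer {
            explicit Outer(std::unique_ptr<Middle> middle) : middle(std::move(middle)) {}
            std::unique_ptr<Middle> middle;
        };

        // Controller layer: builds the graph from the bottom, checking overrides first.
        class Builder {
        public:
            explicit Builder(std::map<std::string, double> overrides)
                : overrides_(std::move(overrides)) {}

            std::unique_ptr<Outer> BuildOuter() const {
                auto inner  = std::make_unique<Inner>(
                    static_cast<int>(Param("middle.inner.levels", 3.0)));  // C++14 for make_unique
                auto middle = std::make_unique<Middle>(std::move(inner),
                                                       Param("middle.scale", 1.0));
                return std::make_unique<Outer>(std::move(middle));
            }

        private:
            double Param(const std::string& path, double default_value) const {
                auto it = overrides_.find(path);
                return it != overrides_.end() ? it->second : default_value;
            }
            std::map<std::string, double> overrides_;
        };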

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I’ve taken a good hard look at JavaScript. I’ve used it before but mostly in the capacity of web design. Using JQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I’m finding as I use it more is that I may have been wrong about my assumptions about it. Let me explain.   I enjoyed doing cool stuff with JQuery but the limited experience with JavaScript as a language coupled with the bad things that I heard about it led me to not have any real interest in it. However, JavaScript is ubiquitous on the web and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.   Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following: The global object null vs. undefined truthy and falsy limited (nearly nonexistent) scoping ‘==’ and ‘===’ (I just don’t get the reason for coercion)   However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I’m not going to go into a huge dissertation on what I like about it, but some things include: Object literal notation dynamic typing functional style (JavaScript: The Good Parts describes it as LISP in C clothing) JSON (better than XML) There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.   I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem’s solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another ‘arrow’ to the ‘quiver’, so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.   Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • Styling Windows Phone Silverlight Applications

    - by Tim Murphy
    If you have not developed with styles in Silverlight/XAML then it can be challenging, and resources can be sparse depending on how deep you get.  One thing that you need to understand is at what level you can apply styles and how much they can cascade.  What I am finding is that this doesn't go to the level that we are used to in HTML and CSS. While styles can be defined at a page level, if you want to share styles throughout your application they should be defined in the App.xaml file.  This is of course analogous to placing a style in your HTML file versus an external CSS file.  This is the type of style I will concentrate on in this post. The first thing to look at is how styles associate with elements.  TargetType defines the object type that your style will apply to.  In the example below the style is targeting the TextBlock object type.

        <Style x:Key="TextBlockSmallGray" TargetType="TextBlock">

    Next we use a Setter which allows you to apply values for specific attributes of the target object type.  The setters can be a simple value or complex.  The first example here is simply applying a color to the background property of the target.

        <Setter Property="Background" Value="White"/>

    The second setter example here is for the same property, but we are applying the definition of a LinearGradientBrush.

        <Setter Property="Background">
            <Setter.Value>
                <LinearGradientBrush>
                    <GradientStop Offset="0" Color="Black"/>
                    <GradientStop Offset="1" Color="White"/>
                </LinearGradientBrush>
            </Setter.Value>
        </Setter>

    The last thing I want to cover here is that you can leverage the system styles and then override or extend them.  The BasedOn attribute of the Style tag allows this sort of inheritance.  In the example below I am going to start with the PhoneTextTitleStyle and then override properties as needed.

        <Style x:Key="TextBlockTitle" BasedOn="{StaticResource PhoneTextTitle1Style}" TargetType="TextBlock">

    So now that we have our styles defined, applying them is fairly straightforward.  Add the style name as a static resource to the style property of the element in your page and off you go.

        <Grid x:Name="LayoutRoot" Style="{StaticResource PageGridStyle}">

    So this is one step in creating consistency in your application's look.  In future posts I will dig a little deeper. del.icio.us Tags: windows phone 7,mobile development,windows phone 7 development,.NET,software development,design,UX

    Read the article

  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL, is obviously a better way. Sometimes use of a simple C function pointer is much, much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++." I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs C++; this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11. Much like when you say C nowadays, you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What good means is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: Why would I learn C++11? Update: My original question in short was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So, I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate someone pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like a "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument. P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.
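
    For readers weighing the same decision, a small, neutral illustration of the kind of change under discussion: a C++11 lambda, auto and a range-based for doing work that C or C++98 would hand to a function pointer or a hand-written functor (standard library only, nothing project-specific assumed):

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> values = {4, 1, 3, 2};

            // Lambda as an inline comparison; previously a free function or functor.
            std::sort(values.begin(), values.end(),
                      [](int a, int b) { return a > b; });  // descending

            for (auto v : values) {  // range-based for with auto
                std::cout << v << ' ';
            }
            std::cout << '\n';
            return 0;
        }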

    Read the article

  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4 GHz with 3 GB RAM and 32-bit Windows 7. Ants Profiler is really easy to use. So let's try it out. The Ants Profiler does want to start the profiled application on its own, which seems to be rather common. I did choose Method Level timing of all managed methods. In the configuration menu I did want to get all call stacks to get full details. Once this is configured you are ready to go. After that you can select the Method Grid to view Wall Clock Time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else. From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This does really look nice. But did you see the size of the scroll bar in the method grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar count I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that the Ants Profiler filters away far too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then Ants Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
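
    The arithmetic in the example above can be sanity-checked with a few lines of code. This sketch (purely illustrative, written in C++) counts how many distinct output levels survive when the 256 sRGB red values are gamut-mapped with an assumed scale factor of 246/255, first through an 8-bit pipeline and then through a 10-bit one; the factor simply stands in for a real ICC correction:

        #include <cmath>
        #include <iostream>
        #include <set>

        // Count distinct output levels after gamut-mapping an 8-bit red channel,
        // using an illustrative scale factor rather than a measured ICC profile.
        int main() {
            const double scale = 246.0 / 255.0;  // e.g. sRGB 255 maps to native 246 (in 8-bit terms)
            std::set<int> out8, out10;

            for (int v = 0; v <= 255; ++v) {
                out8.insert(static_cast<int>(std::lround(v * scale)));       // 8-bit pipeline
                out10.insert(static_cast<int>(std::lround(v * 4 * scale)));  // 10-bit pipeline (0..1023)
            }
            std::cout << "8-bit pipeline keeps  " << out8.size()  << " of 256 levels\n";
            std::cout << "10-bit pipeline keeps " << out10.size() << " of 256 levels\n";
            return 0;
        }

    Under these assumptions the 8-bit path retains 247 distinct levels while the 10-bit path retains all 256, which is exactly the banding argument made in the question.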

    Read the article

  • Feedback from SQLBits 8

    - by Peter Larsson
    This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I did a presentation on Saturday. Getting to Brighton was easy. I drove to Copenhagen airport at 0415, flew at 0605 and arrived at Gatwick at 0735. Then I took the direct train to Brighton and showed up at 0830, just one hour before presenting. This was the easy part. Getting home was much worse. The presentation ended at 1030 and I had to rush to the train station to get back to London and change to the tube for Heathrow. I made it to the gate just 15 seconds before closing. That included a half-mile run in the airport… Anyway, yesterday I got the feedback for my presentation. It does look good, especially since English is not my first language. This is the first graph. It seems to be just halfway between the conference average and the best session. I can live with that. The second graph shows more detail about attendees' voting. It also looks acceptable. A wider spread for the 9's, but that is an inevitable effect of how attendees perceive the session. I did get a lot of 8's, and the lower grades in descending order. The two people voting 4 and 5 didn't say why they voted that way, so I don't know how to remedy it. The third graph is about each category of votes. Again, I find this acceptable. The Session abstract and Speaker's knowledge categories seem to follow attendees' expectations compared to the conference average. I seem to have met the attendees' expectations (and some more) for the other four categories, also compared to the conference average. Since this did encourage me, I believe I will present some more at future meetings. I do have a new presentation about something all developers are doing every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned! //Peter

    Read the article

  • A little gem from MPN – FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem Architectural Guidance for Migrating Applications to Windows Azure Platform (your company and hence your Live ID need to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :) Course Structure The course is divided into eight modules.  Each module explores a different factor that needs to be considered as part of the migration process. Module 1:  Introduction:  This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers. Module 2:  Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform. Module 3:  Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods. Module 4:  Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio. Module 5:  Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs. Module 6:  Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control Benefits, and a walkthrough of Windows Identity Foundation. Module 7:  Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud. Module 8:  Summary: This section provides an overall review of the course.

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now using open source tools for open data development significantly reduces the coding needed.  Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open-platform and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process at both an introductory conceptual level and then a deep dive into the programming needed using JDeveloper, Oracle BPM composer and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Also, existing data sources can be integrated using open data formats, either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool.  XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements created to support the data access.   Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form filling library.  Again, minimal Java coding is needed to associate the XML source content with the PDF named fields.  The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates.  This dramatically simplifies development work as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews.  This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge.  Most actions are menu-driven, while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS.   Additional existing Oracle online training resources can be referenced on Oracle BPM and WLS that cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog we want to focus on the content lifecycle and how important it is to have the tools in place to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick.

    Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high-value information, helping workers get their jobs done with greater accuracy and in less time, is exciting. It is easy to quickly start producing content; the challenge is ensuring that the environment is still easy to navigate and use in the third week and during the third year. What can be done to bridge this gap?

    Over the next few blog entries let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead, and it is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below.

    Content Lifecycle Stages

    1. Request - Understand the education, purpose, resource and success criteria for content
    2. Create - Determine access and workflow for content
    3. Manage - Understand ownership and review cycles
    4. Retire - Act on thresholds established during the request stage

    Within each stage we will also elaborate as to:

    1. Why - why would we entertain doing this?
    2. How - the steps that are needed to make it happen
    3. Impact - what is the net benefit or loss based on the process

    Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each that is needed to make meaningful gains in user experience and in productivity when searching for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for the webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!
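    As a purely illustrative aside on the "use technology, not users" point: the Retire stage boils down to acting automatically on thresholds agreed at the Request stage. A toy sketch of that check might look like the following; the class, field names and 180-day threshold are hypothetical and not part of any Oracle WebCenter API.

        import java.time.LocalDate;
        import java.time.temporal.ChronoUnit;

        public class RetirementCheckSketch {
            // Hypothetical threshold agreed at the Request stage: review after 180 days.
            private static final long REVIEW_AFTER_DAYS = 180;

            static boolean dueForReview(LocalDate lastReviewed) {
                return ChronoUnit.DAYS.between(lastReviewed, LocalDate.now()) > REVIEW_AFTER_DAYS;
            }

            public static void main(String[] args) {
                LocalDate lastReviewed = LocalDate.now().minusDays(200);
                if (dueForReview(lastReviewed)) {
                    System.out.println("Flag content for owner review or retirement");
                }
            }
        }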

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, presents a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now!

    Sessions

    Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships - Tom Plunkett, lead author of the Oracle Big Data Handbook. What does it take to build an award-winning big data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins]

    Getting Value from Big Data Variety - Richard Tomlinson, Director, Product Management, Oracle. Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins]

    How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture - Oracle ACE Director Kevin McGinley. More and more organizations are realizing the value big data technologies contribute to the return on investment in analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified analytics layer can help bridge the divide in modern data architectures. This session examines how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified analytics layer, and the benefits and use cases for doing so. [30 mins]

    Oracle Data Integrator 12c As Your Big Data Data Integration Hub - Oracle ACE Director Mark Rittman. Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, can also load, transform and orchestrate data loads to and from big data sources. In this session we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL offloading to a Hadoop cluster, with tips and techniques on real-time capture into a Hadoop data reservoir and the techniques and limitations of performing ETL on big data sources. [90 mins]

    Register now!
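    For a sense of the kind of Hive access an ODI12c mapping generates and orchestrates under the covers, here is a minimal hand-written sketch that queries HiveServer2 over JDBC; the host, port, credentials and table are placeholders, and the hive-jdbc driver jar is assumed to be on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class HiveQuerySketch {
            public static void main(String[] args) throws Exception {
                // Register the HiveServer2 JDBC driver (hive-jdbc jar assumed on the classpath).
                Class.forName("org.apache.hive.jdbc.HiveDriver");
                // Placeholder endpoint: host, port and database depend on the cluster.
                String url = "jdbc:hive2://bigdata-node:10000/default";
                try (Connection conn = DriverManager.getConnection(url, "hive", "");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT customer_id, SUM(amount) FROM web_orders GROUP BY customer_id")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
                    }
                }
            }
        }

    In an ODI12c flow this sort of HiveQL would be generated from the mapping and pushed down to the Hadoop cluster by the relevant knowledge module rather than written by hand.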

    Read the article
