Search Results

Search found 6235 results on 250 pages for 'hdd low'.

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected in the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects.

    Raymond James is a diversified financial services company which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement thanks to Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems, including Oracle Database, HP NonStop, and Microsoft SQL Server, into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9-second median latency (from source commit to ODS target commit). The ODS solution delivers high-quality, accurate data for consuming applications such as Raymond James’ next-generation client and portfolio management systems, as well as real-time operational reporting. It enables timely information for making better decisions.

    There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to use CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes, and these processes have hardly ever failed: the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.)

    The other winner this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications onto a new-world application set based on Oracle and consolidating all BI onto a single Oracle platform. The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal in deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 minutes). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis.

    As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award. Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld on how they are creating more value for their organizations.

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely coupled, “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning, and I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time. 2. That which is born or produced with another. 3. The act of growing together. The brief Wikipedia page about Connascent Software Components says that: “Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems.” The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name: multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later.

    Connascence of Type: multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion.

    Connascence of Meaning: multiple components must agree on the meaning of particular values, e.g. that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”.

    Connascence of Position: multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.: eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification"); The more parameters there are, the stronger the connascence of position is between the component and its callers. In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. (A small sketch of this refactoring follows at the end of this post.) This points out two “rules” of connascence: the Rule of Degree (the acceptability of connascence is related to the degree of its occurrence) and the Rule of Locality (stronger forms of connascence are more acceptable if the elements involved are closely related; for example, positional arguments in private methods are less problematic than in public methods).

    Connascence of Algorithm: multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.

    Those are the “static” forms of connascence. There are also “dynamic” forms, including…

    Connascence of Execution: the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method.

    Connascence of Timing: the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but I assume it’s largely about threading.

    Connascence of Identity: multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class: if you call the .RaiseSalary method on one and then the .Pay method on the other, does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn. See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (that link is failing at the time I write this, so I had to go to the Google cache of the page.)
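    To make the “meaning” and “position” ideas concrete, here is a small C# sketch of replacing magic values with an enum and applying the “introduce parameter object” refactoring. The EMailer, EmailMessage, and CustomerKind names (and the example.com addresses) are made up for illustration; they are not from Weirich’s talk or Page-Jones’ book.

        using System;

        // Connascence of meaning -> connascence of name: callers agree on a named
        // enum value instead of remembering that "1" means normal and "2" preferred.
        public enum CustomerKind
        {
            Normal = 1,
            Preferred = 2
        }

        // "Introduce parameter object": callers now agree on property names and types
        // (weaker connascence) rather than on argument order (stronger connascence).
        public class EmailMessage
        {
            public string From { get; set; }
            public string To { get; set; }
            public string Subject { get; set; }
            public string Body { get; set; }
        }

        public class EMailer
        {
            // Before: Send(string from, string to, string subject, string body)
            // After: each value's role is explicit at the call site.
            public void Send(EmailMessage message)
            {
                Console.WriteLine($"Sending '{message.Subject}' from {message.From} to {message.To}");
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var mailer = new EMailer();
                mailer.Send(new EmailMessage
                {
                    From = "sender@example.com",
                    To = "recipient@example.com",
                    Subject = "Order completion notification",
                    Body = "Your order is complete"
                });

                CustomerKind kind = CustomerKind.Preferred; // no magic "2" anywhere
                Console.WriteLine(kind);
            }
        }

    The call site is longer, but a reader (and the compiler) can now tell sender from receiver and subject from body without counting commas.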

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience A User-centered Research and Design Process The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products.  These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace. What Is a CIF Usability Test? CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers. Why Do We Perform CIF Testing? The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software. CIF Testing for Fusion Applications Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole. 
    Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

    Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs

    Fusion Supplier Portal CIF Test

    The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

    - Manage payments – view payments
    - Manage invoices – view invoice status and create invoices
    - Manage account information – create new contact, review bank account information
    - Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
    - Manage purchase orders (PO) – view history of PO, request change to PO, find orders
    - Manage negotiations – respond to request for a quote, check the status of a negotiation response

    These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

    Figure 2. Fusion Supplier Portal

    Next Studies

    We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list:

    Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

    Brooks’ Law: Adding manpower to a late software project makes it later.

    Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

    Law of Demeter: Each unit should only talk to its friends; don't talk to strangers.

    Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)

    Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. O'Toole's Corollary: The perversity of the Universe tends towards a maximum.

    Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”)

    Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

    Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet.

    Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.

    Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program.

    Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it. (See the sketch at the end of this post.)

    Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.

    Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.

    Occam's Razor: The simplest explanation is usually the correct one.

    Parkinson’s Law: Work expands so as to fill the time available for its completion.

    Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?)

    Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.

    Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.

    Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.

    Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper, there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out actual rules.

    Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply.

    Wheeler’s Law: All problems in computer science can be solved by another level of indirection... except for the problem of too many layers of indirection. Wheeler also said, “Compatibility means deliberately repeating other people's mistakes.”

    The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back.

    Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.

    Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”
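    Since the Liskov Substitution Principle is the entry above that seems to trip people up most often in practice, here is a small C# sketch of what a violation looks like. The Rectangle/Square classes are the textbook illustration, not something from any of the sources quoted above.

        using System;

        // Square "is a" Rectangle mathematically, but as a subclass it breaks
        // code written against Rectangle.
        class Rectangle
        {
            public virtual int Width { get; set; }
            public virtual int Height { get; set; }
            public int Area => Width * Height;
        }

        class Square : Rectangle
        {
            // Keeping the sides equal silently changes behavior callers rely on.
            public override int Width
            {
                get => base.Width;
                set { base.Width = value; base.Height = value; }
            }
            public override int Height
            {
                get => base.Height;
                set { base.Width = value; base.Height = value; }
            }
        }

        static class Program
        {
            // Written against the base class; assumes setting Height leaves Width alone.
            static int Stretch(Rectangle r)
            {
                r.Width = 5;
                r.Height = 2;
                return r.Area;
            }

            static void Main()
            {
                Console.WriteLine(Stretch(new Rectangle())); // 10, as the caller expects
                Console.WriteLine(Stretch(new Square()));    // 4: the substitution broke the caller's assumption
            }
        }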

    Read the article

  • Recent Innovations to ILOM

    - by B.Koch
    by Josh Rosen. If you are wondering how Oracle can make some of the most advanced, reliable, and fault-tolerant servers on the market, look no further than Oracle Integrated Lights Out Manager, or ILOM. We build ILOM into every server we create, from Oracle x86 Systems such as the X3-2 to the SPARC T-Series family. Oracle ILOM is an embedded service processor, but it's really more than that. It's a computer within a computer. It's smart, it's tightly integrated into all aspects of the server's operation, and it's a big reason why Oracle servers are used for some of the most mission-critical workloads out there.

    To understand the value of ILOM, there is no better place to start than its fault management capability. We have taken the sophisticated fault management architecture from Solaris, developed and refined over a decade, and built it into each and every ILOM. ILOM detects a potential issue at its earliest stage, watching low-level sensors. If the root cause of a problem is not clear from a single error reading, ILOM will look for other clues and combine multiple pieces of information to correctly identify a failing component. ILOM provides peace of mind. We tailor our fault management for each new server platform that we produce. You can rest assured that it's always actively keeping the server healthy. And if there is a problem, you can be confident it will let you know by sending you a notification by e-mail or trap.

    We also heard IT managers tell us they needed a Ph.D. in computer engineering to manage today's servers. It doesn't have to be that way. Thanks to the latest innovations to Oracle ILOM, we present hardware inventory and status in a way that makes sense – to anyone. Green means everything is healthy and red means something is wrong. When a component needs to be replaced, a clear message indicates where the problem is and points you at a knowledge article about that problem. It's that simple. Simpler management and simple interfaces mean reduced complexity and lower costs to manage. And we know that's really important.

    ILOM does all this while also providing the advanced service processor features you depend on for managing enterprise-class systems. You can remotely control the server power, interact with a virtual video console for the server, and mount media on the server remotely. There is no need to spend money on a KVM switch to get this functionality. And when people hear how advanced ILOM is, they can't believe ILOM is free. All features are enabled and included with each Oracle server that you buy. There are no advanced licenses you need to purchase or features to unlock. Configuring ILOM has also never been easier. It is now possible to configure almost all aspects of the server directly from ILOM. This includes changing BIOS settings, persistently modifying boot order, and optimizing power settings -- all directly from ILOM.

    But Oracle's innovation does not stop with ILOM. Oracle has engineered Oracle Enterprise Manager Ops Center to integrate directly with ILOM, providing centralized management across all of our servers. Ops Center will discover each of your Oracle servers over the network by searching for ILOMs. When it finds one, it knows how to communicate with ILOM to monitor and configure that server, from application to disk. Since every server that Oracle produces, from x86 Systems to SPARC T-Series up and down the line, comes with Oracle ILOM, you can manage all Oracle servers in the same way. And while all of our servers may have different components on the inside, each with their specialized functions, the way you integrate them and the way you monitor and manage them is exactly the same.

    Oracle ILOM is state of the art. If you are looking for a server that makes systems management simple and is easy to integrate and maintain, check out the latest advances to Oracle ILOM. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • OTN Architect Day Headed to Reston, VA - May 16

    - by Bob Rhubart
    In 2011 OTN Architect Day made stops in Chicago, Denver, Phoenix, Redwood Shores, and Toronto. The 2012 series begins with OTN Architect Day in Reston, VA on Wednesday, May 16. Registration is now open for this free event, but don't get caught napping -- seating is limited, and the event is just 5 weeks away. The information below reflects the most recent updates to the event agenda, including the addition of Oracle ACE Director Kai Yu as the guest keynote speaker. Kai is Senior System Engineer / Architect at Dell, Inc., and has been very busy of late as a speaker at various industry and Oracle User Group events. I'm very happy Kai has agreed to make the trek from his hometown in Austin, TX to share his insight at the Architect Day event in Reston. If you're in the area, put this one on your calendar. You won't be sorry.

    Venue: Sheraton Reston Hotel, 11810 Sunrise Valley Drive, Reston, VA 20191

    Event Agenda

    8:30 am - 9:00 am: Registration and Continental Breakfast

    9:00 am - 9:15 am: Welcome and Opening Comments

    9:15 am - 10:00 am: Engineered Systems: Oracle's Vision for the Future | Ralf Dossman
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and the lowest cost of ownership over a 3- or 5-year basis of any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.

    10:00 am - 10:30 am: High Availability Infrastructure for Cloud Computing | Kai Yu
    Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and the related best practices that system architects should consider in cloud infrastructure design to ensure high availability.

    10:30 am - 10:45 am: Break

    10:45 am - 11:30 am: Breakout Sessions (pick one)
    Innovations in Grid Computing with Oracle Coherence | Bjorn Boe -- Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented, and how Oracle is integrating Coherence into its Enterprise Java stack.
    Cloud Computing - Making IT Simple | Scott Mattoon -- The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.

    11:30 am - 12:15 pm: Breakout Sessions (pick one)
    Oracle Enterprise Manager | Joe Diemer -- Oracle Enterprise Manager (EM) provides complete lifecycle management for the cloud - from automated cloud setup to self-service delivery to cloud operations. In this session you'll learn how to take control of your cloud infrastructure with EM features including Consolidation Planning and Self-Service provisioning with Metering and Chargeback. Come hear how Oracle is expanding its management capabilities into the cloud!
    Rationalization and Defense in Depth - Two Steps Closer to the Clouds | Dave Chappelle -- Security represents one of the biggest concerns about cloud computing. In this session we'll get past the FUD with a real-world look at some key issues. We'll discuss the infrastructure necessary to support rationalization and security services, explore architecture for defense-in-depth, and deal frankly with the good, the bad, and the ugly in Cloud security.

    12:15 pm - 1:15 pm: Lunch

    1:40 pm - 2:00 pm: Panel Discussion - Q&A

    2:00 pm - 2:45 pm: Breakout Sessions (pick one)
    21st Century SOA | Peter Belknap -- Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission-critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's unified technology stack.
    Oracle Cloud Reference Architecture | Anbu Krishnaswamy -- Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation gives an overview of Oracle's Cloud Reference Architecture, which is part of the Cloud Enterprise Technology Strategy (ETS). Concepts covered include common management layer capabilities, service models, resource pools, and use cases.

    2:45 pm - 3:00 pm: Break

    3:00 pm - 4:00 pm: Roundtable Discussions

    4:00 pm - 4:15 pm: Closing Comments & Readouts from Roundtable

    4:15 pm - 5:00 pm: Cocktail Reception / Networking

    Session schedule and content subject to change.

    Read the article

  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (lines of code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce you to a code metric called the number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want to get a better idea about the context of this metric and LoC, then please read my first posting about LoC.

    What are IL instructions?

    When code written in some .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are executed later by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – it is a more general language than, for example, C# or VB.NET. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are small and highly general – we don't want a very rich low-level language, because it would execute more slowly than a more general one. Every method or property call in a .NET Framework language corresponds to a set of IL instructions. There is no 1:1 relationship between a line in a high-level language and a line in IL assembler; there are more IL instructions than lines of C# code, for example.

    How many IL instructions are there?

    I have no general answer, because it really depends on your code. Here you can see some metrics from my current community project, which is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use as a target; it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions.

    Why should I measure the number of IL instructions?

    Just take a look at the chart above. The compiler does something that you cannot see – it compiles your code to IL. This is not an intuitive process, because you usually cannot say exactly what the end result will be. You know it at a general level, but you don't know it exactly. Therefore we can expect some surprises, and that's why we should measure the number of IL instructions. For example, you may find a better solution for some method in your source code. It looks nice, it works nice, and everything seems to be okay. But on a server under load your fix may be way slower than the previous code. Although you minimized the number of lines of code, you may have increased the number of IL instructions.

    How to measure the number of IL instructions?

    My choice is NDepend, because Visual Studio is not able to measure this metric. The steps are easy. Open your NDepend project or create a new one, and add all your application assemblies to the project (you can also add a Visual Studio solution to the project). Run the project analysis and wait until it is done. You can see the overall stats from the global summary window. This is the same window I used to read the LoC and number-of-IL-instructions metrics for my chart. Meanwhile I made some changes to my code (I enabled advanced caching for the events and event registrations module) and then ran the code analysis again to get results for this section of this posting. NDepend is also able to tell you exactly which parts of the code have problematically many IL instructions. The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line "Methods too big (NbILInstructions)" you can see all the problematic members of classes in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions – just take a look at the first two methods in this top 10. Also note the query box: NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you.

    What is a good result?

    As you can see from the query window, a member should have at most 200 IL instructions. Of course, as always, the fewer instructions you have, the better your code performs. I don't mean small differences here, but big ones. For example, take a look at the first method in the warnings list. The number of IL instructions it has is huge. And believe me – this method looks awful.

    Conclusion

    The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level to find overly long methods, you can use the LoC metric, because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool, because it has a lot of metrics to offer.
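    If you don't have NDepend at hand and just want a rough, programmatic feel for how much IL a method compiles to, reflection can hand you the raw IL bytes. This is only a sketch of the idea, and the IlSizeDemo/SampleMethod names are made up; note that the byte count is not the same thing as the instruction count NDepend reports, since IL instructions vary in length.

        using System;
        using System.Linq;
        using System.Reflection;

        static class IlSizeDemo
        {
            // A hypothetical method to inspect.
            static int SampleMethod(int a, int b)
            {
                var numbers = Enumerable.Range(0, a);
                return numbers.Sum() + b;
            }

            static void Main()
            {
                MethodInfo method = typeof(IlSizeDemo).GetMethod(
                    "SampleMethod", BindingFlags.NonPublic | BindingFlags.Static);

                // GetMethodBody().GetILAsByteArray() returns the raw IL of the method body.
                byte[] il = method.GetMethodBody().GetILAsByteArray();

                // Only a rough proxy: one IL instruction is 1..N bytes, so a real
                // instruction count requires decoding the opcodes (as NDepend does).
                Console.WriteLine($"{method.Name}: {il.Length} IL bytes");
            }
        }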

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profilers, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem". "they are simple to use, but the results are hard to understand" "Complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: Profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle", with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.
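    For what it's worth, here is roughly what the "logging statements" approach that so many respondents reported looks like in practice. It's a hedged C# sketch (the DoWork method and the log format are placeholders), and it also illustrates why the approach feels laborious next to a profiler: every timing has to be added, collected, and later removed by hand, and you only ever time what you remembered to wrap.

        using System;
        using System.Diagnostics;

        static class ManualTimingExample
        {
            static void Main()
            {
                // Manual instrumentation: wrap the code you suspect, log the elapsed
                // time, then remember to take it all out again later.
                var sw = Stopwatch.StartNew();
                DoWork();
                sw.Stop();
                Console.WriteLine($"[PERF] DoWork took {sw.ElapsedMilliseconds} ms");

                // A profiler would give method- and line-level timings for DoWork and
                // everything it calls without touching the code; this only times the
                // block you wrapped.
            }

            static void DoWork()
            {
                // Placeholder for the code being measured.
                System.Threading.Thread.Sleep(50);
            }
        }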

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell whether it is well performing or not by reading it. You need to execute it with some sort of tracing, or even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If for some reason you skip the measure step and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function therefore has not many values but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years is a very short time when the first customer tries to run your heavily distributed networking application over a slow WiFi network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WiFi example the model assumed a low-latency, high-bandwidth LAN connection. When this assumption became wrong, the system showed a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resources like font objects. For this undertaking we create a cache class with the UI themes we want to support. Since fonts are expensive objects, we create each one on demand, the first time its theme is requested. A simple example of a Theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                { "Default", new Theme { Color = Color.AliceBlue }},
                { "Theme12", new Theme { Color = Color.Aqua }},
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache should create font objects only once, since on the first retrieval of a Theme object the font is added to it. When we let the application run, it should print "Creating new font" only once. Right? Wrong!

    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary (a sketch of that second option follows at the end of this post). Our cache, as it currently stands, is actually a non-caching cache. The funny thing was that I found this out with a profiler, by looking at which objects were finalized. I found that way too many font objects were being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that:

    - the cache was never needed, since I found no perf issue due to the creation of font objects;
    - the cache was never profiled to see whether it brought any performance gain;
    - to make the cache beneficial, it needs to be accessed much more often.

    That was the story of the non-caching cache. Next time I will write something about measuring.
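    To close the loop on the fix mentioned above (write the mutated Theme back into the dictionary so the cached Font is not lost), a minimal sketch of the corrected Get method might look like this, assuming Theme stays a struct and everything else in the ThemeCache class above stays the same:

        public static Theme Get(string theme)
        {
            Theme cached = _Cache[theme];
            if (cached.Font == null)
            {
                Console.WriteLine("Creating new font");
                cached.Font = new Font("Arial", 8);
                _Cache[theme] = cached; // write the mutated copy back, so the font really is cached
            }
            return cached;
        }

    With this single extra line, the second call to ThemeCache.Get("Theme12") finds the Font already set and "Creating new font" is printed only once, which is what the original author expected.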

    Read the article

  • DVD not detected?

    - by Benjamin
    As i insert a DVD in my drive (on a laptop) using Kubuntu 12.04, the DVD is not detected. The drive takes the DVD, I can hear it munching on it for a while, and then nothing. Even ejecting the DVD becomes a hassle, I need to do that at boot time otherwise the OS won't allow me to eject. Kubuntu 12.04 used to read and mount the same DVDs just fine a week ago. Edit: since I am able to boot from CD, can I safely assume the issue is not a hardware issue? How can I fix this? wodim --devices returns: wodim: Overview of accessible drives (1 found) : ------------------------------------------------------------------------- 0 dev='/dev/sg1' rwrw-- : 'Optiarc' 'DVD+-RW AD-7640A' ------------------------------------------------------------------------- sudo lshw -class disk returns: *-cdrom description: DVD-RAM writer product: DVD+-RW AD-7640A vendor: Optiarc physical id: 0.0.0 bus info: scsi@3:0.0.0 logical name: /dev/cdrom2 logical name: /dev/cdrw2 logical name: /dev/dvd2 logical name: /dev/dvdrw2 logical name: /dev/sr0 version: JD06 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=open lspci returns: 00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c) 00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c) 00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (secondary) (rev 0c) 00:1a.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03) 00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03) 00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3) 00:1f.0 ISA bridge: Intel Corporation 82801HM (ICH8M) LPC Interface Controller (rev 03) 00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03) 06:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) 08:05.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02) 08:05.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02) 08:05.3 Mass storage controller: O2 Micro, Inc. 
Integrated MS/xD Controller (rev 01) and lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 0c45:63e0 Microdia Sonix Integrated Webcam Bus 003 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader The /etc/modprobe.d/blacklist.conf contains: # This file lists those modules which we don't want to be loaded by # alias expansion, usually so some other driver will be loaded for the # device instead. # evbug is a debug tool that should be loaded explicitly blacklist evbug # these drivers are very simple, the HID drivers are usually preferred blacklist usbmouse blacklist usbkbd # replaced by e100 blacklist eepro100 # replaced by tulip blacklist de4x5 # causes no end of confusion by creating unexpected network interfaces blacklist eth1394 # snd_intel8x0m can interfere with snd_intel8x0, doesn't seem to support much # hardware on its own (Ubuntu bug #2011, #6810) blacklist snd_intel8x0m # Conflicts with dvb driver (which is better for handling this device) blacklist snd_aw2 # causes failure to suspend on HP compaq nc6000 (Ubuntu: #10306) blacklist i2c_i801 # replaced by p54pci blacklist prism54 # replaced by b43 and ssb. blacklist bcm43xx # most apps now use garmin usb driver directly (Ubuntu: #114565) blacklist garmin_gps # replaced by asus-laptop (Ubuntu: #184721) blacklist asus_acpi # low-quality, just noise when being used for sound playback, causes # hangs at desktop session start (Ubuntu: #246969) blacklist snd_pcsp # ugly and loud noise, getting on everyone's nerves; this should be done by a # nice pulseaudio bing (Ubuntu: #77010) blacklist pcspkr # EDAC driver for amd76x clashes with the agp driver preventing the aperture # from being initialised (Ubuntu: #297750). Blacklist so that the driver # continues to build and is installable for the few cases where its # really needed. blacklist amd76x_edac

    Read the article

  • Bring Office 2003 Menus Back to 2010 with UBitMenu

    - by Matthew Guay
    Are you having trouble getting used to the Ribbon interface in Office 2010?  Here’s how you can roll back the clock a bit and bring back the familiar menus and toolbars from 2003. The Office 2007 Ribbon was both praised and criticized.  While many users felt they were more productive with the new interface, others felt frustrated searching for commands they had memorized in older versions of Office.  Now, with Office 2010, the ribbon interface has been brought to every app in the Office suite, and is integrated into many newer programs from Microsoft. If you’re moving from Office 2003, using UBitMenu allows you to add the old familiar menus back along with the new Ribbon interface for an easier learning curve. Also, with the customizability of Office 2010, we can strip away the extra Ribbon tabs to make it more like 2003. Get the 2003 Menus and Toolbars Back in Office 2010 Download UBitMenu (link below), and install as normal.  Make sure all of your Office programs are closed during the installation.  This handy utility is very small, and installed amazingly quick. Open Word, Excel, or PowerPoint and there’s now a new Menu tab beside Home in the Ribbon.  Now you can access all of your favorite old Office commands in the familiar menus, and access many of the newer Office features such as SmartArt.   Here’s a close-up of the toolbar.  Notice that the layout is very similar to that of Word 2003. You can access all of the new Transitions in PowerPoint 2010 from the menu bar.   The menu in Excel even included support for the new PivotTable and PivotCharts Wizard. One problem we noticed was that the toolbars were condensed to a drop-down menu if the Office window was less than 870px wide.  This may be a frustration to users with low-resolution displays, and you might want to use the Office Apps maximized. Get Rid of the Ribbon Now that you’ve got the old menus back, you can get rid of the extra ribbon tabs if you’d like.  Office 2010 lets you customize your ribbon and remove tabs, so let’s get rid of all the other tabs except for our new Menu tab.  In our example we’re using Word, but you can do it in Excel or PowerPoint the same way. Click the File tab and select Options. Alternately, in the Menu tab, select Tools and then Word Options. Select Customize Ribbon on the left sidebar, then uncheck the boxes beside all the ribbon tabs you want to hide on the right.  Click Ok when you’re finished. While you’re at it, you can change the default color scheme as well. Note: The color change will automatically change the color scheme in all of the Office apps, so you’ll only need to do that once. Now the ribbon only has 2 tabs…the File tab for the new Backstage View, and the UBitMenu tab we just installed.  It almost has the appearance Word 2003, but with the new features of Word 2010!  You’ll need to repeat these steps in Excel and PowerPoint if you want to customize their ribbon the same.   Conclusion If you’ve been having a hard time getting used to Office 2010, UBitMenu is a great way to get familiar with the new interface, or simply stay productive with your old tricks.  We do wish it supported the other Office applications like OneNote and Outlook. That doesn’t make it a deal breaker though, it can make the learning curve easier in Word, PowerPoint, and Excel. UBitMenu is free for personal use, and available at a very reasonable price for businesses. If you’re using Office 2007 and not a fan of the Ribbon, UBitMenu works for it as well. 
Download UBitMenu

    Read the article

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet, only over the same local network. Even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet. Why Stream? When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending back mouse, keyboard, and controller input to the other PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room in your house. Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers their own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device. How to Get Started In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.) Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on your host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server. Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too. Improving Performance Here’s what Valve recommends for good streaming performance: Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency. Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough. Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed. 
Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up. In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory. Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.” Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that impressed people most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).   Live Debug is a Nightmare for Cloud Applications When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to rely on logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to the remote process. This is less secure compared with authenticated remote debug, but much easier and simpler to use. Now in Windows Azure SDK 2.2 we can attach to our application on Windows Azure from our local machine, and it's very easy.   How to Use the Remote Debugger First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right-click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug. Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this. Currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "server explorer" window in Visual Studio and expand the "cloud services" node, find the cloud service, role and instance we have just published and want to debug, right-click on the instance and select "attach debugger". Then after a while (depending on how fast our Internet connection to the Windows Azure data center is) Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint. The call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we will find the page was rendered successfully.   What's Under the Hood The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them. 
At the same time, Visual Studio will generate a certificate and include it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system will open the related ports for the remote debugger. Below you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system will copy the remote debug component that matches the version of Visual Studio we are using and start it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.   Summary In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2, the Remote Debugger. It allows us to attach to our application on a Windows Azure virtual machine from the local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish our beta version to the Azure staging slot), rather than for production.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Vacations on Rodrigues 2014

    And now something completely different compared to the usual technical or community related articles here on this blog. Yes, this time I'm writing some lines on my (and my family's) activities during our long weekend stay on Rodrigues. So, please bear with me, it's perhaps a bit more personal... Grab a soda, some popcorn and a cosy place to continue to read. Special promotions during school holidays Originally, our children started to ask more frequently about going on the plane again. Obviously, after their aunty from Germany was around during May, they were really eager to travel again. So, we decided that it might be a great opportunity to book some vacations during their school holidays. And just in time the local hotels and hotel groups started to advertise their special promotions for citizens and residents. After collecting multiple brochures over several days, we got attracted by various hotel packages on Rodrigues - most interestingly, the expenses for the stay and flight ticket were less compared to other resorts here on the main island. As we had already been to Rodrigues back in 2008, we followed up on this idea and got in touch with a couple of travel agencies. Well, I have to report that you should be really careful about the promotions from some of them. We had a very negative experience with Shamal Travel Agency in Quatre Bornes regarding their adverts and the actual price levels and age definition for children. Please, stay away from them if you are interested in transparent cost and services. Anyway, after some arrangements with two other close families we managed to confirm our stay at the Cotton Bay Hotel in Rodrigues. Given that we had already stayed there, the hotel has been renovated recently, and it is under new management, everything looked very promising and relaxed for our vacation. Counting the days... As we had already booked in July, our children were counting down the days. And it got more interesting as soon as they were finally on school holidays. Well, the day arrived and waking them up at 2:30 hrs wasn't a problem after all. Quite the opposite: it was fascinating for us parents to watch them waiting for the transport and later on during the airport transfer. Despite the early hours both didn't fall asleep and it was all so exciting. We are taking the plane! Well organised by the Cotton Bay Hotel Honestly, it was a breeze and a smooth ride during our stay at the hotel. From the airport transfer, the cleanliness of our bungalow, the organisation of our day trips, and the SPA - all very well done and enjoyable. The children had great fun, and although it was a bit too windy to plunge into the pool they had a lot of fun with other activities on the beach and at the Kid's Club. Oh, and we had our private petting zoo with cows, sheep and goats just close to the terrace. Some of us went to check out the SPA facilities and I have to admit that the services regarding Hammam and Sauna are better than at some other hotels in Mauritius. I don't know after how many months or years I was once again enjoying a very hot sauna. Little drawback, but nothing to worry about... There is no cold water or at least ice cubes to cool down the body, but hey, there was a nice breeze coming over the hills. 
Some day trips to mention Based on a friend's recommendation we walked to a "restaurant" called Chez Solange & Robert. Hahaha, the term restaurant is stretched quite a bit in this case, as we enjoyed a great BBQ with fresh lobster, whole fish, and pieces of chicken breast in an open cottage. Just some wooden structure covered with dried palm leaves on the roof - pure island feeling! The other day we went to the Giant Tortoise & Cave Reserve Francois Leguat to observe the giant Aldabra turtles and to visit the Grande Caverne, the biggest limestone cave on the island. Compared to our last visit this was a novelty after checking out the Caverne Partate. The formations of stalactites and stalagmites are very impressive and imaginative. Our guide had lots of funny terms and despite the low light conditions the kids had a great time wandering around on the narrow wooden paths and stairs. And last but not least, we decided to check out the Tyrodrig zip lines... Everyone was allowed to join the trip through the air, and our little ones stayed close to our field guides, but finally went on their own on the very last traversal. Puuuh, it was astonishing to glide over the valley, and for sure something to repeat next time. Impressions of our vacation on Rodrigues 2014   Next stay has been discussed already Oh yes, Rodrigues baby! We are going to come again! Tentative dates have been discussed already and now it's up to us to earn enough for our next holiday on that wonderful remote piece of paradise. Perhaps a little bit longer than this time. We'll see...

    Read the article

  • The Unspoken - The Why of GC Ergonomics

    - by jonthecollector
    Do you use GC ergonomics, -XX:+UseAdaptiveSizePolicy, with the UseParallelGC collector? The gist of GC ergonomics for that collector is that it tries to grow or shrink the heap to meet a specified goal. The goals that you can choose are maximum pause time and/or throughput. Don't get too excited there. I'm speaking about UseParallelGC (the throughput collector) so there are definite limits to what pause goals can be achieved. When you say out loud "I don't care about pause times, give me the best throughput I can get" and then say to yourself "Well, maybe 10 seconds really is too long", then think about a pause time goal. By default there is no pause time goal and the throughput goal is high (98% of the time doing application work and 2% of the time doing GC work). You can get more details on this in my very first blog. GC ergonomics The UseG1GC collector has its own version of GC ergonomics, but I'll be talking only about the UseParallelGC version. If you use this option and want to know what it (GC ergonomics) is thinking, try -XX:AdaptiveSizePolicyOutputInterval=1 This will print out information every i-th GC (above, i is 1) about what GC ergonomics is trying to do. For example,
UseAdaptiveSizePolicy actions to meet *** throughput goal ***
GC overhead (%)
Young generation: 16.10 (attempted to grow)
Tenured generation: 4.67 (attempted to grow)
Tenuring threshold: (attempted to decrease to balance GC costs) = 1
GC ergonomics tries to meet (in order): pause time goal, throughput goal, minimum footprint. The first line says that it's trying to meet the throughput goal. UseAdaptiveSizePolicy actions to meet *** throughput goal *** This run has the default pause time goal (i.e., no pause time goal) so it is trying to reach a 98% throughput. The lines Young generation: 16.10 (attempted to grow) Tenured generation: 4.67 (attempted to grow) say that we're currently spending about 16% of the time doing young GC's and about 5% of the time doing full GC's. These percentages are a decaying, weighted average (earlier contributions to the average are given less weight). The source code is available as part of the OpenJDK so you can take a look at it if you want the exact definition. GC ergonomics is trying to increase the throughput by growing the heap (so says the "attempted to grow"). The last line Tenuring threshold: (attempted to decrease to balance GC costs) = 1 says that the ergonomics is trying to balance the GC times between young GC's and full GC's by decreasing the tenuring threshold. During a young collection the younger objects are copied to the survivor spaces while the older objects are copied to the tenured generation. Younger and older are defined by the tenuring threshold. If the tenuring threshold is 4, an object that has survived fewer than 4 young collections (and has remained in the young generation by being copied to the part of the young generation called a survivor space) is younger and is copied again to a survivor space. If it has survived 4 or more young collections, it is older and gets copied to the tenured generation. A lower tenuring threshold moves objects more eagerly to the tenured generation and, conversely, a higher tenuring threshold keeps copying objects between survivor spaces longer. The tenuring threshold varies dynamically with the UseParallelGC collector. That is different from our other collectors, which have a static tenuring threshold. 
GC ergonomics tries to balance the amount of work done by the young GC's and the full GC's by varying the tenuring threshold. Want more work done in the young GC's? Keep objects longer in the survivor spaces by increasing the tenuring threshold. This is an example of the output when GC ergonomics is trying to achieve a pause time goal:
UseAdaptiveSizePolicy actions to meet *** pause time goal ***
GC overhead (%)
Young generation: 20.74 (no change)
Tenured generation: 31.70 (attempted to shrink)
The pause goal was set at 50 millisecs and the last GC was 0.415: [Full GC (Ergonomics) [PSYoungGen: 2048K->0K(26624K)] [ParOldGen: 26095K->9711K(28992K)] 28143K->9711K(55616K), [Metaspace: 1719K->1719K(2473K/6528K)], 0.0758940 secs] [Times: user=0.28 sys=0.00, real=0.08 secs] The full collection took about 76 millisecs so GC ergonomics wants to shrink the tenured generation to reduce that pause time. The previous young GC was 0.346: [GC (Allocation Failure) [PSYoungGen: 26624K->2048K(26624K)] 40547K->22223K(56768K), 0.0136501 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] so the pause time there was about 14 millisecs and no changes are needed. If trying to meet a pause time goal, the generations are typically shrunk. With a pause time goal in play, watch the GC overhead numbers and you will usually see the cost of setting a pause time goal (i.e., throughput goes down). If the pause goal is too low, you won't achieve your pause time goal and you will spend all your time doing GC. GC ergonomics is meant to be simple because it is meant to be used by anyone. It was not meant to be mysterious, and so this output was added. If you don't like what GC ergonomics is doing, you can turn it off with -XX:-UseAdaptiveSizePolicy, but be pre-warned that you then have to manage the size of the generations explicitly. If UseAdaptiveSizePolicy is turned off, the heap does not grow. The size of the heap (and the generations) at the start of execution is always the size of the heap. I don't like that and tried to fix it once (with some help from an OpenJDK contributor) but it unfortunately never made it out the door. I still have hope though. Just a side note: with the default throughput goal of 98% the heap often grows to its maximum value and stays there. Definitely reduce the throughput goal if footprint is important. Start with -XX:GCTimeRatio=4 for a more modest throughput goal (20% of the time spent in GC). A higher value means a smaller amount of time in GC (as the throughput goal).
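To make the options above concrete, here is a minimal sketch of a launch command that turns all of them on. The application jar is a placeholder; -XX:MaxGCPauseMillis is the standard flag behind the pause time goal discussed above (50 milliseconds, matching the example output), and -XX:GCTimeRatio=4 sets the more modest throughput goal mentioned at the end:
java -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy -XX:AdaptiveSizePolicyOutputInterval=1 -XX:MaxGCPauseMillis=50 -XX:GCTimeRatio=4 -jar MyApp.jar
Leave out -XX:MaxGCPauseMillis to get the default behavior described above (no pause time goal, 98% throughput goal), and drop -XX:GCTimeRatio if footprint is not a concern.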

    Read the article

  • Dynamically switching the theme in Orchard

    - by Bertrand Le Roy
    It may sound a little puzzling at first, but in Orchard CMS, more than one theme can be active at any given time. The reason for that is that we have an extensibility point that allows a module (or a theme) to participate in the choice of the theme to use, for each request. The motivation for building the theme engine this way was to enable developers to switch themes based on arbitrary criteria, such as user preferences or the user agent (if you want to serve a mobile theme for phones for example). The choice is made between the active themes, which is why there is a difference between the default theme and the active themes. In order to have a say in the choice of the theme, all you have to do is implement IThemeSelector. That interface is quite simple as it only has one method, GetTheme, that takes the current RequestContext and returns a ThemeSelectorResult or null if the implementation of the interface does not want to participate in the current request (we'll see an example in a moment). ThemeSelectorResult itself is just a ThemeName string property and an integer Priority. We're using a priority so that an arbitrary number of implementations of IThemeSelector can contribute to the choice of a theme. If you look for existing implementations of the interface in Orchard, you'll find four: AdminThemeSelector: selects the TheAdmin theme with a very high priority (100) if the current request is for a page that is part of the admin. Otherwise, null is returned, which enables other implementations to choose the theme. PreviewThemeSelector: selects the preview theme if there is one, with a high priority (90), and null otherwise. This enables administrators to view the site under a different theme while everybody else continues to see the current default theme. SiteThemeSelector: this is the implementation that is doing what you expect most of the time, which is to get the current theme from site settings and set it with a priority of –5. SafeModeThemeSelector: this is the fallback implementation, which should almost never win. It sets the theme as the safe mode theme, which has no style and just uses the default templates for everything. The priority is very low (-100). While this extensibility mechanism is great to have, I wanted to bring that level of choice into the hands of the site administrator rather than just developers. In order to achieve that, I built the Vandelay Theme Picker module. The module provides administration UI to create rules for theme selection. It provides its own extensibility point (the IThemeSelectionRule interface) and one implementation of a rule: UserAgentThemeSelectorRule. This rule gets the current user agent from the context and tries to match it with a regular expression that the administrator can configure in the admin UI. You can for example configure a rule with a regular expression that matches IE6 and serve a different subtheme where the stylesheet has been tweaked for such an antique browser. Another possible configuration is to detect mobile devices from their agent string and serve the mobile theme. All those operations can be done with this module entirely from the admin UI, without writing a line of code. The module also offers the administrator the opportunity to inject a link into the front-end in a specific zone and with a specific position that enables the user to switch to the default theme if he wishes to. This is especially useful for sites that use a mobile theme but still want to allow users to use the full desktop site. 
While the module is nice and flexible, it may be overkill. On my own personal blog, I have only two active themes: the desktop theme and the mobile theme. I'm fine with going into code to change the criteria on which to switch the theme, so I'm not using my own Theme Picker module. Instead, I made the mobile theme a theme with code (in other words there is a csproj file in the theme). The project includes a single C# file, my MobileThemeSelector, for which the code is the following:
public class MobileThemeSelector : IThemeSelector {
    private static readonly Regex _Msie678 = new Regex(
        @"^Mozilla\/4\.0 \(compatible; MSIE [678]" +
        @"\.0; Windows NT \d\.\d(.*)\)$",
        RegexOptions.IgnoreCase);
    private ThemeSelectorResult _requestCache;
    private bool _requestCached;

    public ThemeSelectorResult GetTheme(RequestContext context) {
        if (_requestCached) return _requestCache;
        _requestCached = true;
        var userAgent = context.HttpContext.Request.UserAgent;
        if (userAgent.IndexOf("phone", StringComparison.OrdinalIgnoreCase) != -1 ||
            _Msie678.IsMatch(userAgent) ||
            userAgent.IndexOf("windows live writer", StringComparison.OrdinalIgnoreCase) != -1) {
            _requestCache = new ThemeSelectorResult {
                Priority = 10,
                ThemeName = "VuLuMobile"
            };
        }
        return _requestCache;
    }
}
The theme selector selects the current theme for Internet Explorer versions 6 to 8, for phones, and for Windows Live Writer (so that the theme that is used when I write posts is as simple as possible). What's interesting here is that the theme selects itself, based on its own criteria. This should give you a good panorama of what's possible in terms of dynamic theme selection in Orchard. I hope you find some fun uses for it. As usual, I can't wait to see what you're going to come up with…
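For reference, the extensibility point itself, as described near the top of the post, boils down to something like the following sketch. It is reconstructed from that description rather than copied from the Orchard source, so the namespaces, any base interfaces, and any additional members are omitted:
public interface IThemeSelector {
    // Return null to opt out of theme selection for the current request.
    ThemeSelectorResult GetTheme(RequestContext context);
}

public class ThemeSelectorResult {
    public string ThemeName { get; set; }
    public int Priority { get; set; }
}
As the priorities of the built-in selectors suggest (100 for admin, 90 for preview, -5 for site settings, -100 for safe mode), the non-null result with the highest priority is the one that wins for a given request.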

    Read the article

  • Welcome to Jackstown

    - by fatherjack
    I live in a small town, the population count isn't that great, but let me introduce you to some of the population. We'll start with Martin the Doc, he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains") the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, and then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs. Steve sees Brains regularly, that's because he is the librarian, he keeps all sorts of reading material and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled. Over the road from him is Greg, he's the town crier. We don't have a newspaper here so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department"; nowadays it's all about the recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. It depends on the type of waste he's dealing with how it needs to be treated, and he has to know a lot about the different methods and when to use which ones. There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit. Someone who works closely with Bart is Brad, he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make them understandable to someone else. Keeping things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules. Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant; Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though; you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man. 
He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations. There aren't too many people left for you to meet in the town. There's Rob, he's an ex-professional sportsman. He played Hockey, Football, Cricket, you name it. He was in his element as goal keeper / wicket keeper and that shows in his personal life. He just goes about his business and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion. There goes Neil, he's a bit of an odd person, some people say he's gifted with special clairvoyant powers; personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off following his advice though, he's usually spot on and you won't be caught by surprise if you follow his recommendations – wherever they come from. Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly. And then there's me, what do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management since there’s a lot going on. Here’s the line-up for today: Wednesday, October 3, 2012 CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus 10:15 a.m. – 11:15 a.m., Moscone West 3008 Most customers have a broad variety of applications (internal, external, web, client server, host etc) and single sign-on systems that extend to some, but not all systems. This session will focus on how customers are using enterprise single sign-on to extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform. CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter. CON3957 - Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals. CON9625: Taking Control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. 
Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and product management team. Do take this opportunity to network with your peers and connect with the Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us on @oracleidm on twitter and FaceBook. Use #oow and #idm to join in the conversation.

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant. This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project. This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way) who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure. As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show other people's round trip. It's not as compelling, but it's not bad. What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape of the "quickly" and it's the whole reason we're here. 
At Red Gate, I've started with the text book fear and passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change. The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list. The payoff here is twofold. There's the nebulous hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.Their focus was on the challenges to make Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be by-passed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”John Duimovich on Improvements in JavaMcGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently.Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read only artefacts in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained that the application server will use 20% less memory and operate 30% faster.  He described another example of how the JVM allows for the maximum amount of sharing that manages the tenants and file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. 
You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors. Java Challenges in Hardware and Software McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – in developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory, and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee. McGee discussed transactional messaging applications, where developers transactionally persist messages to storage, something traditionally done by writing messages to spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Streaming desktop with avconv - severe sound issues

    - by Tommy Brunn
    I'm trying to do some live streaming in Ubuntu 12.10, but I'm having some problems with audio. More specifically, the quality is complete garbage and it's at least 10 seconds out of sync with the video. I'm using an excellent guide found here to set up my loopback devices so that I can combine the desktop audio with the microphone input. It seems to work, as I'm able to stream both audio and video to Twitch.tv. But, as I said, the audio quality is terrible. The microphone audio is very, very low, but if I increase it, I get a horrible garbled sound that is absolutely unbearable. Nothing like that is present during VoIP calls or when recording sound alone with the sound recorder, so it's not an issue with the microphone itself. The entire audio stream is also delayed about 10-15 seconds compared to the video stream. I put together an imgur album of my settings. Here is some example output from when I'm streaming: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [x11grab @ 0x162fd80] device: :0.0+570,262 -> display: :0.0 x: 570 y: 262 width: 1280 height: 720 [x11grab @ 0x162fd80] shared memory extension found [x11grab @ 0x162fd80] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0+570,262': Duration: N/A, start: 1353181686.735113, bitrate: 884736 kb/s Stream #0.0: Video: rawvideo, bgra, 1280x720, 884736 kb/s, 30 tbr, 1000k tbn, 30 tbc [alsa @ 0x163fce0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x163fce0] Estimating duration from bitrate, this may be inaccurate Input #1, alsa, from 'pulse': Duration: N/A, start: 1353181686.773841, bitrate: N/A Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' [buffer @ 0x1641ec0] w:1280 h:720 pixfmt:bgra [scale @ 0x1642480] w:1280 h:720 fmt:bgra -> w:852 h:480 fmt:yuv420p flags:0x4 [libx264 @ 0x165ae80] VBV maxrate unspecified, assuming CBR [libx264 @ 0x165ae80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x165ae80] profile Main, level 3.1 [libx264 @ 0x165ae80] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=30 rc=cbr mbtree=1 bitrate=712 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=712 vbv_bufsize=512 nal_hrd=none ip_ratio=1.25 aq=1:1.00 Output #0, flv, to 'rtmp://live.justin.tv/app/live_23011330_Pt1plSRM0z5WVNJ0QmCHvTPmpUnfC4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuv420p, 852x480, q=-1--1, 712 kb/s, 1k tbn, 30 tbc Stream #0.1: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 712 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame) Press ctrl-c to stop encoding frame= 17 fps= 0 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 32 fps= 31 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 40 fps= 23 q=29.0 size= 44kB time=0.03 bitrate=13786.2kbits/s dup=frame= 47 fps= 21 
q=31.0 size= 93kB time=2.73 bitrate= 277.7kbits/s dup=0frame= 62 fps= 23 q=29.0 size= 160kB time=3.23 bitrate= 406.2kbits/s dup=0frame= 77 fps= 24 q=23.0 size= 209kB time=3.71 bitrate= 462.5kbits/s dup=0frame= 92 fps= 25 q=20.0 size= 267kB time=4.91 bitrate= 445.2kbits/s dup=0frame= 107 fps= 25 q=20.0 size= 318kB time=5.41 bitrate= 482.1kbits/s dup=0frame= 123 fps= 26 q=18.0 size= 368kB time=5.96 bitrate= 505.7kbits/s dup=0frame= 139 fps= 26 q=16.0 size= 419kB time=6.48 bitrate= 529.7kbits/s dup=0frame= 155 fps= 27 q=15.0 size= 473kB time=7.00 bitrate= 553.6kbits/s dup=0frame= 170 fps= 27 q=14.0 size= 525kB time=7.52 bitrate= 571.7kbits/s dup=0 frame= 180 fps= 25 q=-1.0 Lsize= 652kB time=7.97 bitrate= 670.0kbits/s dup=0 drop=32 //Here I stop the streaming video:531kB audio:112kB global headers:0kB muxing overhead 1.345945% [libx264 @ 0x165ae80] frame I:1 Avg QP:30.43 size: 39748 [libx264 @ 0x165ae80] frame P:45 Avg QP:11.37 size: 11110 [libx264 @ 0x165ae80] frame B:134 Avg QP:15.93 size: 27 [libx264 @ 0x165ae80] consecutive B-frames: 0.6% 0.0% 1.7% 97.8% [libx264 @ 0x165ae80] mb I I16..4: 7.3% 0.0% 92.7% [libx264 @ 0x165ae80] mb P I16..4: 0.1% 0.0% 0.1% P16..4: 49.1% 1.2% 2.1% 0.0% 0.0% skip:47.4% [libx264 @ 0x165ae80] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:99.9% L0:42.5% L1:56.9% BI: 0.6% [libx264 @ 0x165ae80] coded y,uvDC,uvAC intra: 82.3% 87.4% 71.9% inter: 7.1% 8.4% 7.0% [libx264 @ 0x165ae80] i16 v,h,dc,p: 27% 29% 16% 28% [libx264 @ 0x165ae80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 21% 14% 8% 8% 8% 7% 5% 7% [libx264 @ 0x165ae80] i8c dc,h,v,p: 47% 22% 20% 11% [libx264 @ 0x165ae80] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x165ae80] ref P L0: 96.4% 3.6% [libx264 @ 0x165ae80] kb/s:474.19 Received signal 2: terminating. Any ideas on how I can resolve this? The video delay is perfectly acceptable, so I wouldn't think that it's a network issue that's causing the delay in the audio. Any help would be appreciated.
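For context, the "loopback devices" setup mentioned at the top of the question (the linked guide is not reproduced here) is typically done with PulseAudio along these lines; this is only a sketch of the usual approach, not the exact commands from that guide, and the source names are placeholders:
pactl load-module module-null-sink sink_name=stream_mix sink_properties=device.description=StreamMix
pactl load-module module-loopback source=<desktop_monitor_source> sink=stream_mix
pactl load-module module-loopback source=<microphone_source> sink=stream_mix
The recorder then captures from stream_mix.monitor (exposed as the pulse input that avconv opens above), so the per-loopback latency_msec setting and the relative volumes of the two loopback sources are the usual places to look when the mixed audio comes out quiet or delayed.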

    Read the article

  • JQuery + WCF + HTTP 404 Error

    - by hangar18
    Hi all, I've searched high and low and finally decided to post a query here. I'm writing a very basic HTML page from which I'm trying to call a WCF service using jQuery and parse the result as JSON.

Service: IMyDemo.cs

[ServiceContract]
public interface IMyDemo
{
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
    Employee DoWork();

    [OperationContract]
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
    Employee GetEmp(int age, string name);
}

[DataContract]
public class Employee
{
    [DataMember]
    public int EmpId { get; set; }
    [DataMember]
    public string EmpName { get; set; }
    [DataMember]
    public int EmpSalary { get; set; }
}

MyDemo.svc.cs

public Employee DoWork()
{
    // Add your operation implementation here
    Employee obj = new Employee() { EmpSalary = 12, EmpName = "SomeName" };
    return obj;
}

public Employee GetEmp(int age, string name)
{
    Employee emp = new Employee();
    if (age > 0) emp.EmpSalary = 12 + age;
    if (!string.IsNullOrEmpty(name)) emp.EmpName = "Server" + name;
    return emp;
}

Web.config

<system.serviceModel>
  <services>
    <service behaviorConfiguration="EmployeesBehavior" name="MySample.MyDemo">
      <endpoint address="" binding="webHttpBinding" contract="MySample.IMyDemo" behaviorConfiguration="EmployeesBehavior"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="EmployeesBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
    <endpointBehaviors>
      <behavior name="EmployeesBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>

MyDemo.htm

<head>
  <title></title>
  <script type="text/javascript" language="javascript" src="Scripts/jquery-1.4.1.js"></script>
  <script type="text/javascript" language="javascript" src="Scripts/json.js"></script>
  <script type="text/javascript">
    // create a global javascript object for the AJAX defaults.
    debugger;
    var ajaxDefaults = {};
    ajaxDefaults.base = {
      type: "POST",
      timeout: 1000,
      dataFilter: function (data) {
        // see http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/
        data = JSON.parse(data); // use the JSON2 library if you aren't using FF3+, IE8, Safari 3/Google Chrome
        return data.hasOwnProperty("d") ? data.d : data;
      },
      error: function (xhr) {
        // see
        if (!xhr) return;
        if (xhr.responseText) {
          var response = JSON.parse(xhr.responseText);
          // console.log works in FF + Firebug only, replace this code
          if (response) alert(response);
          else alert("Unknown server error");
        }
      }
    };
    ajaxDefaults.json = $.extend(ajaxDefaults.base, {
      // see http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
      contentType: "application/json; charset=utf-8",
      dataType: "json"
    });
    var ops = {
      baseUrl: "/MyService/MySample/MyDemo.svc/",
      doWork: function () {
        // see http://api.jquery.com/jQuery.extend/
        var ajaxOptions = $.extend(ajaxDefaults.json, {
          url: ops.baseUrl + "DoWork",
          data: "{}",
          success: function (msg) {
            console.log("success");
            console.log(typeof msg);
            if (typeof msg !== "undefined") {
              console.log(msg);
            }
          }
        });
        $.ajax(ajaxOptions);
        return false;
      },
      getEmp: function () {
        var ajaxOpts = $.extend(ajaxDefaults.json, {
          url: ops.baseUrl + "GetEmp",
          data: JSON.stringify({ age: 12, name: "NameName" }),
          success: function (msg) {
            $("span#lbl").html("age: " + msg.Age + "name:" + msg.Name);
          }
        });
        $.ajax(ajaxOpts);
        return false;
      }
    }
  </script>
</head>
<body>
  <span id="lbl">abc</span>
  <br /><br />
  <input type="button" value="GetEmployee" id="btnGetEmployee" onclick="javascript:ops.getEmp();" />
</body>

I'm just not able to get this running. When I debug, I see that the error returned from the call is "Server Error in '/jQuerySample' Application. HTTP Error 404 - Not Found." Looks like I'm missing something basic here. My sample is based on this. I've been trying to fix the code for some time now, so I'd like you to take a look and see if you can figure out what it is that I'm doing wrong. I'm able to see that the service is created when I browse to it in IE. I've also tried changing the setting as mentioned here. Appreciate your help. I'm going to blog about this as soon as the issue is resolved, for the benefit of other devs. Thanks -Soni
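
One plausible culprit, offered as a hedged guess rather than a confirmed fix: in the contract above, DoWork is decorated with [WebInvoke] but not with [OperationContract], so WCF never exposes that operation over the webHttpBinding endpoint; a mismatch between the page's baseUrl ("/MyService/MySample/MyDemo.svc/") and where MyDemo.svc is actually hosted under the /jQuerySample application would also surface as HTTP 404. A minimal sketch of the contract with both operations exposed (type and member names are reused from the post; nothing else is taken from the original):

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IMyDemo
{
    // Without [OperationContract], DoWork is not part of the service contract,
    // so a POST to .../MyDemo.svc/DoWork has nothing to dispatch to.
    [OperationContract]
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
    Employee DoWork();

    [OperationContract]
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
    Employee GetEmp(int age, string name);

    // Employee is the [DataContract] class declared in the post.
}

If the contract already matches this, the next thing to check is the request URL itself: with the page running under the jQuerySample application, a root-relative path such as "/jQuerySample/MyDemo.svc/GetEmp" (adjusted to the real folder layout) is more likely to resolve than "/MyService/MySample/MyDemo.svc/GetEmp".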

    Read the article

  • Architect Day: Boston - Agenda Update

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited.

When: September 12, 2012, 8:30am – 5:00pm
Where: Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803
Register now

Agenda (Time | Session Title | Room)

8:30 am - 9:00 am | Registration and Continental Breakfast | Salon E Foyer

9:00 am - 9:15 am | Welcome and Opening Comments | Bob Rhubart | Salon E

9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann | Salon E
Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and a lower cost of ownership over a 3- or 5-year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.

10:00 am - 10:30 am | Securing Public and Private Clouds | Anton Nielsen | Salon E
Long before the term "Cloud Computing" existed, Oracle technologies supported and promoted the concept. Centralized data with remote users has been at the core of these technologies for decades. The public cloud, and extending private clouds to the internet, though, has added security challenges never imagined decades ago. This presentation will examine a real-life security breach and introduce architecture, technologies and policies to secure public and private clouds.

10:30 am - 10:45 am | Break

10:45 am - 11:30 am | Breakout Sessions (pick one)
Cloud Computing - Making IT Simple | Scott Mattoon | Salon E
The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
Innovations in Grid Computing with Oracle Coherence | Rob Misek | Salon C
Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented, and how Oracle is integrating Coherence into its Enterprise Java stack.

11:30 am - 12:15 pm | Breakout Sessions (pick one)
Enterprise Strategy for Cloud Security | Dave Chappelle | Salon E
Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives.
Oracle Enterprise Manager | Avi Huber | Salon C
Much more than a DB management tool, Oracle Enterprise Manager provides management and monitoring coverage for the entire Oracle stack, and beyond. This session will concentrate on the middleware management functionality in OEM, starting with Real User Experience monitoring, through AppServer management, and into deep-dive Java diagnostics. We'll discuss Business Driven Application Management (BDAM) and the benefits of top-down monitoring. Lastly, we'll demonstrate how to trace a specific user experience problem, through a multitier SOA application, to its root cause, deep in the JVM.

12:15 pm - 1:15 pm | Lunch | Salon E Foyer

1:15 pm - 2:00 pm | Panel Discussion - Q&A with session speakers | Salon E

2:00 pm - 2:45 pm | Breakout Sessions (pick one)
Oracle Cloud Reference Architecture | Anbu Krishnaswamy | Salon E
Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: what exactly a Cloud is, why you need it, what changes it will bring to the enterprise, and what the key capabilities of a Cloud infrastructure are - using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS).
21st Century SOA | Peter Belknap | Salon C
Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission-critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack.

2:45 pm - 3:00 pm | Break

3:00 pm - 4:00 pm | Roundtable Discussion | Salon E

4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtables | Salon E

4:15 pm - 5:00 pm | Networking / Reception | Salon E Foyer

Note: Session schedule and content subject to change.

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated, "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the consolidation style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub as well.

Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each Domain-level Hub can be implemented using any of the previously described styles, but normally the Central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
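
To make the Registry Style more concrete, here is a minimal illustrative sketch of the kind of cross-reference such a hub keeps: only source-system identifiers, foreign keys and match values, with a hub-assigned global identifier, while the full attributes stay in the spoke systems and are federated into the "virtual" golden view at query time. The type and member names below are assumptions for illustration only, not part of any Oracle MDM product API:

using System;
using System.Collections.Generic;
using System.Linq;

// One entry per source-system record registered with the hub.
public class RegistryEntry
{
    public string SourceSystem { get; set; }    // e.g. "CRM" or "ERP" (illustrative)
    public string SourceRecordId { get; set; }  // foreign key back to the spoke system
    public string MatchValue { get; set; }      // cleansed key value used for matching
    public Guid GlobalId { get; set; }          // assigned by the hub after matching
}

public static class RegistryHubSketch
{
    // The "virtual" golden view: records that matched share a GlobalId, so grouping
    // by it yields one master entity per group; the detailed attributes are fetched
    // from the owning spoke systems (data federation) rather than stored centrally.
    public static ILookup<Guid, RegistryEntry> VirtualGoldenView(IEnumerable<RegistryEntry> entries)
    {
        return entries.ToLookup(e => e.GlobalId);
    }
}

A Consolidation or Transaction Style hub would instead persist the merged attributes centrally; the registry keeps only enough to link, match and point back to the systems of record.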

    Read the article

  • Say What? Podcasting As Part of Your Content Marketing

    - by Mike Stiles
    What do you usually do in your car on the way to work? Sing along to the radio? Stream Pandora or iHeartRadio? Talk on the phone? Sit in total silence? Whatever it is you do, you could be using that time to make yourself an expert in any range of topics…using podcasts. We invite you to follow or subscribe to the daily Oracle Social Spotlight podcast, a quick roundup of the day's top stories around social marketing and the social networks.

After podcasts arrived in 2004, growth was steady but slow. The concept was strong: anyone with a passion for any subject could make a show for anyone who cared to listen. Enter the smartphone, iTunes, new podcasting platforms, and social, and podcasting became easier than ever and made more sense for both podcasters and listeners. Stats show 1 in 5 smartphone owners are podcast consumers and 29% of Americans have listened to a podcast. The potential audience is also larger than ever. "Baked in" podcast apps on over 200 million devices expose users to volumes of audio content with just a tap. 97 million Americans are driving to work every day by themselves. And 38% of Americans listen to audio on a digital device each week, a number that's projected to double by 2015.

Does that mean your brand should be podcasting? That's part of a larger discussion about your overall content strategy, provided you have one. But if you do and podcasting is a component of it, here are some things to keep in mind:

- Don't podcast just to do it. Podcast because you thought of a show customers and prospects will like that they can't get anywhere else.
- Sound quality matters. Good microphones are not expensive. Bad sound is annoying, makes your brand feel cheap, and will turn today's sophisticated ears off.
- The host matters. Many think they belong on the radio. Few actually do. Your brand's host should be comfortable & likeable. A top advantage of a podcast is people can bond with a real person. It's a trust opportunity, so don't take it lightly.
- The content matters. "All killer, no filler" means don't allow babbling just to fill enough time for an episode. Value the listeners' time, because that time is hard to get.
- Put time, effort and creativity into it. Sure you're a business, but you're competing with content from professional media and showbiz producers. If you can include music, sound effects, and things that amuse the ears, do it.
- If you start, be consistent. The #1 flaw in podcasting is when listeners can't count on another episode or don't know when it's coming. Don't skip doing shows just because you can. Get committed.
- Get your cover art right. Podcasting is about audio, but people shop for podcasts by glancing through graphics. Yours has to be professional, cool, and informative to get listeners interested.
- Cross-promote your podcast on all your channels. The competition for listeners is fierce, so if you have existing audiences you can leverage to launch your show, use them.
- Optimize it for mobile. Assume that's where most listening will take place. If you're using one of the podcast platform apps, you should be in good shape.

Frankly, the percentage of brands that are podcasting is quite low, and that's okay. Once you move beyond blogging and start connecting with real voices, poor execution can do damage. But more marketers (32%) want to learn how to use podcasting, and more (23%) were increasing their podcasting throughout this year. Bottom line, you want to share your brand's message and stories wherever your audience might be and in whatever way they prefer to take in content. Many prefer to do that while driving or working out, using the eyes- and hands-free medium of audio. @mikestiles

Photo: stock.xchng

    Read the article

< Previous Page | 226 227 228 229 230 231 232 233 234 235 236 237  | Next Page >