Search Results

Search found 15426 results on 618 pages for 'concordus applications'.


  • Hyperion EPM 11.1.2.3 Webcast Tutorials

    - by Mike.Hallett(at)Oracle-BI&EPM
    These LIVE presentation Webcast Tutorials for Partners will be delivered in August 2013:

    - Oracle Hyperion Planning on Exalytics In-Memory Machine - August 6, 2013
    - Oracle Hyperion Tax Provision - August 8, 2013
    - Oracle Planning and Budgeting Cloud Service - August 13, 2013

    Go here for more details and to register for these.

    There are also new updated Webcast Tutorials for Oracle Partners in our EPM 11.1.2.3 Update Series:

    - Oracle Hyperion Planning 11.1.2.3 (PS3)
    - Oracle Hyperion Calculation Manager 11.1.2.2 Refresher and 11.1.2.3 (PS3)
    - NEW Oracle Data Relationship Management 11.1.2.3 (PS3)
    - NEW Oracle Hyperion Financial Data Quality Management 11.1.2.3 (PS3)
    - NEW Oracle Hyperion Financial Close Suite 11.1.2.3 (PS3)
    - NEW Oracle Hyperion Profitability & Cost Management 11.1.2.3 (PS3)
    - Introducing Oracle Data Relationship Governance (DRG)

    Also note new content for Oracle BI Applications 11g with ODI:

    - NEW Overview and Architecture of Oracle BI Applications 11.1.1.7.1 for ODI
    - NEW Configuring Oracle BI Applications 11.1.1.7.1 for ODI

    These are all part of the compilation of Oracle BI/EPM online tutorials and webinars for Partners, where many more topics are covered.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posting will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but the basic architecture is simple: two systems, SystemA and SystemB, each replicating its changes to the other. The two most common resolution routines are host based resolution and timestamp based resolution.

    Host based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host based resolution will work just fine.

    Timestamp based resolution, on the other hand, is a little trickier. In this case, you decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice-versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications.

    If your database has these features - whether it's primary keys for host based resolution, or primary keys and timestamp columns for timestamp based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict: do I need to perform any notification or track down the user that had their data overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active replication we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I'll cover these in my next blog.
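    As a rough illustration of the timestamp approach, a Replicat MAP entry using GoldenGate's conflict detection and resolution (CDR) options might look something like the sketch below. The schema, table, and column names are hypothetical, and the exact parameters should be checked against the GoldenGate reference for your release.

        -- Extract side: capture before images of the columns used for conflict detection
        TABLE app.orders, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);

        -- Replicat side: resolve UPDATE conflicts by keeping the row with the newest timestamp
        MAP app.orders, TARGET app.orders,
          COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
          RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_updated)));

    USEMAX keeps whichever row has the larger value in the resolution column, which is exactly why a reliably maintained timestamp column on every replicated table matters so much here.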

    Read the article

  • How to deploy global managed beans

    - by frank.nimphius
    "Global managed" beans is the term I use in this post to describe beans that are used across applications. Global managed beans contain helper - or utility - methods, like those in JSFUtils and ADFUtils. The difference between global managed beans and static helper classes like JSFUtils and ADFUtils is that they are EL accessible, providing reusable functionality that is ready to use on UI components and - if accessed from Java - in other managed beans.

    For example, the ADF Faces page template (af:pageTemplate) allows you to define attributes for the consuming page to pass object references or strings into it. It does not have method attributes that allow command components contained in a template to invoke listeners in managed beans and the ADF binding layer, or to execute actions. To create templates that provide global button or menu functionality, like logon, logout, print etc., an option for developers is to deploy managed beans with the ADF Faces page templates. To deploy a managed bean with a page template, create an ADF library from the project containing the template definition and import this ADF library into the target project using the Resource palette. When importing an ADF library, all its content is added to the project, including page template definitions, managed bean sources and configurations.

    More about page templates: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_pageTemplate.html

    Another use-case for globally configured managed beans is for creating helper methods to be used in many applications. Instead of creating a base managed bean class that then is extended by all managed beans used in applications, you can deploy a managed bean in a JAR file and add the faces-config.xml file with the managed bean configuration to the JAR's META-INF folder as shown below. Using a globally configured managed bean allows you to use Expression Language in the UI to access common functionality but also use Java in application specific managed beans. Storing the faces-config.xml file in the JAR file META-INF directory automatically makes it available when the JAR file is found in the class path of an application.
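    As a rough sketch of that packaging, the faces-config.xml placed in the JAR's META-INF folder could register the shared bean roughly like this (the bean name, class, and scope are made-up placeholders, not values from the original post):

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- META-INF/faces-config.xml inside the shared JAR -->
        <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                          http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd"
                      version="1.2">
          <managed-bean>
            <!-- EL-accessible as #{globalUtilBean} in any consuming application -->
            <managed-bean-name>globalUtilBean</managed-bean-name>
            <managed-bean-class>com.example.view.GlobalUtilBean</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
          </managed-bean>
        </faces-config>

    Any application that has the JAR on its class path then sees #{globalUtilBean} without registering the bean in its own faces-config.xml.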

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there have been several announcements during the keynote. Moreover, other news has not been highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (more details on the SQL Server Blog):

    - Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance and how it differs from using SSD technology.
    - Updates on Columnstore: in the next major release of SQL Server the columnstore indexes will be updatable and it will be possible to create a clustered Columnstore index. This is really great news for near real-time reporting needs!
    - Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology, will debut in 2013. By using Polybase a single T-SQL query will run across relational data and Hadoop data - a single query language for both. Sounds really interesting for using BigData in a more integrated way with existing relational databases. And, of course, for loading a data warehouse using BigData, which is the ultimate goal that we all BI pros have, right?
    - SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    - Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes has been shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability by now, but this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even if it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:

    - iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demos of native BI applications on mobile platforms!
    - Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which has been released today. This is another good reason to install SP1 on SQL Server 2012.

    If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up and it's hard to put a real example with meaningful data together (concisely).

    In Domain Driven Design (DDD) when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application have their own Ubiquitous Language, their own Model, and a way to talk to other Bounded Contexts to obtain the information they need. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications.

    Say there is a long process dealing with an Entity, for example a Book in a core domain. Now looking at the Bounded Contexts there can be a number of phases in the book's life-cycle: say outline, creation, correction, publish, sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases), for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles.

    Now suppose you have 10+ supporting domains e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples). For example a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, Sale phase. Then for the Store core domain it probably has a number of life-cycle phases.

        public class BookId : Entity
        {
            public long Id { get; set; }
        }

    In the creation phase (Bounded Context) the book could be a simple class.

        public class Book : BookId
        {
            public string Title { get; set; }
            public List<string> Chapters { get; set; }
            //...
        }

    Whereas in the publish phase (Bounded Context) it would have all the text, release date etc.

        public class Book : BookId
        {
            public DateTime ReleaseDate { get; set; }
            //...
        }

    The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model.

    A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application?
       Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each bounded context.

    B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)?
       Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VO's/Repositories).

    C. Does it mean there can easily be 10's of "segregated" Domain Models and multiple projects can use it (the Entities/Value Objects)?
       Edit: Answer to C. There is a complete "Domain" for each Bounded Context and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events).

    The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications.
The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.
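    To make the A/B/C answers above concrete, a minimal sketch (the namespaces, the BookPublished event, and the Entity base class here are invented for illustration, not taken from any particular answer) might keep each Bounded Context in its own project/namespace with its own Book model, sharing nothing but the identity and domain events:

        using System;
        using System.Collections.Generic;

        public abstract class Entity { }

        // Shared identity - the only type both contexts reference directly.
        public class BookId : Entity
        {
            public long Id { get; set; }
        }

        namespace Publishing.Creation              // Bounded Context: creation phase
        {
            public class Book
            {
                public BookId Id { get; set; }
                public string Title { get; set; }
                public List<string> Chapters { get; set; }
            }
        }

        namespace Publishing.Publication           // Bounded Context: publish phase
        {
            public class Book
            {
                public BookId Id { get; set; }
                public DateTime ReleaseDate { get; set; }
            }

            // Raised by the publish context; the store context subscribes to this
            // event instead of referencing Publication.Book directly.
            public class BookPublished
            {
                public BookId Id { get; set; }
                public DateTime ReleaseDate { get; set; }
            }
        }

    Each namespace would typically live in its own project, and only the events (plus the shared BookId) cross the boundary.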

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:

    - One class for GUI and internal constants (Tab Page titles, Group Box titles, calculation factors, enumerations)
    - One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
    - One class for application messages (logging, message boxes etc)

    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file.

    One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc. at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc.

    However, I see certain disadvantages:

    - Every single class is tightly coupled to the constants classes.
    - Adding/Removing/Renaming/Moving a constant requires recompilation of at least 90% of the application (Note: Changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source, but it might as well be.
    - You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
    - Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).

    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer.

    PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question.

    Update on this project: I have news on the compile time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table, column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour!
    We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reusability etc. - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
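    For readers unfamiliar with the declaration/definition split described at the top of the question, a minimal sketch (file and symbol names invented for illustration) looks like this; because the values live only in the .cpp file, changing a value recompiles just that one translation unit plus a relink:

        // GuiConstants.h - declarations only, safe to include everywhere
        #pragma once

        namespace GuiConstants
        {
            extern const wchar_t* const kSettingsTabTitle;  // Tab Page title
            extern const double kVatFactor;                 // calculation factor
        }

        // GuiConstants.cpp - the only file that recompiles when a value changes
        #include "GuiConstants.h"

        namespace GuiConstants
        {
            const wchar_t* const kSettingsTabTitle = L"Settings";
            const double kVatFactor = 1.19;
        }

    Adding, removing or renaming a symbol still touches the header and therefore everything that includes it, which is exactly the recompilation cost described above.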

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology.

    Oracle Customer: Openmatics s.r.o.
    Location: Pilsen, Czech Republic
    Industry: Automotive
    Employees: 70

    Its goal was to develop and operate a flexible, open telematics platform for automotive applications, which is independent from vehicle and component suppliers - recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third-parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop.

    ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules.

    A word from Openmatics s.r.o.:

    "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." - Gero Strobel, Head of Development, Openmatics s.r.o.
    Challenges

    - Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs
    - Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer
    - Establish an open platform where third-party providers - such as original equipment manufacturers (OEM), insurers, fleet operators, and individual developers - can deploy their own vehicle telematics services
    - Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of telematics services available on the platform
    - Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet

    Solutions

    - Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move
    - Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market
    - Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java
    - Enabled fleet monitoring by recording vehicle data - such as fuel consumption information - through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser
    - Stored vehicle telematics data - sent as encrypted information - in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications
    - Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs
    - Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs

    Oracle Products & Services

    - Oracle Application Development Framework
    - Oracle WebCenter Portal
    - Oracle SOA Suite
    - Oracle Enterprise Pack for Eclipse
    - Oracle Database
    - Oracle Consulting

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole.

    As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration.

    The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some due to compliance and security issues, the need for direct control over data, and so on.

    In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD) and Azure storage, the unstructured 'BLOB and Entity-Attribute-Value' NoSQL storage alternative (which equates more closely with folders and files than a database), Windows Azure offers a wide range of storage options including use of services such as OData: developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders; once an order is captured, the financial side can be processed in our own data center.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you.

    Cheers,
    Tony.

    Read the article

  • Paper-free Customer Engagement

    - by Michael Snow
    Appropriate repost from our friends at the AIIM blog, Digital Landfill - John Mancini - supporting our mission of enabling customer engagement through better technology choices.

    ----------

    My wife didn't even give me a card for #wpfd - and they say husbands are bad at remembering anniversaries. Well, today is the third World Paper Free Day. I just got off the Tweet Jam, and there was a host of ideas for getting rid of - or at least reducing - paper.

    When we first started talking about "paper-free" most of the reasons raised to pursue this direction were "green" reasons. I'm glad to see that the thinking has moved on to questions about how getting rid of paper and digitizing processes helps improve customer engagement. And the bottom line. And process responsiveness. Not that the "green" reasons have gone away, but it's nice to see a maturation in the BUSINESS reasons to get rid of paper.

    Our World Paper Free Handbook (do not, do not, do not print it!) looks at how less paper in the workplace delivers significant benefits. Key findings show eliminating paper from processes can improve the responsiveness of customer service by 300 percent. Removing paper from business processes and moving content to PCs and tablets has the added advantage of helping companies adopt mobile-enabled processes and eliminate elapsed time, lost forms, poor data and re-keying. To effectively mobile-enable processes and reduce reliance on paper, data should be captured as close to the point of origination as possible, which makes information easily available to whomever needs it, wherever they are, in the shortest time possible.

    This handbook summarizes the value of automating manual, paper-based processes. It then goes a step beyond to provide actionable steps that will set you on the path to productivity, profitability, and, yes, less paper. Get your copy today and send the link around to your peers and colleagues. Here's the link; please share it! http://www.aiim.org/Research-and-Publications/Research/AIIM-White-Papers/WPFD-Revolution-Handbook

    And don't miss out on the real-world discussions about increasing engagement with WebCenter in new webinars being offered over the next couple of weeks:

    - October 30, 2012: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter
    - November 1, 2012: WebCenter Content for Applications: Streamline Processes with Oracle WebCenter Content Management for Human Resources Applications
    - Available On-Demand: Using Oracle WebCenter to Content-Enable Your Business Applications

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down, and if there are issues, we try to help look into them and of course send that stuff back upstream.

    Many Linux users out there are interested in trying out the latest features, but there are some potential barriers to doing this.

    (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and kernel. Projects like Fedora are very useful, and for someone who just wants to see how the entire distribution evolves with all the changes, this is a great way to be current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved, or they might just not work because the changes are too drastic. The introduction of systemd is a good example.

    (2) When you look at many of our customers, who are interested in our database products or applications, the starting point of having a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do.

    This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:

        [ol6_playground_latest]
        name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    Now, all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at time of writing this is 3.6.7) without having to recompile a kernel and without having to jump through hoops.

    There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to try and help make it easy for people that are interested, from a user perspective, in where the Linux kernel is going, and make it easy to install and play around with new features - without having to learn how to compile a kernel and without necessarily having to install a complete new distribution with all the changes top to bottom.

    So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels in a very easy way on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics.
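    Putting those steps together, a minimal session might look like the sketch below (paths follow the post; the kernel version you end up with will of course differ):

        # download the public yum configuration for Oracle Linux 6
        cd /etc/yum.repos.d
        wget http://public-yum.oracle.com/public-yum-ol6.repo

        # make sure enabled=1 is set under [ol6_playground_latest],
        # then pull in the latest mainline stable kernel
        yum update

        # after rebooting into the new kernel, confirm what is running
        uname -r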
    As new features go into the mainline kernel that could potentially be interesting or useful for various products, we will try to point them out on our blogs and give an example of how something can be used, so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development beyond reading lkml, even among the more non-kernel-developer types.

    Read the article

  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...

    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well published and RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor

    Scenarios are declarative offline-authored documents using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can:

    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications

    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models

    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:

    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and light-weight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework

    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:

    - Function Provider: Function Providers are simple Java annotated classes with static methods that are meant to be served as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL Functions, Function Providers are based on the same concept.
    - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much more strict object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with: Standardized Content Servers (CMIS), Federated Profile Properties through the Properties Service, and WebCenter Activity Graph.

    Useful References

    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:

    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SAAS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications.

    When we were looking at the operational monitoring side of the solution, it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected.

    A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service allows us to make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that when you go through Windows Azure Service Bus, most standard monitoring solutions don't give you an easy way to make this kind of check. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution

    BizTalk 360 currently has an easy way to hook up the endpoint manager to a URL which it will then call, and if a successful response is returned it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.NET web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call to the diagnostics service took longer than a certain amount of time, then we could return an error and indicate that the service is taking too long. The following diagram illustrates the monitoring pattern.

    The diagnostics service which is hosted in the line-of-business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we then get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code which depends on the error condition returned, or a 200 if it was successful.
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.
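    As a rough sketch of the kind of page described above (the DiagnosticsClient type and its Ping method are hypothetical stand-ins for the WCF client proxy to the diagnostics service, and the 5-second threshold is arbitrary):

        using System;
        using System.Diagnostics;
        using System.Web;

        // Simple health-check endpoint for BizTalk 360's Web Endpoint Manager to poll.
        public class ServiceBusHealthCheckHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                var stopwatch = Stopwatch.StartNew();
                try
                {
                    // Hypothetical WCF client proxy that relays a ping through
                    // the Azure Service Bus to the LOB diagnostics service.
                    using (var client = new DiagnosticsClient())
                    {
                        client.Ping();
                    }
                    stopwatch.Stop();

                    // Healthy but too slow still counts as a failure for monitoring.
                    context.Response.StatusCode =
                        stopwatch.Elapsed > TimeSpan.FromSeconds(5) ? 500 : 200;
                }
                catch (Exception)
                {
                    context.Response.StatusCode = 500;
                }
            }
        }

    BizTalk 360 then only needs the handler's URL registered as an endpoint check; any non-200 status marks the endpoint as unhealthy.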

    Read the article

  • Bowing to User Experience

    As a consumer of geeky news it is hard to check my Google Reader without running into two or three posts about Apple's iPad and in particular the changes to the developer guidelines which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. Seemingly restricted is the key phrase here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html.

    These restrictions have provoked some outrage, as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone uses the tools they approve. Which isn't a surprising line of reasoning given Apple already restricts the hardware used and the content of the apps. The vogue term for this approach is curated, as in a benevolent museum director selecting only the finest artifacts for display or a wise gardener arranging the plants in a garden just so.

    If this is what a curated experience is like, it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smart phone genre, it is easy to forget that this is not a new approach for Apple. Macbooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well, a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and Macbooks are great devices, but are not fundamentally a different user experience than a PC; you still have windows, file folders, mouse and keyboard, and similar applications.

    So the big question for Apple is: can they hold on to their market advantage, continue innovating in user experience and stay on top? Or are they going to be like Xerox, where the rest of the world says thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7.

    And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; that as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision.
    If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let's continue with TPL Dataflow:

    - Asynchronous functions
    - TPL Dataflow
    - Task based asynchronous Pattern

    Part II: TPL Dataflow

    Definition (quoting the Async CTP doc): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#."

    This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications." Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? etc.

    Dataflow Blocks

    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups:

    - Buffering Blocks
    - Executor Blocks
    - Joining Blocks

    Think of them as electronic circuitry components :)

    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>) but it links the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: it stores only one value and once it's been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each data item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).
    5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, ...>: it generates tuples from all inputs (it aggregates inputs). Inputs can be of any type you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, ...>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
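    In the meantime, here is a small sketch of how a couple of these blocks compose into a pipeline, written against the System.Threading.Tasks.Dataflow API as it later shipped (the numbers and the formatting lambda are arbitrary):

        using System;
        using System.Threading.Tasks.Dataflow;

        class PipelineSketch
        {
            static void Main()
            {
                // Executor block: transforms each int into a string, preserving order.
                var format = new TransformBlock<int, string>(n => "squared: " + (n * n));

                // Executor block: consumes each transformed message.
                var print = new ActionBlock<string>(s => Console.WriteLine(s));

                // Wire the pipeline and propagate completion downstream.
                format.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

                for (var i = 0; i < 5; i++)
                    format.Post(i);

                format.Complete();        // signal no more input
                print.Completion.Wait();  // block until everything is processed
            }
        }

    Post feeds the head of the pipeline, Complete signals the end of input, and Completion lets the caller wait for the tail; the same pattern scales to the batching and joining blocks above.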

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success By Mitchell Palski, Oracle WebCenter Sales Consultant Many organizations use Oracle WebCenter Portal to develop these basic types of portals: Intranet portals used for collaboration, employee self-service, and company communication Extranet portals used by customers and partners for self-service and support Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a Portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a Portal that would vary based on a user’s identity include: Web content such as images, news articles, and on-screen instruction Social tools such as threaded discussions, polls/surveys, and blogs Document management tools to upload, download, and edit files Web applications that present data visualizations and data entry modules These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes un-used or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be re-used throughout an enterprise. Capturing Portal AnalyticsOracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of: Usage tracking metrics Behavior tracking User Profile Correlation The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic Page Traffic Login Metrics Portlet Traffic Portlet Response Time Portlet Instance Traffic Portlet Instance Response Time Search Metrics Document Metrics Wiki Metrics Blog Metrics Discussion Metrics Portal Traffic Portal Response Time By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable.  Your first step as an administrator should be to identify the specific pages and/or components are used the most frequently. Next, determine the user(s) or user-group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high-usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those that are following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics. 
Re-using Successful Portals: When creating a new Portal in Oracle WebCenter, developers have the option to base that portal on a template that includes: pre-seeded data such as pages, tools, user roles, and look-and-feel assets; specific sub-sets of page layouts, tools, and other resources to standardize what is added to a Portal’s pages; and any custom components that your team creates during development cycles. Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter’s ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences instead of starting from scratch. ***For a complete explanation of how to work with Portal Templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's web-inf directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new war file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml: <Service name="secure"> <Connector port="80" connectionTimeout="20000" redirectPort="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/> <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on" protocol="org.apache.coyote.http11.Http11Protocol" compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css" scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS" keystoreFile="..." keystorePass="..." keystoreType="PKCS12" truststoreFile="..." truststorePass="..." truststoreType="JKS" clientAuth="false" ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/> <Engine name="secure" defaultHost="localhost"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> </Host> </Engine> </Service> <Service name="mutual-secure"> ... </Service> The content of the web.xml files I'm playing with is: <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0" metadata-complete="true"> <security-constraint> <web-resource-collection> <web-resource-name>All applications</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <description>Redirect all requests to HTTPS</description> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> </web-app> (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than create a new file.) My webapps directory (currently) contains only the WAR files.

    Read the article

  • Application Logging needs work

    Application Logging: Application logging is the act of logging events that occur within an application, much like a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error so its root cause can be found. Other uses can include the detection of fraud, verification of user activity, or audits of user/data interactions. “Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging.” (OWASP, 2009) OWASP also states that logging should include applicable debugging information such as the event date and time, the responsible process, and a description of the event. “There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods.” (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can be used to identify whether users like or do not like aspects of a system, based on their usage of the application. This enables application designers to extract why users don't like aspects of an application so that changes can be made to increase its usefulness and effectiveness. “Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed.” (Hatton, 2000)
Commonly Logged Application Events (Defined by OWASP): Access of Data, Creation of Data, Modification of Data in any form, Administrative Functions, Configuration Changes, Debugging Information (Application Events), Authorization Attempts, Data Deletion, Network Communication, Authentication Events, and Errors/Exceptions.
Application Error Logging: The functionality associated with application error logging is actually the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, those code examples allow developers to handle the various types of errors that occur within the life cycle of an application’s execution. Application logging can be applied within the catch section of a try/catch statement, allowing errors to be logged when they occur. By placing the logging within the catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused it, and the definition of the error that occurred. This can then be reviewed at a later date in order to recreate the error, based on the data found in the application log. By allowing applications to log errors, developers and IT staff can use the logs to recreate errors that are encountered by end users or other dependent systems.
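As a minimal sketch of the catch-block logging pattern described above (java.util.logging is used purely for illustration; the figures referenced in the article may use a different language or framework, and the OrderService class with its methods is made up for this example):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderService {
        private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

        public void processOrder(String orderId) {
            try {
                // business logic that may fail, e.g. parsing or data access
                int quantity = Integer.parseInt(loadQuantity(orderId));
                LOG.info("Processed order " + orderId + " with quantity " + quantity);
            } catch (NumberFormatException e) {
                // the logger records the event date/time and source automatically;
                // the message and stack trace give the context needed to recreate the error later
                LOG.log(Level.SEVERE, "Failed to process order " + orderId, e);
            }
        }

        private String loadQuantity(String orderId) {
            return "not-a-number"; // placeholder for a real data access call
        }
    }

The same idea applies whatever framework is used: the catch block is where the error details and the surrounding business context are both in scope, so that is where the log entry should be written.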

    Read the article

  • What's new in Servlet 3.1 ? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease of use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets can be written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below:
Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted, which can restrict the scalability of your applications. Non-blocking I/O allows you to build more scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1.
HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces: javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation for the Java API for WebSocket).
Security enhancements - Applying run-as security roles to #init and #destroy methods; protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener (you can listen for any session id changes using these); default security semantics for non-specified HTTP methods in <security-constraint>; and clarified semantics if a parameter is specified in the URI and payload.
Miscellaneous - ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 also clears the state set by calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example, to UTF-8. A protocol-relative URL can be specified in HttpServletResponse.sendRedirect, which allows a URL to be specified without a scheme; that means instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. There is also clarification of HttpServletRequest.getPart and .getParts behavior without a multipart configuration, and clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application.
A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback to [email protected]. 
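To make the non-blocking I/O feature concrete, here is a minimal sketch (the servlet name and URL pattern are illustrative, and the read loop simply discards the bytes) of a ReadListener that lets the container call back when request data is available instead of parking a thread on a blocking read:

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.ReadListener;
    import javax.servlet.ServletInputStream;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/upload", asyncSupported = true)
    public class NonBlockingReadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            final AsyncContext ac = req.startAsync();
            final ServletInputStream in = req.getInputStream();
            in.setReadListener(new ReadListener() {
                @Override
                public void onDataAvailable() throws IOException {
                    byte[] buf = new byte[4096];
                    // isReady() guarantees the next read will not block
                    while (in.isReady() && in.read(buf) != -1) {
                        // process the chunk here
                    }
                }
                @Override
                public void onAllDataRead() throws IOException {
                    ac.getResponse().getWriter().println("done");
                    ac.complete();
                }
                @Override
                public void onError(Throwable t) {
                    ac.complete();
                }
            });
        }
    }

The write side follows the same callback style through WriteListener and ServletOutputStream.isReady().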
Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads, Servlet 3.1 PR Candidate Spec, Servlet 3.1 PR Candidate Javadocs, Servlet Specification Project, JSR Expert Group Discussion Archive, and Java EE 7 Specification Status. Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236), Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189), Non-blocking I/O using Servlet 3.1 (TOTD #188), What's New in EJB 3.2?, JPA 2.1 Schema Generation (TOTD #187), WebSocket Applications using Java (JSR 356), Jersey 2 in GlassFish 4 (TOTD #182), WebSocket and Java EE 7 (TOTD #181), Java API for JSON Processing (JSR 353), and JMS 2.0 Early Draft (JSR 343). And of course, more on their way! Do you want to see any particular one first?

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill. Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing Chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target: Oracle VM, Host, Oracle Database, and Oracle WebLogic Server. Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (eg. $10/month/database), configuration-based costs (eg. $10/month if the OS is Windows) and utilization-based costs (eg. $0.05/GB of Memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged. Figure 1: Chargeback in Self-Service Portal. Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. 
Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. Figure 2: Charge Summary Report. Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time. Figure 3: CPU Trend Report. We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper, Screenwatch, and Cloud Management on OTN.
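As a toy illustration of how the three plan components combine into a monthly charge (the class, method, and rates below simply mirror the examples quoted above and are not an Enterprise Manager API):

    public class ChargePlanSketch {
        static double monthlyCharge(boolean windowsOs, double avgMemoryGb, double hoursInMonth) {
            double fixed = 10.0;                                    // fixed cost per database per month
            double config = windowsOs ? 10.0 : 0.0;                 // configuration-based cost
            double utilization = 0.05 * avgMemoryGb * hoursInMonth; // utilization-based cost ($0.05/GB/hour)
            return fixed + config + utilization;
        }

        public static void main(String[] args) {
            // e.g. a Windows target averaging 8 GB of memory over a 720-hour month
            System.out.printf("Monthly charge: $%.2f%n", monthlyCharge(true, 8.0, 720.0));
        }
    }

In Enterprise Manager itself, rates like these are defined in Charge Plans through the console rather than in code; the sketch only shows how the pieces add up on a bill.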

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. Intro: Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future. Why Build Enterprise Smartphone Applications: So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage. Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A sales person may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer. You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts. Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases. Executives are always on the go. They spend most of their time in meetings and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports. I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks. These are all valid reasons to need smartphone applications. In the next installment I will discuss platforms and features.

    Read the article

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me so I'm trying to come up with another solution. Originally I used Excel. A1 was the cell that contained the data and all other cells in use calculated something on A1, or on other cells, that all could be in the end traced to A1. A1 was like an element of an array; I then incremented it to go through all my data. This was way too slow. So the only other option I found originally was to hand-code the calculations in C# inside a loop. Then I simply recompiled each time I changed my math. This was terribly slow to do and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow. Next I created an application to read Excel and to perfectly imitate it, which is what I now use. Basically I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantage is that my application dependency-sorts everything (or I could use events) so I don't have to (like Excel does). And of course the speed. But now it's not a single application anymore. Instead it's 2 applications, one which only reads my formulas and writes another program. The other one being the result, which only lives for a short while before I do other runs through my data with different formulas / settings. So I can't see multiple results at one time without introducing even more programs like a database or at least having the 2 applications talking to each other. My idea was to have a dll that would be written, compiled, loaded, and unloaded again and again. So a self-updating program, sort of. But apparently that's not possible without another appdomain, which means data has to be marshaled to be moved between the appdomains. Which would slow things down, not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel. I'm really super puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do things like Excel does, which is to evaluate strings again and again. So my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is it's either compiling my user-defined stuff to a program or evaluating it from a string or something stupid again and again. And my only option is to probably switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling dll, loading dll to program, unloading, and repeating). Can someone give me some idea of how computers work? Is anything better possible? Like a running program that can accept user input and compile it and then unload it later? I mean heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the caveman days? 
Sorry, it's just so super frustrating not knowing what one can do, and can't do. If only I could understand and learn this stuff fast enough.
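(As a side note on the underlying question: most runtimes do let a running program compile or evaluate user-supplied formulas in-process and then reuse them against the whole data set, without restarting anything. On .NET, System.Linq.Expressions or the Roslyn scripting APIs serve this purpose. The sketch below shows the same idea in Java, assuming a JavaScript script engine such as Nashorn is available on the runtime; the cell name A1, the sample formula, and the sample data are purely illustrative.)

    import javax.script.Compilable;
    import javax.script.CompiledScript;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.SimpleBindings;

    public class FormulaRunner {
        public static void main(String[] args) throws Exception {
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
            // Compile the user-supplied formula once, inside the running process
            CompiledScript formula = ((Compilable) engine).compile("A1 * 2 + 10");

            double[] data = {1.5, 2.0, 3.25};   // stand-in for the real data set
            for (double value : data) {
                SimpleBindings row = new SimpleBindings();
                row.put("A1", value);
                // Re-evaluate the compiled formula against each row without recompiling anything
                Object result = formula.eval(row);
                System.out.println(value + " -> " + result);
            }
        }
    }
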

    Read the article

  • MSDeploy doesn't deploy to remote server using MSBuild and Visual Studio 2010

    - by user317762
    I'm currently running Visual Studio Team System 2010 RC and I'm trying to get the Build Service set up to build my solution and deploy 3 web applications in it. I've created a custom build configuration called Integration and I've set up the "IIS Web site/application name to use on the destination server" on the Package/Publish tab of the Properties for each of the web applications. In my Build Definition I've set the following arguments: /p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish /p:MSDeployPublishMethod=InProc /p:MsDeployServiceUrl=http://my-server-name:8172/msdeploy.axd /p:EnablePackageProcessLoggingAndAssert=True However, when I run the build I get the following error, for all three web applications: Updating setAcl (RightContent). C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(3481,5): error : Web deployment task failed. (Attempted to perform an unauthorized operation.) I don't think this is my actual problem though. This error is occurring after the following entry in the log: Updating setAcl This is what's causing the error message, but it appears that MSDeploy is trying to deploy to the local IIS on the Build server, not the server I specified with the MsDeployServiceUrl parameter. After looking at the targets file at C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets, I added the EnablePackageProcessLoggingAndAssert, which adds extra logging. The log shows an empty string for the value of MsDeployServiceUrl. I also noticed in the target that MsDeployServiceUrl has a lowercase s, which is somewhat confusing because the task name MSDeployPublish has an uppercase S. I tried it with an uppercase S, then again with lowercase, but neither worked. A couple of other things to note: my build service is running as NETWORK SERVICE, and the server I'm trying to deploy to is on another domain. I also tried adding /p:username=mydomain\myusername /p:password=mypassword to the MSBuild parameter list, but that didn't help. Does anyone know if I'm supplying the correct parameters? Or can you provide me with the correct ones? Thanks

    Read the article

  • Obtaining MFC Feature Pack GUI elements in .NET WinForms

    - by Cody Gray
    The MFC Feature Pack (and VS 2010) adds out-of-the-box support for several "modern" GUI elements (such as MDI with tabbed documents, the ribbon, and a Visual Studio-style interface with docking panels). These are a boon to those of us who have to support legacy MFC-based applications and want to update their look-and-feel, and a sign that Microsoft has not completely abandoned unmanaged C++ development. However, with the push so strongly in favor of .NET, WinForms, and managed code (and for plenty of good reasons), there seems little reason to develop new applications in unmanaged C++/MFC. The question then becomes how one obtains these GUI elements in a WinForms application. Almost all of the add-ons and libraries I have found so far cost money, and introduce additional dependencies. I don't have a budget to buy third-party libraries, and the controls provided by Microsoft in MFC for free seem sufficient for our needs. But I still have reservations about learning MFC to develop a new application. Not only does the investment in time seem significant (by all accounts, MFC seems particularly difficult to learn, even for experienced .NET developers--although I am willing to try), but the question of MFC's lifespan is raised as well. Certainly, given the millions of lines of code and existing apps written in native C++, it will be around for some time, but the handwriting seems to be on the wall, so to speak, that it's no longer Microsoft's touted development platform. It seems like these features should be available by now in WinForms without the need for third-party add-ons, or devoting a lot of time and resources to custom-drawing EVERYTHING. Am I just missing something? I find very little online that compares these new features of MFC to what is available in WinForms, mainly because most everything written on MFC pre-dated its most recent update, before which it looked admittedly "dated," and with its other flaws, was hardly an appealing platform for new development. With the very recent release of VS 2010, we have a while to wait before WinForms gets updated again. What routes are you guys taking for applications whose customers demand a modern-looking UI on a budget?

    Read the article

  • Is Appcelerator Titanium now banned on the iPhone?

    - by altuzar
    This question has been answered quite clearly for MonoTouch here: http://stackoverflow.com/questions/2604033/is-monotouch-now-banned-on-the-iphone But what about Appcelerator Titanium? The new TOS from Apple and their iPhone 4 OS: 3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited). Titanium uses JavaScript, but it is not executed by the iPhone OS WebKit engine directly. In their Developer blog, Jeff Haynie says Titanium is in the clear, but I don't know if they are in denial: "It's our belief that we are fully in compliance with iPhone OS 4.0 ToS as we interpret them." I haven't found any official word from Apple, only opinions. And I'm quite confused. I'm not writing another line of code for my App until... you know.

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay the project/development environment out. My initial idea is to develop the system as a Django "app", which contains sub-applications to separate out the different parts of the system. The reason I intended to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example) so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications, and a urls.py routing the urls to it. I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository - as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the locations' project files. My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy though, and it means that I have one more repository to look after. I'm looking for a better way to lay out this project. Can anyone recommend a solution or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >