Search Results

Search found 1678 results on 68 pages for 'nintex workflow'.


  • Design cache mechanism

    - by Delashmate
    Hi All, I got an assignment to write the design for a cache mechanism, and this is my first time writing a design document. Our program displays images for doctors, and we want to reduce the parsing time of the images, so we want to save the parsed data in advance (in files or inside a database). Currently I have several key design ideas: handle locks - each shared data structure, and each file, should be protected; tests - add a test to verify that the data from the cache equals the data from the files; decouple the connection to the database - don't call the database directly; a cleanup mechanism - delete old files if the cache directory exceeds a configurable threshold; support for a config file; support for a performance tool in the future. I will also add a class diagram, data flow charts, and a workflow. What do you think I should add to the key ideas? Do you know good links to articles about design? Thanks in advance, Dan
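    For what it's worth, here is a minimal Java sketch of how those key ideas could fit together - a read/write lock guarding the shared file store, a small interface decoupling the database, and a cleanup pass that trims the directory back under a configurable threshold. Every name here is illustrative, not an existing API:

        import java.io.File;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public class ParsedImageCache {
            /** Decouples the cache from the database: any source can be injected. */
            public interface ParsedDataSource {
                byte[] parse(String imageId) throws IOException;
            }

            private final Path cacheDir;
            private final long maxBytes; // configurable threshold
            private final ParsedDataSource source;
            private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

            public ParsedImageCache(Path dir, long maxBytes, ParsedDataSource src) throws IOException {
                this.cacheDir = Files.createDirectories(dir);
                this.maxBytes = maxBytes;
                this.source = src;
            }

            /** Returns cached parsed data, falling back to the source on a miss. */
            public byte[] get(String imageId) throws IOException {
                Path file = cacheDir.resolve(imageId + ".bin");
                lock.readLock().lock();
                try {
                    if (Files.exists(file)) {
                        return Files.readAllBytes(file); // fast path: already parsed
                    }
                } finally {
                    lock.readLock().unlock();
                }
                byte[] data = source.parse(imageId); // slow path: parse once
                lock.writeLock().lock();
                try {
                    Files.write(file, data);
                    cleanupIfNeeded(); // keep the directory under the threshold
                } finally {
                    lock.writeLock().unlock();
                }
                return data;
            }

            /** Deletes the oldest files while the directory exceeds the threshold. */
            private void cleanupIfNeeded() {
                File[] files = cacheDir.toFile().listFiles();
                if (files == null) return;
                long total = 0;
                for (File f : files) total += f.length();
                Arrays.sort(files, Comparator.comparingLong(File::lastModified));
                for (File f : files) {
                    if (total <= maxBytes) break;
                    total -= f.length();
                    f.delete(); // evict oldest first
                }
            }
        }

    A structure like this also makes the verification test from the list trivial: parse directly via the source and compare byte-for-byte with what get() returns.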

    Read the article

  • BlackBerry - Multiple Screens or Single Screen with Content Manager?

    - by Max Gontar
    Hi! I've seen projects that use many screens, each with a different layout and functionality. I've seen projects with only one screen (like a wizard workflow) where content changes on user interaction (and using a single screen seems logical for wizards). But I've also seen projects (apps like games, messengers, or phone settings utilities) that use a single screen for different functionalities. I can see these advantages of having a single screen in an app: keeping the same decoration design and menu or toolbar (which may also be achieved with inheritance); keeping a single screen on the UI stack (which may be achieved by pushing/popping screens); and easily sharing data across the application. Can you name other advantages/disadvantages of a single-screen app? When is it better to use this approach? Thank you!
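    For concreteness, here is a rough sketch of the single-screen approach on BlackBerry, where one screen keeps its decoration and menu and simply swaps the fields inside a content manager. This is written from memory against the RIM UI classes, so treat the details as illustrative rather than exact:

        import net.rim.device.api.ui.Field;
        import net.rim.device.api.ui.component.LabelField;
        import net.rim.device.api.ui.container.MainScreen;
        import net.rim.device.api.ui.container.VerticalFieldManager;

        /** One screen that keeps its decoration and menu, swapping only content. */
        public class SingleScreen extends MainScreen {
            private final VerticalFieldManager content = new VerticalFieldManager();

            public SingleScreen() {
                setTitle("My App"); // shared decoration stays in place
                add(content);
            }

            /** Replaces the current "page" without pushing a new screen. */
            public void showContent(Field[] fields) {
                content.deleteAll();
                for (int i = 0; i < fields.length; i++) {
                    content.add(fields[i]);
                }
            }

            public void showWelcome() {
                showContent(new Field[] { new LabelField("Welcome!") });
            }
        }

    Shared state can then live in the screen (or in objects it owns), which is the "easily sharing data" advantage from the question.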

    Read the article

  • Oracle Insurance Unveils Next Generation of Enterprise Document Automation: Oracle Documaker Enterprise Edition

    - by helen.pitts(at)oracle.com
    Oracle today announced the introduction of Oracle Documaker Enterprise Edition, the next generation of the company's market-leading Enterprise Document Automation (EDA) solution for dynamically creating, managing and delivering adaptive enterprise communications across multiple channels. "Insurers and other organizations need enterprise document automation that puts the power to manage the complete document lifecycle in the hands of the business user," said Srini Venkatasanthanam, vice president, Product Strategy, Oracle Insurance, in the press release. "Built with features such as rules-based configurability and interactive processing, Oracle Documaker Enterprise Edition makes possible an adaptive approach to enterprise document automation - documents when, where and in the form they're needed." Key enhancements in Oracle Documaker Enterprise Edition include: Documaker Interactive, the newly renamed and redesigned Web-based iDocumaker module. Documaker Interactive enables users to quickly and interactively create and assemble compliant communications such as policy and claims correspondence directly from their desktops. Users benefit from built-in accelerators and rules-based configurability, pre-configured content, as well as embedded workflow leveraging Oracle BPEL Process Manager. Documaker Factory, which helps enterprises reduce cost and improve operational efficiency through better management of their enterprise publishing operations. Dashboards, analytics, reporting and an administrative console provide insurers with greater insight and centralized control over document production, allowing them to better adapt their resources based on business demands. Other enhancements include: enhanced business user empowerment; additional multi-language localization capabilities; and benefits from the use of powerful Oracle technologies such as the Oracle Application Development Framework for all interfaces and Oracle Universal Content Management (Oracle UCM) for enterprise content management. Drive Competitive Advantage and Growth: Deb Smallwood, founder of SMA (Strategy Meets Action), a leading insurance industry analyst consulting firm and co-author of 3CM in Insurance: Customer Communications and Content Management published last month, noted in the press release that "maximum value can be gained from investments when Enterprise Document Automation (EDA) is viewed holistically and all forms of communication and all types of information are integrated across the entire enterprise." "Insurers that choose an approach that takes all communications, both structured and unstructured data, coming into the company from a wide range of channels, and then create seamless flows of information will have a real competitive advantage," Smallwood said. "This capability will soon become essential for selling, servicing, and ultimately driving growth through new business and retention." Learn More: Click here to watch a short flash demo that demonstrates the real business value offered by Oracle Documaker Enterprise Edition. You can also see how an insurance company can use Oracle Documaker Enterprise Edition to dynamically create, manage and publish adaptive enterprise content throughout the insurance business lifecycle for delivery across multiple channels by visiting Alamere Insurance, a fictional model insurance company created by Oracle to showcase how Oracle applications can be leveraged within the insurance enterprise.
Meet Our Newest Oracle Insurance Blogger: I'm pleased to introduce our newest Oracle Insurance blogger, Susanne Hale. Susanne, who manages product marketing for Oracle Insurance EDA solutions, will be sharing insights about this topic along with examples of how our customers are transforming their enterprise communications using Oracle Documaker Enterprise Edition in future Oracle Insurance blog entries. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: more source code and more documentation on the screen(s) at once; the ability to edit documentation independently of source code (regardless of language?); writing documentation and source code in parallel without merge conflicts; real-time hyperlinked documentation with superior text formatting; quasi-real-time machine translation into different natural languages; every line of code clearly linked to a task, business requirement, etc.; documentation that automatically timestamps when each line of code was written (metrics); dynamic inclusion of architecture diagrams, images to explain relations, etc.; and single-source documentation (e.g., tagging code snippets for user manual inclusion). Note: the documentation window could be collapsed, and the workflow for viewing or comparing source files would not be affected. How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?
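    As one entirely hypothetical illustration of the "two files by convention" option, a sidecar file such as Widget.java.doc could map line anchors to hyperlinked commentary, and an IDE panel would only need a small loader to stay in lock-step with the cursor. A Java sketch (the format and all names are invented here):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        /** Loads a sidecar doc file whose lines look like
         *  "42: parses the legacy header [REQ-17](https://wiki.example/req-17)",
         *  mapping source line numbers to hyperlinked commentary. */
        public class SidecarDocs {
            public static Map<Integer, String> load(Path sourceFile) throws IOException {
                Map<Integer, String> docs = new HashMap<Integer, String>();
                Path sidecar = sourceFile.resolveSibling(sourceFile.getFileName() + ".doc");
                if (!Files.exists(sidecar)) {
                    return docs; // no documentation yet
                }
                List<String> lines = Files.readAllLines(sidecar);
                for (String line : lines) {
                    int colon = line.indexOf(':');
                    if (colon > 0) {
                        int lineNo = Integer.parseInt(line.substring(0, colon).trim());
                        docs.put(Integer.valueOf(lineNo), line.substring(colon + 1).trim());
                    }
                }
                return docs; // the IDE panel would highlight docs.get(cursorLine)
            }
        }

    Note that a naive line-number anchor drifts as soon as code is inserted above it, which is one concrete instance of the shortcomings the question asks about.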

    Read the article

  • A Paperless Office?

    - by [email protected]
    We recently organized a digitization event to showcase some of Oracle's latest products in this area. We always tend to think that Spain lags behind in these technologies and that the market is not ready to eliminate paper. In some cases that is true, but we have also been surprised by clients who are extremely advanced in the electronic management of paper. For clients who do not yet have a corporate solution deployed, our Imaging offering strikes them as complete and integrated, because it lets them digitize paper at the point closest to where it is received and then handle the entire internal process digitally. This process is shown in the following image: Above all in the financial sector, clients already have large infrastructures deployed (some with very sophisticated features they have custom-built over recent years). In these cases, their interest centers on two key capabilities of our products: distributed digitization and intelligent OCR. When a centralized digitization infrastructure is already in place, there are several points of improvement through which to achieve greater savings in paper management. One of them is digitizing at the source, which saves on the logistics of moving and storing paper (fewer courier pouches) and speeds up the start of processes (from the moment of reception). Being able to do this with nothing but a web browser is very novel for clients. Not installing any client-side software seems to be a requirement many clients have been demanding for a long time. In fact, we are running live demos with the client's own scanner (we only need the Windows driver for that scanner). The result is striking, because we show how: we scan with just a web browser; the scanned document, with its metadata, is added to the content repository; and its approval workflow is kicked off. Doing this in seconds generates a lot of interest among clients looking to speed up many of their paper-based processes. Finally, the most novel part of the offering is intelligent OCR. Some clients already have their digitization infrastructures fully operational with all these capabilities, and they are looking for the next step: intelligent recognition of as many metadata fields as possible. The benefit is fast, clearly quantifiable, and very high. The intelligent OCR software is based on fuzzy logic and lets us define validation thresholds fully tailored to our confidence factors. That is, we configure the threshold so that when the software accepts a match we can be completely sure the metadata was recognized correctly. Otherwise, the software triggers a manual validation. What if, for certain document types, 40%, 50%, 60% or even 70% or 80% of documents could be processed 100% automatically? The savings are immense, the reduction in processing time as well, and integration with existing digitization infrastructures is very simple (just divert a few documents of a given type to Oracle Forms Recognition and evaluate the result). I encourage you to take a look at these products so we can make paper reduction a reality.
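    The threshold logic described above is easy to sketch. Here is a minimal Java illustration of routing by OCR confidence - a document is accepted for fully automatic processing only when every extracted field clears the configured threshold; otherwise it goes to manual validation. The names are illustrative assumptions, not the Oracle Forms Recognition API:

        import java.util.List;

        /** Routes OCR results: fully automatic when every field clears the
         *  confidence threshold, manual validation otherwise. Illustrative only. */
        public class OcrRouter {
            public static class Extraction {
                final String field;
                final String value;
                final double confidence; // 0.0 - 1.0 from the OCR engine
                public Extraction(String field, String value, double confidence) {
                    this.field = field;
                    this.value = value;
                    this.confidence = confidence;
                }
            }

            private final double threshold; // tuned so accepted matches are certain

            public OcrRouter(double threshold) {
                this.threshold = threshold;
            }

            /** True when the whole document can be processed with no human step. */
            public boolean isFullyAutomatic(List<Extraction> fields) {
                for (Extraction e : fields) {
                    if (e.confidence < threshold) {
                        return false; // below threshold: route to manual validation
                    }
                }
                return !fields.isEmpty();
            }
        }

    Raising the threshold trades automation rate for certainty, which is where the 40% to 80% straight-through figures mentioned above come from.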

    Read the article

  • Deployment Options for AutoVue 20.0 Users

    - by celine.beck
    AutoVue release 20.0 boasts a brand new architecture. As part of this product rearchitecture, AutoVue can now be deployed either as a desktop deployment, to serve the needs of individual users in their personal productivity, or in a Client/Server deployment for those that require connections to enterprise applications and back-end systems. The most common question that we hear from our customers about this new architecture is the following: "Is AutoVue Desktop Version still part of release 20.0 and, if so, what is the difference between AutoVue Desktop Version and the Desktop deployment of AutoVue release 20.0?" A detailed answer to these questions is provided in a comprehensive article entitled Understanding Deployment Options for AutoVue 19.3 Desktop Version users upgrading to AutoVue 20.0 (note 1058254.1), which was posted on My Oracle Support. Is AutoVue Desktop Version still part of AutoVue 20.0? Yes, AutoVue Desktop Version 20.0 is still available to customers and partners, as a maintenance release of AutoVue 19.3. As such, it will not contain any of the new capabilities featured in AutoVue release 20.0. All format enhancements and new format support have been added to release 20.0 Desktop Version, though. What is the difference between AutoVue Desktop Version 20.0 and the Desktop deployment of AutoVue release 20.0? The AutoVue 20.0 Desktop deployment works like the AutoVue Desktop Version. It is installed as a standalone product on each user's machine and runs a local instance of AutoVue. The AutoVue 20.0 Desktop deployment includes all new features, formats and performance enhancements included in release 20.0 (walkthrough capability, improved compare, ...). What deployment options are available to AutoVue 19.3 Desktop Version customers? AutoVue Desktop Version users can evolve at their own pace to the new AutoVue platform. With release 20.0, customers can opt to: Option 1: stay on AutoVue Desktop Version 20.0; Option 2: migrate to AutoVue and select the desktop deployment method; Option 3: migrate to AutoVue and select the Client/Server deployment method. What is the Client/Server deployment of AutoVue 20.0? The Client/Server deployment has AutoVue installed on a server, to which local client machines connect to access and view documents. The AutoVue 20.0 Client/Server deployment allows users to leverage the new online/offline capabilities in release 20.0 and easily switch between online and offline modes of operation. With the Client/Server deployment, customers also get a complete, open and standards-based set of integration tools that allows them to tie AutoVue to any enterprise application to provide users with a consistent view of data and business objects and expand workflow automation to document-based processes. Related articles: AutoVue Release 20.0 Now Available, New Walkthrough Capability in AutoVue 20.0, Watch the AutoVue 20.0 Release Webcast, April 27 at 12pm EST

    Read the article

  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, a European WCM vendor, today announced the next version of its WCM solution, AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reduce the ongoing costs of managing and distributing content to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net Axinom's WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage delivery of rich media assets on the Web and mobile devices. Axinom solutions are widely used by major European online brands in the IT, telco, retail, media and entertainment industries, such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post. Brand New User Interface Built with Silverlight 4: Using Silverlight 4, Axinom's team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, an integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom's team worked with the Silverlight Rough Cut Editor for video management and the Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. "Axinom's release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4," said Brian Goldfarb, director of the developer platform group at Microsoft Corp. "Microsoft is excited about the opportunity this creates for Web developers to streamline the creating, managing and distributing of online corporate content using AxCMS.net 10 and Silverlight." Rapid Web Development with Visual Studio 2010: AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that helps kick-start projects and teaches best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles in an out-of-the-box manner by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence. Extended Multi-Site Management: AxCMS.net has long supported the management of an unlimited number of Web sites. The new version 10 of AxCMS.net further improves multi-site management and provides features to editors and developers that simplify and accelerate multi-site and multi-language management. Extended publication workflow takes into account additional dependencies of dynamic objects, pages, and documents. "Customer requests have evolved from static HTML pages to dynamic Web application content, with the emergence of rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the .NET Framework 4 and Silverlight 4, we're on the fast track to making the three-screen strategy a reality for our customers," said Damir Tomicic, CEO of Axinom Group. "Our customers enjoy the substantial competitive advantages of using the latest Microsoft technologies. We have a long-standing relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future."

    Read the article

  • Silverlight Cream for March 08, 2010 -- #809

    - by Dave Campbell
    In this Issue: Michael Washington, Tim Greenfield, Bobby Diaz(-2-), Glenn Block(-2-), Nikhil Kothari, Jianqiang Bao(-2-), and Christopher Bennage. Shoutouts: Adam Kinney announced a big update for the Project Rosetta site today. Arpit Gupta has opened a new blog with a great logo: I think therefore I am dangerous :) From SilverlightCream.com: DotNetNuke Silverlight Traffic Module If it's DNN and Silverlight, it has to be my buddy Michael Washington :) ... Michael has combined those stunning gauges you've seen with website traffic... just too cool!... grab the code and display yours too! Cool demonstration of Silverlight VideoBrush This is a no-code post by Tim Greenfield, but I like the UX on this Jigsaw Puzzle page... and you can make your own. Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1 Bobby Diaz has an informative post up on combining earthquake data with BingMaps in Silverlight 3... check it out, then grab the recently posted Live Demo and Source Code. Adding Volcanos and Options - Earthquake Locator, part 2 Bobby Diaz also added volcanic activity to his earthquake BingMaps app, and updated the downloadable code and live demo. Building Hello MEF – Part IV – DeploymentCatalog Glenn Block posted a pair of MEF posts yesterday... made me think I missed one :) .. the first one is about the DeploymentCatalog. Note he is going to be using the CodePlex bits in his posts. Building HelloMEF – Part V – Refactoring to ViewModel Glenn Block's part V is about MEF and MVVM -- no, really! ... he is refactoring MVVM into the app with a nod to Josh Smith and Laurent Bugnion... get your head around this... The Case for ViewModel Nikhil Kothari has a post up about the ViewModel, and how it facilitates designer/developer workflow, jumpstarts development, improves scaling, and makes asynchronous programming simpler. MMORPG programming in Silverlight Tutorial (12) Map Instance (Part I) Jianqiang Bao has part 12 of his MMORPG game up... this one shows how to deal with obstructions on maps. MMORPG programming in Silverlight Tutorial (13) Perfect moving mechanism Jianqiang Bao also has part 13 up, and this second one is about sprite movement around the obstructions. 1 Simple Step for Commanding in Silverlight Christopher Bennage blogged about commanding in Silverlight; he begins with a blog post about commands in Silverlight 4, then goes on to demonstrate the Caliburn way of doing commanding. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
    Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and it reinforces our three key themes at Oracle: Complete: New in this release, Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content - documents, HTML pages, digital assets, scanned images - is stored and accessible directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization, through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g are centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite. When tested on a two-node UCM Server running on Sun Oracle DB Machine Half Rack hardware with an Exadata storage server, Oracle ECM Suite 11g was able to ingest over 178 million documents per day. Open: Oracle ECM Suite 11g is built on a service-oriented architecture. All functions are available through standards-based service calls in Web Services or Java. In this release Oracle unveils Open Web Content Management, a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is our one-click web content management: with one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables Web developers to add Web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications. Open content distribution: Oracle ECM Suite 11g offers flexible deployment options with a built-in smart cache, so organizations can deliver Web sites or Web applications without requiring Oracle ECM Suite as part of the delivery system. Integrated: Oracle ECM Suite 11g also offers a series of next-generation desktop integrations, such as: new MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools; automatic identity tagging of documents on download, to help users understand which versions they are viewing and prevent duplicate content items in the content repository; new "smart productivity folders" to show a user's workflow inbox, saved searches and checked-out content directly from Windows Explorer; drag-and-drop metadata pop-ups; and check-in and check-out for all file formats with any standard WebDAV server. As part of Oracle's Enterprise Application Documents initiative, Oracle Content Management 11g also provides certified application integrations with solution templates. You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

    Read the article

  • COLLABORATE 12: Oracle WebCenter Featured at Largest Oracle User Conference

    - by kellsey.ruppel
    With more than 70 out of about 800 individual sessions, Oracle WebCenter will be a major focus of COLLABORATE 12, this year's Independent Oracle User Group (IOUG) conference, taking place April 22–26 in Las Vegas, Nevada. "COLLABORATE 12 provides a unique chance to share experiences with Oracle customers, product managers, and partners, so you can deepen your knowledge about Oracle WebCenter upgrades, user provisioning, workflow, integration, and much more," says Roel Stalman, vice president of product management for Oracle WebCenter. "In fact, COLLABORATE can form a key part of your training plans for 2012." Full-Day Oracle WebCenter Deep Dive: On Sunday, April 22, from 9 a.m. to 3 p.m., registered conference attendees can attend a special deep dive into Oracle WebCenter. During the program, experts from Oracle product management and development teams will delve into all four pillars of Oracle WebCenter—and explore how all four are integrated together. Attendees can also expect a preview of Oracle WebCenter 12c, detailed product demos, and prize giveaways throughout the day. Going Mobile: Oracle WebCenter and mobile technology will be a major theme at this year's conference, with a number of sessions devoted to maximizing the availability of content while also ensuring security. Sessions include: Are You Making These Mistakes in Your Oracle Site Studio Implementations? (Monday, April 23 at 11 a.m.); Case Study: How Medtronic Brought Oracle WebCenter Content to the iPad (Tuesday, April 24 at 10:45 a.m.); Exposing Oracle WebCenter Data on Mobile and Desktop Devices Through the REST API (Tuesday, April 24 at 10:45 a.m.); Mobile First: Delivering a Compelling Mobile Experience with Oracle WebCenter (Tuesday, April 24 at 4:30 p.m.); Optimizing Your Oracle WebCenter Portal Solution for Mobile Devices (Wednesday, April 25 at 8:15 a.m.); and Build an iPhone App Using Oracle WebCenter Portal REST APIs (Wednesday, April 25 at 9:30 a.m.). Other Don't-Miss Sessions: Conference organizers have indicated that the following sessions in particular should be of wide interest to attendees. Oracle WebCenter: Vision, Strategy, and Overview (Monday, April 23 at 9:45 a.m.): this session explores Oracle's integrated approach to portals and composite applications, Web experience management, enterprise content management, and enterprise social collaboration. It also provides insight into Oracle's strategic direction for Oracle WebCenter. Oracle WebCenter Content, Oracle WebCenter Spaces, Oracle WebCenter Sites: Which Is Right for Me? (Monday, April 23 at 1:15 p.m.): this session helps attendees determine the best Oracle WebCenter solution to meet their needs for an intranet, corporate Website, or partner portal. Learn more and register to attend COLLABORATE 12.

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. the .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: "… As we build the Technical Computing initiative, we will invest in three core areas: 1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly. 2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud. 3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …" Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post are welcome at the original blog.

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts: Visual Studio Ultimate 2010 NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests Visual Studio Premium 2010 NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment) Visual Studio Professional 2010 Debugger Mixed-Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips Visual Studio Test Professional 2010 Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing Visual Studio Team Foundation Server 2010 Work Item Tracking and Project Management New MSF templates for Agile and CMMI (V 5.0) Hierarchical Work Items Custom Work Item Link Types Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale out Improved SharePoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
    A large South Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, and so here goes. Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs. Why: The product is designed to treat human tasks that don't have assignees as eligible for completion. Hence no warning/error messages are recorded in the logs. Use case variant: A variant of this use case, where an assignee doesn't exist in the repository, is treated as a recoverable error. One can find these among the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM Workspace as a process owner/administrator. But back to the use case where tasks get completed automatically... When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression - i.e., at runtime an XPath expression is used to determine whom to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.) How to detect this in EM: For instances that are auto-completed this way, one will notice in the Audit Trail of such instances that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this. How to fix this: The application code needs two fixes. Input to HT: the XSLT/XPath used to set the task 'assignee' and the process itself should be enhanced to handle nulls better. For example: if no data is found, set the assignees to an alternate value, force default assignees, etc. Output from HT: additionally, in the application code, check that the 'outcome' of the HT is not null. If null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task to fire it again. Hope this helps.
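    To make the input-side fix concrete, here is a hypothetical Java guard showing the shape of the check - never hand the task service a null or empty assignee list; fall back to a default instead. This is not the Oracle workflow API (there, the same defaulting would live in the XSLT/XPath that computes the assignees), just an illustration of the rule:

        import java.util.ArrayList;
        import java.util.List;

        /** Hypothetical guard: a task must never be created with no assignees,
         *  since an assignee-less task is treated as eligible for completion. */
        public class AssigneeGuard {
            private static final String DEFAULT_ASSIGNEE = "FallbackApproversGroup";

            public static List<String> ensureAssignees(List<String> computed) {
                List<String> result = new ArrayList<String>();
                if (computed != null) {
                    for (String user : computed) {
                        if (user != null && user.trim().length() > 0) {
                            result.add(user.trim()); // keep only real identifiers
                        }
                    }
                }
                if (result.isEmpty()) {
                    result.add(DEFAULT_ASSIGNEE); // avoids silent auto-completion
                }
                return result;
            }
        }

    The output-side fix is the mirror image: treat a null task outcome as an error and route the task back rather than letting the process continue.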

    Read the article

  • Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile

    - by Michelle Kimihira
    With the upcoming release of Oracle ADF Mobile, I caught up with Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware, after OpenWorld to learn about the cool hands-on lab at OpenWorld. For those of you who missed it, you will want to keep reading... Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware. Oracle ADF Mobile enables rapid and declarative development of native on-device mobile applications. These native applications provide a richer experience for smart-device users running Apple iOS or other mobile platforms. Oracle ADF Mobile protects Oracle customers from technology shifts by adopting a metadata-based development framework that enables developers to develop one app (using Oracle JDeveloper) and deploy it to multiple device platforms (starting with iOS and Android). Oracle ADF Mobile also enables IT organizations to leverage existing expertise in web-based and Java development by adopting a hybrid application architecture that brings together HTML5, Java, and a device-native container: HTML5 allows developers to deliver device-native user experiences while maintaining portability across different platforms; Java allows developers to create modules to support business logic and data services; the native container provides integration with device services such as camera, contacts, etc. All these technologies are packaged into a development framework that supports declarative application development through Oracle JDeveloper. ADF Mobile also provides out-of-the-box integration with key Fusion Middleware components, such as SOA Suite and Business Process Management (BPM). Oracle Fusion Middleware provides the necessary infrastructure to extend business processes and services to the mobile device – enabling the mobile user to participate in human tasks – without an additional "mobile middleware" layer. When coupled with Oracle SOA Suite, this combination can execute business transactions on Oracle E-Business Suite (or any Oracle Application). Demo Use Case: Mobile E-Business Suite (iExpense) Approvals. Using an employee expense approval scenario, we illustrate how to use Oracle Fusion Middleware and Oracle ADF Mobile to build application extensions that integrate intelligently with Oracle Applications (for example, E-Business Suite). Building these extensions using Oracle Fusion Middleware and ADF makes modifications simple, quick to implement, and easy to maintain and upgrade. As described earlier, this approach also extends Fusion Middleware to mobile users without an additional "mobile middleware" layer. The approver is presented with a list of expense reports that have been submitted for approval. These expense reports are retrieved from the backend E-Business Suite and displayed on the mobile device. Approval (or rejection) of the expense report kicks off the workflow in E-Business Suite and takes it to completion. The demo also shows how to integrate with native device services such as email, contacts, and BI dashboards, as well as a prebuilt PDF viewer (this is especially useful in the expense approval scenario, as there is often a need for the approver to access the submitted receipts). Summary: Oracle recommends Fusion Middleware as the application integration platform to deliver critical enterprise data and processes to mobile applications. Pre-built connectors between Fusion Middleware and Applications greatly accelerate the integration process. Instead of building individual integration points between mobile applications and individual enterprise applications, Oracle Fusion Middleware enables IT organizations to leverage a common platform to support both desktop and mobile applications. Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part II. A couple of things I forgot to mention about yesterday's OpenWorld. First, I attended a presentation on SOA Suite and virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure. OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components, extract the configuration, and package it up for rapid deployment into an Oracle Virtual Machine. OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases. That by itself is pretty cool and has been available for a while in OVAB. What is new is that Oracle has done this for an EDG-compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the setup of an EDG deployment. A real help for customers standing up EDG environments, particularly test, dev and QA environments. The other thing I forgot to mention was the most memorable demo I saw at OpenWorld. This was done by my co-author Matt Wright, who was showcasing the products of his company Rubicon Red. They showed a really cool application called OneSpot which puts all the information about a single user's business processes in one spot! Apparently a customer suggested the name. It allows business flows to be defined that map onto events. As events occur, the status of the business flow is updated to reflect the change. The interface is strongly reminiscent of social media sites and provides a graphical view of business flows. So how does this differ from BPEL and BPM process flows? The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process. This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him. This by itself is great, but better still is that OneSpot has a real-time updating view of events that have occurred (BAM-style, no need to refresh the browser). This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions. Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know! The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information. If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask - no, demand - a demo of OneSpot!

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.) Overview of Common .NET Memory Problems When coming to the land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection you’ll finally get that memory back, but that could take a very long time depending on the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once, such that calling dispose multiple times should not be a problem. Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead).
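    To make the event-subscription leak above concrete, here is a minimal sketch in Java (the review's context is .NET, but the mechanics are identical on any garbage-collected runtime, and the class names are invented for illustration): a publisher's listener list keeps every subscriber reachable, so a subscriber that is never unsubscribed is never collected.

        import java.util.ArrayList;
        import java.util.List;

        /** Publisher whose listener list keeps subscribers alive. */
        public class Publisher {
            public interface Listener {
                void onEvent(String payload);
            }

            private final List<Listener> listeners = new ArrayList<Listener>();

            public void subscribe(Listener l) {
                listeners.add(l); // publisher now references the subscriber
            }

            public void unsubscribe(Listener l) {
                listeners.remove(l); // the step that is easy to forget
            }

            public void publish(String payload) {
                for (Listener l : listeners) {
                    l.onEvent(payload);
                }
            }
        }

    Dropping your own references to a subscriber without calling unsubscribe leaves it reachable through the publisher - exactly the kind of reference chain the Instance Retention Graph described later makes visible.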
    Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly, thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues). ANTS Memory Profiler 7.0 When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in, however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a “Getting Started” screen that contains links to “Memory profiling video tutorials”, “Strategies for memory profiling”, and the “ANTS Memory Profiler forum”. I’m normally the kind of person who looks at a screen like that only to find the “Don’t show this again” checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it “ANTS MP” from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the “Take Memory Snapshot” button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it. 
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
    This is amazing in some ways as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing, including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure which kind fits the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program). The Basic Filters are not the only filters there are. 
The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer collection takes, so just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list can be very helpful for your memory profiling, as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu, it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand the memory usage of an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that, so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio, then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage, I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data, so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through, and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial; it’s very easy to integrate and make use of, and it’s a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then, it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code” article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap the call in a try-catch block since, under certain circumstances, such as calling it more than 5 times in 30 seconds, it can raise an exception). The difficulty in discovering and then finding information about the automatic snapshot API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to.
Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t, or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally, or to have the option to download an additional “documentation” package that would add local copies. Another thing that I wish were easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do, and breaking the area up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session, forgetting that I had set some filter a long while earlier in a different filter section, and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g., the Instance List, before remembering to check all the filters for that one setting that was culling a few things from view. Some sort of indicator icon next to the filter section names, appearing when you have at least one filter set in that area, would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!” Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.) The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release. Bottom line: ANTS Memory Profiler is not an inexpensive tool, but my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis.
More important to me, though, is knowing what is happening memory-wise in my programs, and having the confidence that my code doesn’t have any hidden time bombs that will cause it to run out of memory if I leave it running longer than I do when I spin it up quickly for debugging or just to see how a new feature looks and feels. That’s a feeling I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren’t sure whether it’s right for you, or it seems interesting but you aren’t sure it’s worth shelling out the money for, give it a try.
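As a postscript, here is roughly what the automatic snapshot call looks like once you have found that article. This is a sketch written from memory, so treat the namespace and class names below as assumptions and check Red Gate’s “Taking snapshots from your code” article for the real ones; the try-catch is the part the article explicitly recommends.

    // Sketch only: reference the snapshot DLL that ships with ANTS MP. The
    // namespace and class names here are from memory and may not match the
    // shipping API exactly.
    using RedGate.MemoryProfiler.Snapshot; // assumed namespace

    public static class ProfilingHooks
    {
        public static void TakeSnapshotSafely()
        {
            try
            {
                // Asks the attached ANTS Memory Profiler session to record
                // a snapshot at this exact point in the code.
                MemoryProfiler.TakeSnapshot();
            }
            catch (Exception)
            {
                // The call can raise an exception under certain circumstances,
                // such as being invoked more than 5 times in 30 seconds, so
                // swallow it rather than crash the app being profiled.
            }
        }
    }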

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to input from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small, but complete, feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases, and it doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, unscalable queries, inefficient caching, and so on. If you deploy regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • How can I get my progress reviewed as a solo junior developer

    - by Oliver Hyde
    I am currently working for a 2-person company, as the solo primary developer. My boss gets the clients, mocks up some PNG design templates and hands them over to me. This system has been working fine and I’m really enjoying it. The types of projects I work on are for small to medium-sized businesses, and they usually want a CMS system. Developing from scratch, I’ll build a customised backend for the client to add/edit/remove categories, tags, products etc., and then output them to the front end according to the design template handed to me. As time has gone on, the projects have increased in complexity, with shopping cart / ordering features and other common e-commerce type features. Again, this system has been working fine and I’m really enjoying it. My issue is my personal development as a programmer. I spend a lot of my spare time reading programming blogs, checking through stackexchange, reading suggested programming books (currently on ‘The Pragmatic Programmer’, really good so far), doing brain exercises (lumosity.com and khanacademy math problems), doing lots of physical exercise and other personal development type activities. I can’t help but feel, though, that I’m missing out on feedback and critique. My boss is great and never holds back on praise with regard to my work, but unfortunately he is either too busy to check my code or, to be honest, it’s not one of his specialties, so he can’t provide feedback. I want to know what I’m doing wrong and what I’m doing right. Should I be putting that much logic in the controller? Am I modularising my code enough? Etc. So what I have done is develop a little ‘Family Budgeting’ app and try to do it as cleanly and effectively as I currently know how. What I want to know is: is there somewhere I can submit this app and have some seasoned developers provide feedback? It’s not just a subsection of my code, as ‘codereview.stackexchange’ appears to require; it’s my entire workflow that I want critiqued. I know this is a lot to ask, and I expect the main advice given will be to look for a job within a team, which is certainly something I will look into later down the track, but for now I want to persist with my current employment situation; I just don’t want to develop too many bad habits. Let me know if I can provide any further information to help clarify, and if this isn’t the right place for this type of question, I apologise in advance. I didn’t want to use reddit as I felt this community fosters more well-thought-out responses.

    Read the article

  • Oracle and Eloqua Welcome Compendium’s Content Marketing

    - by Mike Stiles
    Yesterday, Oracle announced its acquisition of Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Why? Because every part of the above paragraph speaks to where modern marketing is and where it’s headed. Customers have now been empowered, thanks to the Internet and particularly social, with access to almost limitless amounts of information about companies and products. This includes the especially influential voices of friends and objective acquaintances that have experience with the product or brand. With mobile, this info is available instantly in the palm of their hand. All of this research and influence, mind you, is taking place long before a prospect will ever engage with the brand itself or one of its sales reps. So how does a brand effectively insert itself into these conversations and this flow of the customer journey? Now, more than ever, marketers must deliver relevant and engaging content across multiple channels and throughout the entire customer journey to be useful, helpful, and influential. Compendium has a data-driven content marketing platform that lines up relevant content with customer data and personas so brands can accelerate the conversion of prospects. Now think about combining that with the Oracle Eloqua Marketing Cloud, part of Oracle's comprehensive CX solution. Marketers will be able to automate content delivery across channels by aligning persona-based content with customers' digital body language. Better customer engagement, improved sales lead quality, better return on marketing investment, and higher customer loyalty. Now we’re talking. Does data-driven content marketing have an impact? Compendium customer CVENT is a SaaS company specializing in meetings management tech. They wanted to increase leads and ad performance on their blog and dramatically increase their content. They also wanted to manage the creation, workflow, promotion and distribution of that content. With Compendium, CVENT created over 9,000 content elements, and sales-ready leads grew 325%. So Oracle Eloqua helps you target audiences, know buyers, and automate multi-channel marketing campaigns. Compendium lets you plan, publish, manage and measure content across content types and channels. Now kick it up yet another notch with Oracle’s Analytics, Big Data and Social solutions, and you’re using your marketing dollars to reach the right people in the right place at the right time with the right content. And as if that weren’t enough, your customers will love you for it. @mikestiles

    Read the article

  • top tweets SOA Partner Community – June 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Oracle SOA Learn how Business Rules are used in Oracle SOA Suite. New free self-study course - Oracle Univ. #soa #oraclesoa http://pub.vitrue.com/ll9B OPITZ CONSULTING How do #BPM and #SOA fit together? Watch the 100-seconds video lesson by @Rolfbaer - http://ow.ly/luSjK @soacommunity Andrejus Baranovskis Customized BPM 11g PS6 Workspace Application http://fb.me/2ukaSBXKs Mark Nelson Case Management Samples Released http://wp.me/pgVeO-Lv Mark Nelson Instance Patching Demo for BPM 11.1.1.7 http://wp.me/pgVeO-Lx Simone Geib Antony Reynolds: Target Verification #oraclesoa https://blogs.oracle.com/reynolds/ OPITZ CONSULTING "It's all about Integration - Developing with Oracle #Cloud Services" @t_winterberg files: http://ow.ly/ljtEY #cloudworld @soacommunity Arun Pareek Functional Testing Business Processes In Oracle BPM Suite 11g http://wp.me/pkPu1-pc via @arrunpareek SOA Proactive Want to get started with Human Workflow? Check out the introductory video on OTN, http://pub.vitrue.com/enIL C2B2 Consulting Free tech workshop, London, 6th of June: Diagnosing Performance & Scalability Problems in Oracle SOA Suite http://www.c2b2.co.uk/oracle_fusion_middleware_performance_seminar … @soacommunity Oracle BPM Must-have technologies for delivering effective #CX: #BPM #Social #Mobile > #OracleBPM Whitepaper http://pub.vitrue.com/6pF6 OracleBlogs Introduction to Web Forms - Basic Tutorial http://ow.ly/2wQLTE OTNArchBeat Complete State of SOA podcast now available w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/PZFw Ronald Luttikhuizen VENNSTER Blog | Article published - Fault Handling and Prevention - Part 2 | http://blog.vennster.nl/2013/05/article-published-fault-handling-and.html … Mark Nelson Getting to know Maven http://wp.me/pgVeO-Lk gschmutz Cool! Our 2nd article has just been published: "Fault Handling and Prevention for Services in Oracle Service Bus" http://pub.vitrue.com/jMOy David Shaffer Interesting SOA Development and Delivery post on A-Team Redstack site - http://bit.ly/18oqrAI . Would be great to get others to contribute! Mark Nelson BPM PS6 video showing process lifecycle in more detail (30min) http://wp.me/pgVeO-Ko SOA Proactive Webcast: 'Introduction and Troubleshooting of the SOA 11g Database Adapter', May 9th. Register now at http://pub.vitrue.com/8In7 Mark Nelson SOA Development and Delivery http://wp.me/pgVeO-Kd Oracle BPM Manoj Das, VP Product Management, talks about the new #OracleBPM release #BPM #processmanagement http://pub.vitrue.com/FV3R OTNArchBeat Podcast: The State of SOA w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/OK2M gschmutz New article series on Industrial SOA started on OTN and Service Technology Magazine: http://guidoschmutz.wordpress.com/2013/04/22/first-two-chapters-of-industrial-soa-articles-series-have-been-published-both-on-otn-and-service-technology-magazine/ … #industrialSOA Danilo Schmiedel Article series #industrialSOA published on OTN and Service Technology Magazine http://inside-bpm-and-soa.blogspot.de/2013/04/industrial-soa_22.html … @soacommunity @OC_WIRE SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done, and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it and get a better view), and is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch the YouTube movie below showing how Chrome Developer Tools integration can fit directly into the workflow. Anyone want to help get this plugin further? What's needed: (1) much deeper Dart editing support; right now only very basic syntax coloring is provided, via an ANTLR lexer integrated into the NetBeans syntax coloring infrastructure, so parsing, error checking, code completion, and some small code templates are needed; (2) a new panel in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application is immediately deployable; (3) whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case; (4) some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from them, i.e., from Dart projects outside the IDE. I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need a deeper understanding of data science than Drew Conway's popular Venn diagram model, or Josh Wills' tongue-in-cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician," can provide, two relatively recent studies are worth reading.   'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists (effectively personas, in a design sense) based on analysis of self-identified skills among practitioners.  The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging.  The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill-profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise.  That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.  And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist.  As an interview-based study, it collects richer data, and there's correspondingly greater depth in the synthesis.  The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general.  I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years (part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities), and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago Devlabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem, and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx   The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, which is a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exposed behind a central rules service. This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not, currently, offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm). Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, on mobile devices, etc. We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus and use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus. So, well done, Tellago. I haven't had a chance to play with the API yet, but am looking forward to doing so.
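    Tellago's samples document the real resource URIs, and since I haven't played with the API yet, the endpoint and fact document in the C# sketch below are hypothetical; it is only meant to show the general shape of calling a RESTful rules facade from .NET, i.e., POST the fact XML and read the processed XML back.

        using System;
        using System.Net;
        using System.Text;

        class RulesClient
        {
            static void Main()
            {
                // Hypothetical resource URI; the real paths are defined by
                // the Tellago DevLabs API, not by this sketch.
                const string url =
                    "http://rules.example.com/api/policies/LoanApproval/execute";

                // Hypothetical fact document the rule set executes against.
                const string factXml =
                    "<Loan xmlns='urn:example'>" +
                    "<Amount>250000</Amount>" +
                    "<TermMonths>360</TermMonths>" +
                    "</Loan>";

                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/xml";
                    client.Encoding = Encoding.UTF8;

                    // The rules execute server-side against the posted XML;
                    // the response carries the (possibly enriched) document.
                    string resultXml = client.UploadString(url, "POST", factXml);
                    Console.WriteLine(resultXml);
                }
            }
        }

    The appeal of the RESTful approach is visible even in this toy: there is no BRE assembly reference and no BizTalk dependency on the calling side, just HTTP and XML, which is exactly why it travels so well to dynamic languages and mobile devices.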

    Read the article

  • Announcing the New Virtual Briefing Center

    - by Theresa Hickman
    Do you want to hear about real-world customer success stories? Or listen to Oracle Application leaders discuss the value in the latest releases of Oracle Application products? Do you want one place to download up-to-date content, including white papers, podcasts, webcasts and presentations? Did you miss the Virtual Trade Show at the beginning of 2011? If you answered yes to any of these questions, then the Virtual Briefing Center is the place to get up-to-date Oracle product information for Oracle E-Business Suite, PeopleSoft, JD Edwards, Fusion, Siebel and Hyperion across multiple product areas, from financials, procurement, and supply chain to CRM, Performance Management, and more. Every month we will have "Monthly Spotlights" to showcase new content. The following are the upcoming live webcasts in July 2011: Weds. July 6, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear about Amway's upgrade to Oracle E-Business Suite 12.1 and how they stabilized financial modules, especially the month-end close processes. Thurs. July 14, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear West Corporation share their PeopleSoft 9.1 upgrade story, resulting in improved self-service, more robust reporting capabilities, and new workflow and processes. Thurs. July 21, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Learn how MFlex improved their operations, saved manpower and reduced time to close with their upgrade to JD Edwards EnterpriseOne 9.0. Thurs. July 28, 2011 at 9:00 a.m. PST/12:00 p.m. EST: IEEE discusses their upgrade to Siebel 8.1, using open web service architecture for faster SOA enablement, allowing them to scale their membership capacity by 250%. If you cannot attend any of the above live events, that's OK, because each of the webcasts in this series will be recorded and available on demand. And for you Financials folks who may have missed the webcasts from the Virtual Trade Show earlier this year, you can view them on demand by visiting the Resource Library: Planning Your Successful Upgrade to Oracle E-Business Suite Financials 12.1. In this session, Bryant and Stratton College talk about their upgrade. Planning Your Successful Upgrade to PeopleSoft Financials 9.1. In this session, the University of Central Florida share their upgrade story. Fusion Financials: The New Standard for Finance. In this session, Terrance Wampler, the VP of Financial Application Strategy, discusses the business value of Oracle's next-generation financial applications and how customers can take advantage of Fusion Financials alongside their existing investments. What are you waiting for? Register now!

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up the workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows: core is standalone; graph depends on core (sub-module at ./libs/core); network depends on core (sub-module at ./libs/core); studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network); player depends on graph and network (sub-modules at ./libs/graph and ./libs/network). Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch, you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem, because I'm building core twice and potentially getting two different versions of core. Question: how do I organize sub-modules so that I get the versioned dependency and the standalone build without getting multiple copies of common nested sub-modules? Possible solution: if the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario. Have the build system for graph and network tell them where to find core (e.g. via a compiler include path). Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module. Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work for version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?

    Read the article
