Search Results

Search found 1678 results on 68 pages for 'nintex workflow'.


  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result property. While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following: var task = Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); The compiler is unfortunately unable to infer the correct type, so the WebResponse type must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like: public static class WebRequestExtensions { public static Task<WebResponse> GetResponseAsync(this WebRequest request) { return Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); } } This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do: var webRequest = WebRequest.Create("http://www.reedcopsey.com"); webRequest.GetResponseAsync().ContinueWith(t => { using (var sr = new StreamReader(t.Result.GetResponseStream())) { string str = sr.ReadLine(); this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}", t.Result.ResponseUri, str.Contains("XHTML 1.0")); } }, TaskScheduler.FromCurrentSynchronizationContext());  By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.
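
    The same FromAsync pattern applies to other Begin/End pairs in the Framework. As a quick illustration (my own sketch, not from the article), the classic Stream.BeginRead/EndRead pair can be wrapped the same way; the extension method name here is an assumption chosen for clarity:

```csharp
using System.IO;
using System.Threading.Tasks;

public static class StreamExtensions
{
    // Wraps Stream.BeginRead/EndRead in a Task<int> via TaskFactory<TResult>.FromAsync.
    // Using Task<int>.Factory pins the result type, which helps the compiler with
    // inference in the same way the explicit <WebResponse> did above.
    public static Task<int> ReadAsyncViaFromAsync(this Stream stream, byte[] buffer, int offset, int count)
    {
        return Task<int>.Factory.FromAsync(
            stream.BeginRead,      // Begin method
            stream.EndRead,        // End method
            buffer, offset, count, // arguments forwarded to BeginRead
            null);                 // async state object
    }
}
```

    As with the WebRequest example, the payoff is that the resulting Task<int> can participate in continuations and composition like any other task.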

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the “Last Good Build” and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well and good, but there is a dimension missing here regarding releases. Often you run several release builds before you actually deploy the result of the build to a test or production system. When you do this, wouldn’t it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both to us and to the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedChangesets. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process. Now, you need to assign the value from associatedChangesets to this variable. This is done using the Assign workflow activity. Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument to your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
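
    For readers who want to see where such a snippet lives, here is a minimal skeleton of a custom build activity that consumes the variable (a sketch of my own; the class name and the decision to write to the build log rather than a release repository are illustrative assumptions, not part of the original post):

```csharp
using System.Activities;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;
using Microsoft.TeamFoundation.VersionControl.Client;

[BuildActivity(HostEnvironmentOption.All)]
public sealed class CollectReleaseNotes : CodeActivity
{
    // The assocChangesets variable from the build process template is passed in here.
    [RequiredArgument]
    public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        IList<Changeset> associatedChangesets = AssociatedChangesets.Get(context);

        foreach (Changeset ch in associatedChangesets)
        {
            // For illustration, write each changeset to the build log; a real
            // implementation would push this information to a release repository.
            context.TrackBuildMessage(
                string.Format("Changeset {0} by {1}: {2}", ch.ChangesetId, ch.Committer, ch.Comment),
                BuildMessageImportance.Normal);
        }
    }
}
```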

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed. Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display. There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting. The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display. I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames which are the more mechanical steps of entering the address. We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support Timed Batch. Traditionally batch is associated with set processing in the background in a fixed time frame. For example, billing customers. Over the last few versions there has been functionality in the products that requires a more monitoring-style batch process. The monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions. The batch process then uses the instructions on the object to determine what to do. To support monitor style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was extended to add additional facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include: The batch control now defines the job as Timed or Not Timed. Non-Timed batch jobs are traditional batch jobs. The timer interval (the interval between executions) can be specified. The timer can be made active or inactive. Only active timers are executed. Setting the Timer Active flag to inactive will stop the job at the next time interval. Setting the Timer Active flag to Active will start the execution of the timed job. You can specify the credentials, the language used to view the messages and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration. You can specify the thread limits and commit intervals to be used for the multiple executions. Once a timer job is defined it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time any batch process with the Timer Active set to Active and a Batch Control Type of Timed will begin executing. As Timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself, of course). Now, if the job has no work to do when the timer interval is reached, that instance of the job is stopped and the next instance is started at the timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing till the work is complete, then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling Metric Errors First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: The Agent is throwing the error because it cannot determine the value for the metric. Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: If you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a Metric Error Clearing a metric error is pretty much the same workflow as a metric 'alert': The Agent signals the error after it failed to execute the metric. The error is uploaded to the OMS/repository, where it becomes visible in the Console. The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly. Knowing this, the way to fix the metric error should be obvious: Take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: If you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers There are various ways the Agent can collect metric data: Via a script or a SQL statement, reading a log file, getting a value from an SNMP OID or listening for SNMP traps, or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alerts which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: Those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, an incomplete recovery and improper use of 'emctl clearstate' are the two main causes for this).
The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric: The disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup or failover in case of DataGuard) to catch these kinds of mismatches.

    Read the article

  • Introducing .NET 4.0 with Visual Studio 2010 by Alex Mackey - Book review

    - by Malisa L. Ncube
    Alex (http://simpleisbest.co.uk/) does a very good job in covering the new features of .NET 4.0 and Visual Studio 2010. His focus is on developers that have experience in development using previous versions of Visual Studio, more specifically Visual Studio 2008.     The following are my views on his book. 1. Scope / Coverage Even though the book is labeled as an introduction, it covers a broad spectrum of technologies, features and references that are focused on helping a developer quickly decide what to use in the new .NET Framework. a. Content The content covers as much as possible of the new additions that are included in the new .NET version 4.0. He shows the Visual Studio 2010 new features and quickly shows how to extend the IDE using the Managed Extensibility Framework. Some of my favorites are the parallel debugging enhancements. The author delves into jQuery, which Microsoft has decided to support. Some of the very interesting content is on the out-of-band releases, including ASP.NET MVC, Windows Azure, Silverlight 3 and WCF Data Services. b. What is not included? Windows Phone 7 Series. This was only talked about at MIX10. The details may not have been available at the time of writing. Microsoft Pinpoint (Microsoft code name "Dallas") Windows Embedded development. c. Table of Contents Chapter 1: Introduction Chapter 2: Visual Studio IDE and MEF Chapter 3: Language and Dynamic Changes Chapter 4: CLR and BCL Changes Chapter 5: Parallelization and Threading Enhancements Chapter 6: Windows Workflow Foundation 4 Chapter 7: Windows Communication Foundation Chapter 8: Entity Framework Chapter 9: WCF Data Services Chapter 10: ASP.NET Chapter 11: Microsoft AJAX Library Chapter 12: jQuery Chapter 13: ASP.NET MVC Chapter 14: Silverlight Introduction Chapter 15: WPF 4.0 and Silverlight 3.0 Chapter 16: Windows Azure 2. Depth The book avoids getting into depth on the topics presented, introducing the new concepts on the assumption of the developer’s existing knowledge. Code samples are in the book and exist mostly as snippets that are very easy to follow. There are no downloadable examples. 3. Complexity The book is written in a very simple way and is easy to follow. There are no irrelevant intimidating details. So it’s a book that you can grab and never put down until you’ve finished reading the entire book. 4. References The author includes reference links to blogs, wikis and a lot of online resources including the MSDN documentation, which is a very convenient strategy to avoid flooding the reader with details which may not be of interest to them. Most sites do not use URL routing and that is really not nice. There are notes from interviews between the author and the people behind the new technologies, in which they explain some specific areas that need clarification and share their future views in relation to the features they are working on. 5. Target The author targets experts that want to make a transition from .NET 3.5 to 4.0. Some obvious 3.5 features have been purposely excluded from the text. 6. Overall It is evident that the author has done extensive research into the breadth of what MS is working on in relation to .NET and Visual Studio, and has also been watching the online community. What I would like to see in the next edition are some details on the OData protocol, Expression Blend 4, Embedded development and Windows Phone development. I should say I’m one of the beneficiaries of this book. Excellent work Alex.   Technorati Tags: .NET,Book-Review,Visual Studio

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from. Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation. I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding. In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun... Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
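
    For the developers in the room, the "Rob"/"rob" example boils down to which comparison rules you ask for. A tiny sketch of my own (in C# rather than T-SQL, purely as an illustration):

```csharp
using System;

class CollationDemo
{
    static void Main()
    {
        // A case-sensitive (ordinal) comparison treats these as different strings...
        Console.WriteLine(string.Equals("Rob", "rob", StringComparison.Ordinal));           // False

        // ...while a case-insensitive comparison considers them equal.
        Console.WriteLine(string.Equals("Rob", "rob", StringComparison.OrdinalIgnoreCase)); // True
    }
}
```

    Whether the two values "should" be the same is not a property of the strings at all - it is a property of the rules you compare them under, which is exactly the point of the article.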

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective. First, this release is about Business Driven Application Management. The traditional paradigm of component centric systems management simply cannot satisfy the management needs of modern distributed applications, as they do not provide adequate visibility of whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users so that valuable IT resources can be better focused to help deliver better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience. Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to database, host machines and storage devices, etc... can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used to do deep-dive diagnostics for each application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technology, JVM Diagnostic and Composite Application Monitoring and Modeler, are now fully integrated into Enterprise Manager so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center so that hardware level events can be centrally monitored via Grid Control. Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occur, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, then they would use some sort of support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches.
If further help is needed, administrators can open service requests from right within Oracle Enterprise Manager and track status updates. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a Superior Ownership Experience for their Oracle products. So there you have it. This is a brief 50,000-foot overview of Oracle Enterprise Manager 11g. We know you are hungry for the details. We are going to write about it in the coming days and weeks. For those of you that absolutely can't wait to find out more, you may download our software to try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32- and 64-bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned on the Oracle Enterprise Manager blog for additional updates.

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application When you have a Console Application or a batch file that has errors, the exit code is set to a value other than 0. You would expect that the build would see this and report an error. This is not true, however. First we set up the scenario. Add a Console Application project to the solution you are building. In the Main function, set the ExitCode to 1: class Program { static void Main(string[] args) { Console.WriteLine("This is an error in the script."); Environment.ExitCode = 1; } } Check in the code. You can choose to include this Console Application in the build, or you can decide to add the exe to source control. Now modify the Build Process Template CustomTemplate.xaml: Add an argument ErroneousScript. Scroll down beneath the TryCatch activity called “Try Compile, Test, and Associate Changesets and Work Items”. Add a Sequence activity to the template. In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps). In the FileName property of the InvokeProcess, use the ErroneousScript argument so the Console Application will be called. Modify the build definition and make sure that the ErroneousScript is executing the exe that is setting the ExitCode to 1. You have now set up a build definition that will execute the erroneous Console Application. When you run it, you will see that the build succeeds. This is not what you want! To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our Build Process Template. Add the new variables (scoped to the sequence where you run the Console Application) called ExitCode (type = Int32) and ErrorMessage. Click on the InvokeProcess activity and change the Result property to ExitCode. In the Handle Standard Output section of the InvokeProcess, add a Sequence activity. In the Sequence activity, add an Assign primitive. Set the following properties: To = ErrorMessage Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput And add the default BuildMessage to the sequence that outputs the stdOutput. Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0. In the Then section, add a Throw activity and set the Exception property to New Exception(ErrorMessage). The complete workflow now looks like this. When you now check in the Build Process Template and run the build, you get the following result. And that is exactly what we want.   You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
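
    As a side note, the check the workflow performs is conceptually the same as the following plain C# sketch (an illustration of the idea only, not part of the build template): start the process, capture its exit code, and throw when it is non-zero.

```csharp
using System;
using System.Diagnostics;

class ExitCodeCheck
{
    static void Main(string[] args)
    {
        // args[0] is assumed to be the path to the console application or batch file.
        Process process = Process.Start(new ProcessStartInfo(args[0]) { UseShellExecute = false });
        process.WaitForExit();

        // Mirrors the If/Throw activities in the template: a non-zero exit code fails the run.
        if (process.ExitCode != 0)
        {
            throw new Exception(string.Format("Process failed with exit code {0}.", process.ExitCode));
        }
    }
}
```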

    Read the article

  • SOA Community Newsletter June 2013

    - by JuergenKress
    Dear SOA partner community member, Thanks for showing your interest in rerunning the Fusion Middleware Summer Camps! Having heard your suggestions, we are happy to announce the 3rd edition of our advanced Fusion Middleware training. The camps will take place from August 26th to 30th 2013 in Lisbon, Portugal. Topics will include Adaptive Case Management (ACM) as part of BPM Suite, B2B, Advanced SOA and SOA Governance. Please make sure you plan and book your seat in advance (booking is on a first come, first seated basis!). Thanks for all your efforts to become certified and Specialized. For all the experts who achieved the SOA Suite 11g Essentials or BPM Suite 11g Certified Implementation Specialist certification, you can download a logo for your blog or business card at the Competence Center. For all the companies who achieved a SOA or BPM specialization, you can request a nice plaque for your office. As part of our Industrial SOA article series we published “Canonizing a Language for Architecture” in the Service Technology Magazine and on Oracle Technology Network. If you write books or a blog - make sure you share it with us! Cloud Computing is the hottest topic in IT; especially as an architect you should be aware of the concepts and technology, therefore I highly recommend Thomas Erl’s latest book, “Cloud Computing”. In the BPM space, Adaptive Case Management (ACM) is the hottest topic; with BPM PS6, the backend ACM functionality and an ACM sample application are available. You can even combine this hype with Customer Experience. The BPM section in this newsletter reflects the high importance of the topic and includes a BPM PS6 video showing the process lifecycle, the BPM Resource Kit, Functional Testing, Introduction to Web Forms, Customized Workspace Application and Instance Patching Demo. B2B is also becoming more and more popular in the Oracle SOA Suite. If you could not attend the training organized in May, we offer you an additional B2B training as a part of the Summer Camps, or you can download the B2B training material from our SOA Community Workspace (SOA Community membership required). Thanks to all for sharing the valuable SOA content with our community! Special thanks to ec4u for the new reference of SOA Suite and AIA Foundation Pack at a Swiss insurance company. It is time to submit a SOA and BPM reference request today! In this edition of the newsletter you will see the second part of Guido and Ronald's OSB article series, and Kathiravan Udayakumar has published an exclusive article on SOA Suite best practice. If you want to submit your content for the next edition of the Newsletter, please feel free to submit it to me. The A-Team is an excellent contributor of best practices - make sure you visit the new A-Team page and read their articles such as Getting to know Maven. Also on the SOA side, we have published many new articles from the community: Oracle SOA Suite for the Busy IT Professional by Frank Munz, SOA Suite Knowledge - Polyglot Service Implementation with Groovy by Alexander Suchier, QA82 Analyzer - Automated Quality Assurance for Oracle SOA Suite Projects, Verifying the Target by Anthony Reynolds, and a new book called Oracle SOA Governance 11g Implementation by Luis Augusto Weir. Two new SOA on-demand training courses, the Oracle Business Rules Self-Study Course and the Introduction to Human Workflow online course, are available now! Make use of the summer time and get trained - hope to see you in Lisbon for the Summer Camps!
Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsJune2013 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress,SOA,BPM

    Read the article

  • Windows Azure Recipe: Mobile Computing

    - by Clint Edmonson
    A while back, mashups were all the rage. The idea was to compose solutions that provided aggregation and integration across applications and services to make information more available, useful, and personal. Mashups ushered in the era of Web 2.0 in all its socially connected goodness. They taught us that to be successful, we needed to add web service APIs to our web applications. Web and client based mashups met with great success and have evolved even further with the introduction of the internet connected smartphone. Nothing is more available, useful, or personal than our smartphones. The current generation of cloud connected mobile computing mashups allow our mobilized workforces to receive, process, and react to information from disparate sources faster than ever before. Drivers Integration Reach Time to market Solution Here’s a sketch of a prototypical mobile computing solution using Windows Azure: Ingredients Web Role – with the phone running a dedicated client application, the web role is responsible for serving up backend web services that implement the solution’s core connected functionality. Database – used to store core operational and workflow data for the solution’s web services. Access Control – this service is used to authenticate and manage users’ identity, roles, and groups, possibly in conjunction with 3rd party identity providers such as Windows LiveID, Google, Yahoo!, and Facebook. Worker Role – this role is used to handle the orchestration of long-running, complex, asynchronous operations. While much of the integration and interaction with other services can be handled directly by the mobile client application, it’s possible that the backend may need to integrate with 3rd party services as well. Offloading this work to a worker role better distributes computing resources and keeps the web roles focused on direct client interaction. Queues – these provide reliable, persistent messaging between applications and processes. They are an absolute necessity once asynchronous processing is involved. Queues facilitate the flow of distributed events and allow a solution to send push notifications back to mobile devices at appropriate times. (A minimal queue sketch follows the resource list below.) Training & Resources These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML. SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. Windows Azure Toolkit for Windows Phone The Windows Azure Toolkit for Windows Phone is designed to make it easier for you to build mobile applications that leverage cloud services running in Windows Azure. The toolkit includes Visual Studio project templates for Windows Phone and Windows Azure, class libraries optimized for use on the phone, sample applications, and documentation Windows Azure Toolkit for iOS The Windows Azure Toolkit for iOS is a toolkit for developers to make it easy to access Windows Azure storage services from native iOS applications. The toolkit can be used for both iPhone and iPad applications, developed using Objective-C and XCode. Windows Azure Toolkit for Android The Windows Azure Toolkit for Android is a toolkit for developers to make it easy to work with Windows Azure from native Android applications. The toolkit can be used for native Android applications developed using Eclipse and the Android SDK. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.
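
    As promised above, here is a minimal sketch of the queue ingredient: a backend component dropping a message on a Windows Azure queue for a worker role to pick up later. This is my own illustration, assuming the Windows Azure StorageClient library of that era; the queue name and class names are made up for the example.

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class NotificationEnqueuer
{
    // Drops a message on a queue so a worker role can process it asynchronously,
    // for example to trigger a push notification back to a mobile device.
    public void EnqueueNotification(string storageConnectionString, string payload)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudQueueClient queueClient = account.CreateCloudQueueClient();

        // "notifications" is an illustrative queue name.
        CloudQueue queue = queueClient.GetQueueReference("notifications");
        queue.CreateIfNotExist();

        queue.AddMessage(new CloudQueueMessage(payload));
    }
}
```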

    Read the article

  • regarding the Windows Phone 7 series, XNA and Visual Basic

    - by Chris Williams
    as long as we're talking about VB... I figured I would share this as well. Hi everyone, I'm about to express a sentiment that might ruffle a few feathers, but I think most of you know me well enough to know I love like accept VB for what it is and that what I'm about to say is with good intentions. (The rest of you, who don't know me, please take my word for it.) The world is full of VB developers, I was one of them for a long time. I think it's safe to assume that none of us are ignorant people who require handholding. We're working professionals, making a living by using our skills as developers. I'm also willing to bet that quite a few of us are fluent in C# as well as VB. It may not be your preferred language, but many of you can do it and you prove that nearly every day. Honestly, I don't know ANY developers or consultants that have only known ONE language ever. So it pains me greatly when I see the word "CAN'T" being tossed around like a crutch... as in "we CAN'T develop for the windows phone or we CAN'T develop XNA games." At MIX, Microsoft hath decreed that C# is the language of choice for developing for the Windows Phone 7. I think it's a safe bet that you won't see VB support if it isn't there already. (Just like XNA... which is up to version 4.0 by now.) So what? (Yeah... I said it.) I think everyone here can agree that actual coding is only one part of software design and development. There is nothing stopping ANY of you from beginning the process of designing your killer phone app, writing up specs, requirements, doing UI design, workflow, mockups, storyboards, art, etc.... None of these things are language dependent. IF by the time you've got that stuff out of the way, and there's still no VB support, then start doing some rapid prototyping of your app in C# (I know, I know... heresy!)  You still have to spend time learning how the phone does things, what UI tricks do what, what paradigms make sense, how to use the accelerometer and the tilt and the multitouch functionality. I can guarantee you that time spent doing this is a great investment, no matter WHAT extension your code files have. Eventually, you may have a working prototype. IF by this time, there's STILL no VB support... fret not, you've made significant progress on your app. You've designed it, prototyped it, figured out how to use the phone-specific features... so you might as well finish it and pat yourself on the back for learning something new... and possibly being first to market with your new app. I'll be happy to argue any and all of these points online or off with anyone who cares to do so, but there is one undeniable point that you simply can't argue:  Your potential customers do not care AT ALL what programming language you used to write the app they are about to purchase. They care that it works. If your biggest concern is being first to market, then stop complaining and get busy because you're running out of time and the 3000+ people who were at MIX certainly aren't waiting for you. They've already started working on their apps.

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and has around 2,000 employees. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually.  Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition to send the invoices through to Oracle Financials for approvals and processing.  TXI significantly lowered resource needs for payable processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average. Company Overview Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest. Business Challenges TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually.  Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were: Increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities Solution Deployed TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle Partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.
Business Results Significantly lowered resource needs for payable processing through centralization and improved efficiency  Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” Terry Marshall, Vice President of Information Services, Texas Industries, Inc. Additional Information TXI Customer Snapshot Oracle WebCenter Content Oracle WebCenter Capture Oracle WebCenter Forms Recognition

    Read the article

  • The All New Hotmail Looks Very Impressive [Video Tour]

    - by Gopinath
    With loads of new features being introduced into GMail every now and then, Microsoft can’t sit and relax any more. Microsoft realized this and worked hard to introduce really impressive features in the upcoming version of Windows Live Hotmail that was previewed a couple of days ago. Most of the new features announced in the upcoming version focus on the important needs of email users – de-cluttering the mail box and effectively managing email overload. Here are the highlights of the new features. New Features Sweep away clutter – This is the most impressive of the new features. It allows you to manage email overload. If you’ve subscribed to a newsletter but decided not to allow it into your inbox, you can activate the sweep feature to move all the messages of the newsletter into a folder other than your inbox. This may sound similar to the filters option in GMail, but the workflow is very easy in Hotmail. Quickly find messages – Easy-to-use options are provided to see mails in separate views like mails from contacts, social networking mail, mails from e-mail subscription services, etc. Now it’s easy to prioritize email checking the way you wish to. I prefer to check mails from my contacts first, then social networking messages and then the newsletter subscriptions. Improved spam detection – The spam detection rules are tightened for better spam protection, and Hotmail also learns from user actions to effectively catch spam. No more mail box storage restrictions – With a smart decision by Microsoft, users no longer need to worry about the storage restrictions of their mail box – large Hotmail attachments can be stored in Windows Live SkyDrive. With Hotmail, we’ve combined the simplicity of sending photos through email with the power of Windows Live SkyDrive so that you can send up to 200 photos, each up to 50 MB in size, all in a single email. You can send all your vacation photos at once without worrying about attachment limits. Excellent Integration With Office Web Apps - Viewing and editing of Office documents attached to emails are made very easy by integrating Office Web Apps with Hotmail. When you receive a document/presentation/spreadsheet in Hotmail, you can view it, edit it, save it, or even send the modified document back to the original sender – all without leaving Hotmail. Inline viewing options for Photos, Videos, Social Network Messages – You can view photos embedded in the mail as slideshows (with the help of Silverlight), YouTube & Hulu videos can be played inline, and you can track shipping notifications. Threaded conversations – emails in Hotmail are grouped just like they are in GMail. Others - enhanced account protection, full-session SSL, multiple email accounts, subfolders, contact management. Video Tour Of New Features Here is an impressive video tour of the new Hotmail features. When are these new features coming to Hotmail? The majority of the new features announced today will be rolled out gradually to all users in the coming weeks. But advanced features like Office integration with Hotmail are expected to take a couple of months for general availability. Will You Switch back to Hotmail? Will these features lure GMail/Yahoo users to switch back to Hotmail? Maybe not immediately, but these features may keep existing users from leaving Hotmail. I used Hotmail in the pre-GMail era, and now I use my Hotmail id only to sign in to Microsoft websites that require Hotmail authentication. It’s been years since I composed a new email in Hotmail.
Even though the new features announced by Hotmail are very impressive, I like the way GMail rapidly brings new features at regular intervals. If Hotmail also keeps innovating with new features at regular intervals, then there are good chances for its old users to return home. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Release 17 is here!

    - by Cheryl
    Our training development team has been busy updating courses to keep pace with the new release of CRM On Demand. Release 17 is here! And I heard recently that it's one of our biggest releases ever. A lot of new features and functionality for you to take advantage of - too much for me to cover in this blog post. But, I thought I'd tell you about a few of my favorites - be sure to take a look at the What's New in Release 17 recording to see the full list, though...because I'm only going to touch on a few. Create your own look - okay, I'm starting with the fun stuff. But, there is a new customizable themes feature so that you can change the look of the application: colors, logo, the shape of the tabs. And it's really easy. There's also a whole new library of ready-made themes for you to pick from if you just want to go with one of those. Use this new feature to match the look of your company logo and color scheme. Or blaze new trails. You can create the look for the whole company, or a different look for each CRM On Demand role. This might especially come in handy if you're using the Partner Relationship Management (PRM) capabilities of CRM On Demand - you can create themes for your partner-facing roles to provide branded partner portals. Speaking of PRM - there are enhancements in this release to help companies better manage their partner relationships. A new Deal Registration object, which is separate from the Opportunity record, and better Special Pricing Request and Marketing Development Fund Request processes, give a lot more flexibility in how companies can build and manage their relationships with partners. There are some new options for Forecasts in Release 17, too. You can now have more than one type of forecast generated each forecast period. For example, you might need to see a forecast of the total opportunity revenue for your sales team, as well as one that breaks down revenue by product. The forecast definition now lets you do that. Other options allow you to make submitting forecasts easier, split opportunity revenue across the team and forecast that split appropriately. And - look for the new Forecast subject area in Answers, for building custom forecast reports. Ever wish you could use Workflow Rules to automatically reassign leads if they haven't been followed up on...or to email a manager if the status of a service request isn't changed after a specified period of time? Then check out the new Wait action for workflows. I think you'll be happy. Ok, enough for today. There is a lot to Release 17 that I didn't mention - a lot has been added for our Life Science industry edition, some new data visibility options, a new Data Loader tool, and more. Stay tuned for more blog posts about these and other Release 17 features in the coming weeks. In the meantime, don't forget about all of the resources we have for you to learn more (see my Learning About Release 17 blog post for details).

    Read the article

  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would like to ideally implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise: What's a good task interface to use? How do I pass data computed in one task to another task in the sequence that might need it? In the past, I've worked with task interfaces like: interface Task<T, U> { U execute(T input); } Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it gets passed to all tasks. I'm aware that this suffers from a host of problems. So I wanted to figure out a better way to implement it this time around. My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item. There are, however, several drawbacks: Each item in this TaskContext container should be a complex type that wraps around the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object. This means that, based on how many tasks I define, I would need to also define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves. I briefly looked at workflow engine libraries, but they kind of seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements: Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows. That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free. The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
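
    One way to avoid a wrapper class per item is to key the container by typed key objects instead of by raw Class/Type objects, so two entries can share an underlying type (two different Strings, say) and still stay distinct and type-safe. Below is a rough sketch of that idea; it is written in C# for brevity, but the same shape works in Java, and every name in it is illustrative rather than prescriptive.

```csharp
using System.Collections.Generic;

// A typed key: two keys can carry the same underlying type (e.g. string)
// while remaining distinct entries in the context.
public sealed class TaskKey<T>
{
    public TaskKey(string name) { Name = name; }
    public string Name { get; private set; }
}

public sealed class TaskContext
{
    private readonly Dictionary<object, object> items = new Dictionary<object, object>();

    // The generic parameter ties the key to the value type, so Put/Get stay type-safe
    // without a wrapper class per item.
    public void Put<T>(TaskKey<T> key, T value) { items[key] = value; }

    public T Get<T>(TaskKey<T> key) { return (T)items[key]; }
}

public interface ITask
{
    // Tasks read their inputs from, and write their outputs to, the shared context.
    void Execute(TaskContext context);
}

public static class TaskContextUsage
{
    public static void Example()
    {
        var context = new TaskContext();
        var customerName = new TaskKey<string>("customerName");
        var orderId = new TaskKey<string>("orderId");

        // Two string-typed entries coexist without clashing or needing wrappers.
        context.Put(customerName, "Acme");
        context.Put(orderId, "12345");

        string name = context.Get(customerName);
    }
}
```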

    Read the article

  • SharePoint 2010 Hosting :: Setting Default Column Values on a Folder Programmatically

    - by mbridge
    The reason I write this post today is because my initial searches on the Internet provided me with nothing on the topic. I was hoping to find a reference to the SDK but I didn't have any luck. What I want to do is set a default column value on an existing folder so that new items in that folder automatically inherit that value. It's actually pretty easy to do once you know what the class is called in the API. I did some digging and discovered that the class is called MetadataDefaults. It can be found in Microsoft.Office.DocumentManagement.dll. Note: if you can't find it in the GAC, this DLL is in the 14/CONFIG/BIN folder and not the 14/ISAPI folder. Add a reference to this DLL in your project. In my case, I am building a console application, but you might put this in an event receiver or workflow.

In my example today, I have simple custom folder and document content types. I have one shared site column called DocumentType. I have a document library with each of these content types registered. In my document library, I have a folder named Test and I want to set its default column values using code. Here is what it looks like. Start by getting a reference to the list in question. This assumes you already have an SPWeb object. In my case I have created it and it is called site.

SPList customDocumentLibrary = site.Lists["CustomDocuments"];

You then pass the SPList object to the MetadataDefaults constructor.

MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);

Now I just need to get my SPFolder object in question and pass it to the method SetFieldDefault. This takes an SPFolder object, a string with the name of the SPField to set the default on, and finally the value of the default (in my case "Memo").

SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");

You can set multiple defaults here. When you're done, you will need to call .Update().

columnDefaults.Update();

Here is what it all looks like together.

using (SPSite siteCollection = new SPSite("http://sp2010/sites/ECMSource"))
{
    using (SPWeb site = siteCollection.OpenWeb())
    {
        SPList customDocumentLibrary = site.Lists["CustomDocuments"];
        MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);
        SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
        columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");
        columnDefaults.Update();
    }
}

You can verify that your property was set correctly on the Change Default Column Values page in your list. This is something that I could see used a lot in an ItemEventReceiver attached to a folder to do metadata inheritance. Whenever the user changes the value of the folder's property, you could have it update the default. Your code might look something like:

columnDefaults.SetFieldDefault(properties.ListItem.Folder, "MyField", properties.ListItem["MyField"].ToString());

This is a great way to keep the child items updated any time the value of a folder's property changes. I'm also wondering if this can be done via CAML. I tried saving a site template, but after importing I got an error on the default values page. I'll keep looking and let you know what I find out.
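To flesh out that event receiver idea, here is a minimal, hypothetical sketch. It assumes only the MetadataDefaults API shown above; the receiver class name is illustrative, the "DocumentType" field name is reused from the example, and you would still need to register the receiver on the library yourself.

using Microsoft.Office.DocumentManagement;
using Microsoft.SharePoint;

// Hypothetical receiver: when a folder's own DocumentType value is edited,
// push that value down as the default for new items created inside the folder.
public class FolderDefaultsReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        base.ItemUpdated(properties);

        SPListItem item = properties.ListItem;
        if (item == null || item.Folder == null)
            return; // only react to folders, not documents

        object value = item["DocumentType"];
        if (value == null)
            return;

        MetadataDefaults columnDefaults = new MetadataDefaults(properties.List);
        columnDefaults.SetFieldDefault(item.Folder, "DocumentType", value.ToString());
        columnDefaults.Update();
    }
}

The nice part of this arrangement is that the folder itself stays the single place where users set the value, while new documents dropped into it pick the value up automatically.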

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

About the Writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

Oracle provides the most complete Identity and Access Management platform. Oracle is the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, which makes its offering unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or verify identity for password resets, lockouts, and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings.

The solution offers the ability to challenge users with dynamic knowledge based authentication based on the risk of an access request or transaction, thereby adding a layer on top of other authentication methods such as static challenge questions or one-time passwords when needed. For example, with Oracle Identity Management self-service, the forgotten password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk. The Oracle Identity Management platform dynamically switches to use the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution, thus, both improves user experience and saves money by avoiding unnecessary help desk calls.

Oracle Identity and Access Management secures applications, Juniper SSL VPN and other web resources with a thoroughly modern, layered and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.
The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet which helps an enterprise grow their business with confidence.

Resources:
Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
Oracle Access Management (HTML)
Oracle Adaptive Access Manager (pdf)

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User

Some of my proudest, obscure Windows tricks are losing their relevance. I know I'm not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard: Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C); and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of Windows comes out, the support for some of these minor time-saving habits gets pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that's stood the test of time.

Remember Batch Files? Yes, it's true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there's PowerShell, but there's that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use "REM" for comment lines, because there's a cleaner way to do them!

Double Colon For Eye-Friendly Comments

Here is a very simple batch file, with pretty much minimal content:

@ECHO OFF
SETLOCAL
REM This is a comment
ECHO This batch file doesn't do much

If you code on a daily basis, this may be more suitable to your eyes:

@ECHO OFF
SETLOCAL
:: This is a comment
ECHO This batch file doesn't do much

Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ; or #

I often make visual pseudo-line breaks in my code, and this colon-based syntax works wonders:

@ECHO OFF
SETLOCAL
:: Do stuff
ECHO Doing Stuff
::::::::::::::::::::::::::::
:: Do more stuff
ECHO This batch file doesn't do much

Not only is it more readable, but there's a slight performance benefit. The batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate.

Two Pitfalls to Avoid

Be aware that there are a couple of situations where this hack will fail you. It most likely won't be a problem unless you're getting really sophisticated with your batch files.

Pitfall #1: Inline comments

@ECHO OFF
SETLOCAL
IF EXIST C:\SomeFile.txt GOTO END ::This will fail
:END

Unfortunately, this fails. You can only have whitespace to the left of your comments.
Pitfall #2: Code Blocks

@ECHO OFF
SETLOCAL
IF EXIST C:\SomeFile.txt (
    :: This will fail
    ECHO HELLO
)

Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude's site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have: Provided a “one-stop” area for managing upgrades at your company. Broken down the upgrade process into 5 (yes, 5) steps. Explained when and how to perform each step with dates specific to your pod. Included details about each step, visible by expanding the step. Translated the steps into 11 languages. Added a list of release-specific resources with links from the page. Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find: 1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand. 2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable. 3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.   4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution. 5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.   
Very nice, I commented, when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there's been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there's a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

Drivers
Reach
Scalability
Media hosting
Global distribution

Solution
Here's a sketch of how a social media application might be built out on Windows Azure:

Ingredients
Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
Access Control – this service is essential to managing user identity. It's backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability but it's also flexible enough to be combined with external Identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
Web Role – hosts the core of the web application and presents a central social hub for users.
Database – used to store core operational, functional, and workflow data for the solution's web services.
Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service.
Tables (optional) – for semi-structured data streams that don't need relational integrity such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

Training
These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

Windows Azure (16 labs)
Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

SQL Azure (7 labs)
Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

Windows Azure Services (9 labs)
As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Abstraction, Politics, and Software Architecture

    Abstraction can be defined as a general concept and/or idea that lacks any concrete details. Throughout history this type of thinking has led to an array of new ideas and innovations as well as increased confusion and conspiracy. If one were to look back at our history, they would see that abstraction has been used in various forms throughout our past. When I was growing up I do not know how many times I heard politicians say "Leave no child left behind" or "No child left behind" as a major part of their campaign rhetoric in regards to a stance on education. As you can see, their slogan is a perfect example of abstraction because it only offers a very general concept about improving our education system, but they do not mention how they would like to do it. If they did, then they would be adding concrete details to their abstraction, thus turning it into an actual working plan as to how we as a society can help children succeed in school and in life, but then they would not be using abstraction.

By now I'm sure you are wondering what abstraction has to do with software architecture. That is a fair question, but abstraction is a wonderful tool used in information technology, especially in the world of software architecture. Abstraction is one method of extracting the concepts of an idea so that it can be understood and discussed by others of varying technical abilities and backgrounds. One way in which I tend to extract my architectural design thoughts is through the use of basic diagrams to convey an idea for a system or a new feature for an existing application. This allows me to generically model an architectural design through the use of views and the Unified Modeling Language (UML). UML is a standard method for creating 4+1 Architectural View Models. The 4+1 Architectural View Model consists of 4 views typically created with UML as well as a general description of the concept that is being expressed by a model.

The 4+1 Architectural View Model:
Logical View: Models a system's end-user functionality.
Development View: Models a system as a collection of components and connectors to illustrate how it is intended to be developed.
Process View: Models the interaction between system components and connectors so as to indicate the activities of a system.
Physical View: Models the placement of the collection of components and connectors of a system within a physical environment.

Recently I had to use the concept of abstraction to express an idea for implementing a new security framework on an existing website. My concept would add session-based management in order to properly secure and allow page access based on valid user credentials and last user activity. I created a basic Process View by using UML diagrams to communicate the basic process flow of my changes in the application so that all of the project's stakeholders would be able to understand my idea. Additionally, I created a Logical View on a whiteboard while conveying the process workflow with a few stakeholders to show how end-users will be affected by the new framework and to gain additional input about the design. After my Logical and Process Views were accepted, I then started on creating a more detailed Development View in order to map how the system will be built as a collection of components and connectors based on the previously defined interactions. I really did not need to create a Physical View for this idea because we were updating an existing system that was already deployed based on an existing Physical View.
What do you think about the use of abstraction in the development of software architecture? Please let me know.

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction

Distributed version control systems (VCS's), like Git, provide a rich set of features for managing source code. Many development tools, including XCode, provide built-in support for various VCS's. These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control.

I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code. Save early, save often, and commit to the VCS often. I also hate merging code. Smaller and more frequent commits enable me to minimize merge time and effort as well. The workflow I prefer even for personal exploratory projects is:

1. Make small local changes to the codebase to create an incrementally improved (and working) system.
2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.

The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode's out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

Creating a Remote Repository

These are the steps I use to enable the full workflow identified above. For simplicity the "remote" repository is created on the local file system. This location could easily be on a mounted network volume.

Create a Test Project

My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes. XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.

Clone the Local Repository

We want to clone the XCode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit.

Open the Terminal application.
Clone the local repository to the remote repository location:
git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

Convert the Remote Repository to a Bare Repository

The remote repository only needs to contain the Git database. It does not need a checked out branch or local files.

Go to the remote repository folder:
cd /Users/Don/Dev/RemoteHelloGit
Indicate the repository is "bare":
git config --bool core.bare true
Remove files, leaving the .git folder:
rm -R *
Remove the "origin" remote:
git remote rm origin

Configure the Local Repository

The local repository should reference the remote repository. The remote name "origin" is used by convention to indicate the originating repository. This is set automatically when a repository is cloned. We will use the "origin" name here to reflect that relationship.

Go to the local repository folder:
cd /Users/Don/Dev/HelloGit
Add the remote:
git remote add origin /Users/Don/Dev/RemoteHelloGit

Test Connectivity

Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces.

Create a new local file:
date > date.txt
Add the new file to the local index:
git add date.txt
Commit the change to the local repository:
git commit -m "New file: date.txt"
Push the change to the remote repository:
git push origin master

Now you can save, commit, and push/pull to your OCD heart's content! Code without fear! --Don

    Read the article

  • Why I Love the Social Management Platform I Use

    - by Mike Stiles
    Not long ago, I asked the product heads for the various components of the Oracle Social Cloud’s SRM to say what they thought was coolest about their component. And while they did a fine job, it was recently pointed out to me that no one around here uses the platform in a real-world setting more than I do, as I not only blog and podcast my brains out, I also run Oracle Social’s own social properties. Of course I’m pro-Oracle Social’s product. Duh. But if you can get around immediately writing this off as a puff piece, there are real reasons beyond my employment that the Oracle SRM works for me as a community manager. If it didn’t, I could have simply written about something else, like how people love smartphones or something genius like that. Post Grid I like seeing what I want to see. I’m difficult that way. Post grid lets me see all posts for all channels, with custom columns showing me how posts are doing. I can filter the grid by social channel, published, scheduled, draft, suggested, etc. Then there’s a pullout side panel that shows me post details, including engagement analytics. From the pullout, I can preview the post, do a quick edit, a full edit, or (my favorite) copy a post so I can edit it and schedule it for other times so I don’t have to repeat from scratch. I’m not lazy, just time conscious. The Post Creation Environment Given our post volume, I need this to be as easy as it can be. I can highlight which streams I want the post to go out on, edit for the individual streams, maintain a media library that’s easy to upload to and attach from, tag posts, insert links that auto-shorten to an orac.le shortlink, schedule with a nice calendar visual, geo-target, drop photos inline into Twitter, and review each post. Watching My Channels The Engage component of the Oracle SRM brings in and drops into a grid the activity that’s happening on all my channels. I keep this open round-the-clock. Again, I get to see only what I want; social network, stream, unread messages, engagement by how I labeled them, and date range. I can bring up a post with a click, reply, label it, retweet it, assign it, delete it, archive it, etc. So don’t bother trying to be a troll on my channels. Analytics Social publishing and engaging 24/7 would be pretty unrewarding if I couldn’t see how our audience was responding. Frankly, I get more analytics than I know what to do with (I’m a content creator, not a data analyst). But I do know what numbers I care about, and they’re available by channel, date range, and campaigns. I’m seeing fan count, sources and demographics. I’m seeing engagement, what kinds of posts are getting engagement, and top engagers. I’m seeing my reach, both organic and paid. I’m seeing how individual posts performed in terms of engagement and virality, and posting time/date insight. Have I covered all the value propositions? I’ve covered pathetically few of them. It would be impossible in blog length to give shout-outs to the vast number of features and functionalities. From organizing teams and managing permissions with Workflow to the powerful ability to monitor topics (and your competition) across the web in Listen, it’s a major, and increasingly necessary, weapon in your social marketing arsenal. The life of a Community Manager is not for everybody. So if the Oracle SRM can actually make a Community Manager’s life easier, what’s not to love? I invite you to take a look at and participate in our Oracle Social Cloud social channels! 
Facebook Twitter YouTube Google Plus LinkedIn Daily Podcast on iHeartRadio @mikestiles @oraclesocial Photo: freeimages.com

    Read the article
