Search Results

Search found 32453 results on 1299 pages for 'osr wls oracle service re'.


  • Controlling the Sizing of the af:messages Dialog

    - by Duncan Mills
    Over the last day or so, a small change in behaviour between the 11.1.2.n releases of ADF and earlier versions has come to my attention. It concerns the default sizing of the dialog that the framework automatically generates to handle the display of JSF messages being handled by the <af:messages> component. Unlike a normal popup, you don't have a physical <af:dialog> or <af:window> to set the sizing on in your page definition, so you're at the mercy of what the framework provides. In this case the framework now defines a fixed 250x250 pixel content area for these message dialogs, which can look a bit weird if the message is either very short or very long. Unfortunately this is not something that you can control through the skin; instead you have to be a little more creative. Here's the solution I've come up with.

    Unfortunately, I've not found a supportable way to reset the dialog so as to say "just size yourself based on your contents". It is actually possible to do this by tweaking the correct DOM objects, but I wanted to start with a mostly supportable solution that only uses the best practice of working through the ADF client-side APIs.

    The Technique

    The basic approach I've taken is really very simple. The af:messages dialog is just a normal rich dialog object; it just happens to be one that is pre-defined for you with a particular known name, "msgDlg" (which hopefully won't change). Knowing this, you can call the accepted APIs to control the content width and height of that dialog. As our meerkat friends would say, "simples" [1].

    The JavaScript

    For this example I've defined three JavaScript functions. The first does all the hard work and is designed to be called from server-side Java or from a page load event to set the default. The second is a utility function used by the first to validate the values you're about to use for height and width. The final function is one that can be called from the page load event to set an initial default sizing if that's all you need to do.

    Function resizeDefaultMessageDialog()

        /**
         * Function that actually resets the default message dialog sizing.
         * Note that the width and height supplied define the content area,
         * so the actual physical dialog size will be larger to account for
         * the chrome containing the header / footer etc.
         * @param docId Faces component id of the document
         * @param contentWidth - new content width you need
         * @param contentHeight - new content height
         */
        function resizeDefaultMessageDialog(docId, contentWidth, contentHeight) {
          // Warning: this value may change from release to release
          var defMDName = "::msgDlg";
          // Find the default messages dialog
          var msgDialogComponent = AdfPage.PAGE.findComponentByAbsoluteId(docId + defMDName);
          // In your version add a check here to ensure we've found the right object!
          // Check the new width is supplied and is a positive number; if so, apply it.
          if (dimensionIsValid(contentWidth)) {
            msgDialogComponent.setContentWidth(contentWidth);
          }
          // Check the new height is supplied and is a positive number; if so, apply it.
          if (dimensionIsValid(contentHeight)) {
            msgDialogComponent.setContentHeight(contentHeight);
          }
        }

    Function dimensionIsValid()

        /**
         * Simple function to check that sensible numeric values are
         * being proposed for a dimension
         * @param sampleDimension
         * @return boolean
         */
        function dimensionIsValid(sampleDimension) {
          return (!isNaN(sampleDimension) && sampleDimension > 0);
        }

    Function initializeDefaultMessageDialogSize()

        /**
         * This function will re-define the default sizing applied by the framework
         * in 11.1.2.n versions.
         * It is designed to be called with the document onLoad event.
         */
        function initializeDefaultMessageDialogSize(loadEvent) {
          // Get the configuration information
          var documentId = loadEvent.getSource().getProperty('documentId');
          var newWidth = loadEvent.getSource().getProperty('defaultMessageDialogContentWidth');
          var newHeight = loadEvent.getSource().getProperty('defaultMessageDialogContentHeight');
          resizeDefaultMessageDialog(documentId, newWidth, newHeight);
        }

    Wiring in the Functions

    As usual, the first thing we need to do when using JavaScript with ADF is to define an af:resource in the document metaContainer facet:

        <af:document>
          ....
          <f:facet name="metaContainer">
            <af:resource type="javascript" source="/resources/js/hackMessagedDialog.js"/>
          </f:facet>
        </af:document>

    This makes the script functions available to call. Next, if you want to use the option of defining an initial default size for the dialog, you use a combination of <af:clientListener> and <af:clientAttribute> tags like this:

        <af:document title="MyApp" id="doc1">
          <af:clientListener method="initializeDefaultMessageDialogSize" type="load"/>
          <af:clientAttribute name="documentId" value="doc1"/>
          <af:clientAttribute name="defaultMessageDialogContentWidth" value="400"/>
          <af:clientAttribute name="defaultMessageDialogContentHeight" value="150"/>
          ...

    Just in Time Dialog Sizing

    So what happens if you have a variety of messages that you might add, and in some cases you need a small dialog and in other cases a large one? Well, in that case you can re-size these dialogs just before you submit the message. Here's some example Java code:

        FacesContext ctx = FacesContext.getCurrentInstance();
        // Reset the default dialog size for this message
        ExtendedRenderKitService service =
            Service.getRenderKitService(ctx, ExtendedRenderKitService.class);
        service.addScript(ctx, "resizeDefaultMessageDialog('doc1',100,50);");
        FacesMessage msg = new FacesMessage("Short message");
        msg.setSeverity(FacesMessage.SEVERITY_ERROR);
        ctx.addMessage(null, msg);

    So there you have it. This technique should, at least, allow you to control the dialog sizing just enough to stop really objectionable whitespace or scrollbars.

    [1] Don't worry if you don't get the reference; let's just say my kids watch too many adverts.

    Read the article

  • Six Best Practices for Empowering the Customer Experience

    - by Richard Lefebvre
    Companies that fail to offer a great Customer eXperience can face declining customer satisfaction numbers and a poor service experience that can be amplified over #social channels. Here are 6 best practices for empowering the Customer Experience. What are your top tips for a great CX? Read the article here

    Read the article

  • Siebel CRM 8.1.1 Solutions

    Listen to George Jacob, Group Vice President, CRM Applications, discuss the new release of Siebel CRM 8.1.1. By empowering end customers and creating a consistent, engaging service experience, businesses are leveraging Siebel CRM 8.1.1 to garner high customer loyalty levels and improve business profitability in this tough economic environment. Tune in today!

    Read the article

  • JavaOne 2011 - Moscow and Hyderabad Editions

    - by Cassandra Clark
    Connect with Java developers at JavaOne - JavaOne will be held in Moscow, April 12-13, 2011 and again in Hyderabad, May 10-11, 2011. Enjoy two days of technical content and hands-on learning focused on Java and next-generation development trends and technologies, including rich enterprise applications (REAs), service-oriented architecture (SOA), and the database.

    JavaOne Moscow tracks:
    - Java EE, Enterprise Computing, and the Cloud
    - Java SE, Client Side Technologies, and Rich User Experiences
    - Java ME, Mobile, and Embedded

    JavaOne Hyderabad tracks:
    - Core Java Platform
    - Java EE, Enterprise Computing, and the Cloud
    - Java SE, Client Side Technologies, and Rich User Experiences
    - Java ME, Mobile, and Embedded

    Register Now for JavaOne Moscow! Register Now for JavaOne Hyderabad!

    Read the article

  • What is the best strategy for licensing a desktop application using a web service, when all I need to know is when people use the product?

    - by user1667022
    Our company's main application is a desktop program that is used at warehouses and written in C# with Windows Presentation Foundation (WPF). The next thing we want to be able to do is track when customers open the application and when it is being used. The reason for this is so we can charge them per month, based on whether they are or aren't using the application. My boss is having me research different ways to "license" the product under these requirements. Not having any experience doing this, a few things come to mind. I could create a web application that runs on a server, and every time the desktop application is opened and the user logs in, the application connects to the server and records the DateTime in a database. Or is there licensing software that I can use to accomplish this? Just looking for tips/advice from people who have experience with this type of stuff.
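
    A minimal sketch of the check-in approach described in the question might look like the following C#. Note that the endpoint URL, the payload fields, and the UsageTracker/ReportStartup names are all illustrative assumptions, not an existing service or library:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public static class UsageTracker
        {
            // Called once at application startup, after the user logs in.
            public static void ReportStartup(string licenseKey, string userName)
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://licensing.example.com/api/checkin"); // hypothetical endpoint
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";

                // Record who opened the application and when; the server should
                // also stamp its own time rather than trusting the client clock.
                string body = String.Format("key={0}&user={1}&openedAtUtc={2:o}",
                    Uri.EscapeDataString(licenseKey),
                    Uri.EscapeDataString(userName),
                    DateTime.UtcNow);

                byte[] bytes = Encoding.UTF8.GetBytes(body);
                request.ContentLength = bytes.Length;
                using (Stream s = request.GetRequestStream())
                {
                    s.Write(bytes, 0, bytes.Length);
                }

                // A non-2xx response could signal "subscription lapsed".
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode != HttpStatusCode.OK)
                        throw new InvalidOperationException(
                            "License check-in rejected: " + response.StatusCode);
                }
            }
        }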

    Read the article

  • Building a better mouse-trap - Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction

    The way I previously created messages to send to the GovTalk service was to build the request with an XMLDocument. While this worked, it left a number of problems, not least that a special function would need to be created for every message. That is OK in the short term, but the biggest cost in any software project is maintenance, and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to use the CompanyNumberSearch request of the GovTalk service - although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce.

    The Sweet Chilli Sauce

    The request to search for a company based on its number is as follows:

        <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
                        xmlns="http://www.govtalk.gov.uk/CM/envelope"
                        xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
                        xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
                        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <EnvelopeVersion>1.0</EnvelopeVersion>
          <Header>
            <MessageDetails>
              <Class>NumberSearch</Class>
              <Qualifier>request</Qualifier>
              <TransactionID>1</TransactionID>
            </MessageDetails>
            <SenderDetails>
              <IDAuthentication>
                <SenderID>????????????????????????????????</SenderID>
                <Authentication>
                  <Method>CHMD5</Method>
                  <Value>????????????????????????????????</Value>
                </Authentication>
              </IDAuthentication>
            </SenderDetails>
          </Header>
          <GovTalkDetails>
            <Keys/>
          </GovTalkDetails>
          <Body>
            <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                                 xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
              <PartialCompanyNumber>99999999</PartialCompanyNumber>
              <DataSet>LIVE</DataSet>
              <SearchRows>1</SearchRows>
            </NumberSearchRequest>
          </Body>
        </GovTalkMessage>

    This is the XML that we send to the GovTalk service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request along with the name of the request, and the body, which gives the detail of the company we are looking for.

    The Chilli

    What makes it possible is the use of XSLT to define the message - and serialization to convert each request object into XML. To start, we need to create an object which will represent the contents of the message we are sending. There are, however, common properties in all the messages that we send to Companies House:

    - SenderId - the id of the person sending the message
    - SenderPassword - the password associated with the id
    - TransactionId - unique identifier for the message
    - AuthenticationValue - authenticates the request

    Because these properties are unique to the Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
    The class is as follows:

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Text.RegularExpressions;
        using Microsoft.WindowsAzure.ServiceRuntime;

        namespace CompanyHub.Services
        {
            public class GovTalkRequest
            {
                public GovTalkRequest()
                {
                    try
                    {
                        SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId");
                        SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword");
                        TransactionId = DateTime.Now.Ticks.ToString();
                        AuthenticationValue = EncodePassword(
                            String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId));
                    }
                    catch (System.Exception ex)
                    {
                        throw ex;
                    }
                }

                /// <summary>
                /// returns the Sender ID to be used when communicating with the GovTalk Service
                /// </summary>
                public String SenderID { get; set; }

                /// <summary>
                /// returns the password to be used when communicating with the GovTalk Service
                /// </summary>
                public String SenderPassword { get; set; }

                /// <summary>
                /// Transaction Id - uses the time and date converted to Ticks
                /// </summary>
                public String TransactionId { get; set; }

                /// <summary>
                /// the authentication value that will be used when
                /// communicating with Companies House
                /// </summary>
                public String AuthenticationValue { get; set; }

                /// <summary>
                /// encodes password(s) using MD5
                /// </summary>
                /// <param name="clearPassword"></param>
                /// <returns></returns>
                public static String EncodePassword(String clearPassword)
                {
                    MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider();
                    byte[] hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword));
                    String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower();
                    return result;
                }
            }
        }

    There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, password and transaction id. Each message inherits from this object. So for the company number search, in addition to the properties above, we need a partial company number, which dataset to search (for the purposes of the project we only need to search the LIVE set, so this can be set in the constructor) and the SearchRows. Again, all are set as properties, with the SearchRows and DataSet initialized in the constructor.

        public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable
        {
            public CompanyNumberSearchRequest() : base()
            {
                DataSet = "LIVE";
                SearchRows = 1;
            }

            /// <summary>
            /// Company number to search against
            /// </summary>
            public String PartialCompanyNumber { get; set; }

            /// <summary>
            /// Which dataset should be searched for the company
            /// </summary>
            public String DataSet { get; set; }

            /// <summary>
            /// How many rows should be returned
            /// </summary>
            public int SearchRows { get; set; }

            public void Dispose()
            {
                PartialCompanyNumber = String.Empty;
                DataSet = "LIVE";
                SearchRows = 1;
            }
        }

    As well as inheriting from our base class, I have also implemented IDisposable - not just because it is plain good practice to dispose of objects when coding, but because it also gives us more versatility when using the object.
    There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service:

    1. Create a request
    2. Send the request
    3. Check the status
    4. If OK, then get the results of the request

    I've implemented each of these stages within a static class called Toolbox - which also means I don't need to create an instance of the class to use it. When making a request there are three stages:

    1. Get the template for the message
    2. Serialize the object representing the message
    3. Transform the serialized object using a predefined XSLT file

    Each of my templates is defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. To make the code as reusable as possible, I build the full 'path' within the GetRequest method:

        requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);

    So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource:

        asm = Assembly.GetExecutingAssembly();
        sr = asm.GetManifestResourceStream(requestFile);

    Once retrieved, the stream can be returned to the calling function, and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message:

        // Serialize object containing Request, load into XML Document
        t = Obj.GetType();
        ms = new MemoryStream();
        serializer = new XmlSerializer(t);
        xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
        serializer.Serialize(xmlTextWriter, Obj);
        ms = (MemoryStream)xmlTextWriter.BaseStream;
        GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());

    First off we need the type of the object, so we make a call to the GetType method of the object containing the message properties. Next we need a MemoryStream, an XmlSerializer and an XmlTextWriter, so these can be initialized. The object is serialized by calling the Serialize method of the serializer object. The result of that is then converted into a MemoryStream, and that MemoryStream is converted into a string.

    ConvertByteArrayToString

    This is a fairly simple function which uses an ASCIIEncoding object, found within the System.Text namespace, to convert an array of bytes into a string:

        public static String ConvertByteArrayToString(byte[] bytes)
        {
            System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
            return enc.GetString(bytes);
        }

    I only put it into a function because I will be using it in various places.

    The Sauce

    When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message.
    For the CompanyNumberSearch the XSLT file is as follows:

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/">
            <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
                            xmlns="http://www.govtalk.gov.uk/CM/envelope"
                            xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
                            xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
                            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <EnvelopeVersion>1.0</EnvelopeVersion>
              <Header>
                <MessageDetails>
                  <Class>NumberSearch</Class>
                  <Qualifier>request</Qualifier>
                  <TransactionID>
                    <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/>
                  </TransactionID>
                </MessageDetails>
                <SenderDetails>
                  <IDAuthentication>
                    <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID>
                    <Authentication>
                      <Method>CHMD5</Method>
                      <Value>
                        <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/>
                      </Value>
                    </Authentication>
                  </IDAuthentication>
                </SenderDetails>
              </Header>
              <GovTalkDetails>
                <Keys/>
              </GovTalkDetails>
              <Body>
                <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                                     xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
                  <PartialCompanyNumber>
                    <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/>
                  </PartialCompanyNumber>
                  <DataSet>
                    <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/>
                  </DataSet>
                  <SearchRows>
                    <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/>
                  </SearchRows>
                </NumberSearchRequest>
              </Body>
            </GovTalkMessage>
          </xsl:template>
        </xsl:stylesheet>

    The outer two tags define that this is an XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object, which will transform the XSLT template and the serialized object into the request to Companies House:

        xslt = new XslCompiledTransform();
        resultStream = new MemoryStream();
        writer = new XmlTextWriter(resultStream, Encoding.ASCII);
        doc = new XmlDocument();

    The transform requires an XmlTextWriter to write the XML (writer) and a stream to place the transformed output into (resultStream). The XML will be loaded into an XmlDocument object (doc) prior to the transformation.

        // create XSLT Template
        xslTemplate = Toolbox.GetRequest(Template);
        xslTemplate.Seek(0, SeekOrigin.Begin);
        templateReader = XmlReader.Create(xslTemplate);
        xslt.Load(templateReader);

    I have stored all the templates as a series of embedded resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file:

        /// <summary>
        /// Gets the framework XML which makes the request
        /// </summary>
        /// <param name="RequestFile"></param>
        /// <returns></returns>
        public static Stream GetRequest(String RequestFile)
        {
            String requestFile = String.Empty;
            Stream sr = null;
            Assembly asm = null;
            try
            {
                requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);
                asm = Assembly.GetExecutingAssembly();
                sr = asm.GetManifestResourceStream(requestFile);
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                asm = null;
            }
            return sr;
        } // end public static Stream GetRequest

    We first take the template name and expand it to include the full namespace to the embedded resource. I like to keep all my schemas in the same directory, and so the namespace reflects this. The rest is the default namespace for the project.
    Then we get the currently executing assembly (which contains the resources) with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. We use this stream to load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object.

    We convert the object containing the message properties into XML by serializing it, calling the Serialize() method of the XmlSerializer object. To set up the object we do the following:

        t = Obj.GetType();
        ms = new MemoryStream();
        serializer = new XmlSerializer(t);
        xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);

    We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing it the type of the object being serialized. The serializer writes to a memory stream, and that is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument:

        serializer.Serialize(xmlTextWriter, Obj);
        ms = (MemoryStream)xmlTextWriter.BaseStream;
        xmlRequest = new XmlTextReader(ms);
        GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
        doc.LoadXml(GovTalkRequest);

    Time to transform the XML to construct the full request:

        xslt.Transform(doc, writer);
        resultStream.Seek(0, SeekOrigin.Begin);
        request = Toolbox.ConvertByteArrayToString(resultStream.ToArray());

    So that creates the full request to be sent to Companies House.

    Sending the request

    So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House service.

    Configuration within an Azure project

    There are entire blog entries written about configuration within an Azure project - most of this is out of scope for this article, but the following is a summary. Configuration is defined in two files within the parent project. The *.csdef file contains the definition of each configuration setting:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WebRole name="CompanyHub.Host">
            <InputEndpoints>
              <InputEndpoint name="HttpIn" protocol="http" port="80" />
            </InputEndpoints>
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" />
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
          </WebRole>
          <WebRole name="CompanyHub.Services">
            <InputEndpoints>
              <InputEndpoint name="HttpIn" protocol="http" port="8080" />
            </InputEndpoints>
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" />
              <Setting name="SenderId"/>
              <Setting name="SenderPassword" />
              <Setting name="GovTalkUrl"/>
            </ConfigurationSettings>
          </WebRole>
          <WorkerRole name="CompanyHub.Worker">
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" />
            </ConfigurationSettings>
          </WorkerRole>
        </ServiceDefinition>

    Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
    There are four configuration settings here, but at the moment we are interested in the second through fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceDefinition.cscfg file:

        <?xml version="1.0"?>
        <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="CompanyHub.Host">
            <Instances count="2" />
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
              <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
            </ConfigurationSettings>
          </Role>
          <Role name="CompanyHub.Services">
            <Instances count="2" />
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
              <Setting name="SenderId" value="UserID"/>
              <Setting name="SenderPassword" value="Password"/>
              <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/>
            </ConfigurationSettings>
          </Role>
          <Role name="CompanyHub.Worker">
            <Instances count="2" />
            <ConfigurationSettings>
              <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters, we can now transmit the request. This is done by 'POST'ing a stream of XML to the Companies House servers:

        govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
        request = WebRequest.Create(govTalkUrl);
        request.Method = "POST";
        request.ContentType = "text/xml";
        writer = new StreamWriter(request.GetRequestStream());
        writer.WriteLine(RequestMessage);
        writer.Close();

    We use the WebRequest object to send the request: set the method of sending to 'POST' and the type of data to text/xml. Once set up, all we do is write the request to the writer - this sends the request to Companies House.

    Did the Request Work, Part I - Getting the response

    Having sent a request, we now need the result of that request:

        response = request.GetResponse();
        reader = response.GetResponseStream();
        result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));

    The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the results come in the form of a stream, which we convert into a string (the ReadFully helper isn't listed in the post; a sketch follows below).

    Did the Request Work, Part II - Translating the Response

    Much as XSLT and XML were used to create the original request, so they can be used to extract the response, and by deserializing the result we create an object that contains the response.
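
    Since Toolbox.ReadFully isn't shown anywhere in the post, here is a minimal sketch of what such a helper usually looks like - an assumption on my part, not the author's listing:

        /// <summary>
        /// Reads a stream to the end and returns its content as a byte array.
        /// </summary>
        public static byte[] ReadFully(Stream input)
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                byte[] chunk = new byte[4096];
                int read;
                while ((read = input.Read(chunk, 0, chunk.Length)) > 0)
                {
                    buffer.Write(chunk, 0, read);
                }
                return buffer.ToArray();
            }
        }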
    Did it work?

    It would be really great if everything worked all the time. Of course, if it did, then I don't suppose people would pay me and others the big bucks so that our programmes do not:

    a) Collapse in a heap (this is an area of memory)
    b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
    c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA gets a manned moon/mars mission set up, unlikely to happen)
    d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers - note, writers; Hollywood executives have no imagination, and judging by the recent output of that town have turned plagiarism into an art form)
    e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened)

    So anyway - we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document:

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:ev="http://www.govtalk.gov.uk/CM/envelope"
                        xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
                        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <xsl:template match="/">
            <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
              <Status>
                <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/>
              </Status>
              <Text>
                <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/>
              </Text>
              <Location>
                <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/>
              </Location>
              <Number>
                <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/>
              </Number>
              <Type>
                <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/>
              </Type>
            </GovTalkStatus>
          </xsl:template>
        </xsl:stylesheet>

    The only thing different from the previous XSL file is the references to the two namespaces ev & gt. These are defined at the top of the GovTalk response:

        xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
        xmlns="http://www.govtalk.gov.uk/CM/envelope"
        xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
        xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"

    If we do not put these references into the XSLT template, then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity:

        encoder = new ASCIIEncoding();
        ms = new MemoryStream(encoder.GetBytes(statusXML));
        serializer = new XmlSerializer(typeof(GovTalkStatus));
        xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
        messageStatus = (GovTalkStatus)serializer.Deserialize(ms);

    We set up a serialization object using the object type containing the error state and pass to it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message. All we need to do is check the status. If there is an error then we can flag an error.
    If not, then we extract the results and pass those as an object back to the calling function. We do this by - guess what - defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a company number search is:

        <?xml version="1.0" encoding="us-ascii"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:ev="http://www.govtalk.gov.uk/CM/envelope"
                        xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema"
                        exclude-result-prefixes="ev">
          <xsl:template match="/">
            <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
              <CompanyNumber>
                <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/>
              </CompanyNumber>
              <CompanyName>
                <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/>
              </CompanyName>
            </CompanySearchResult>
          </xsl:template>
        </xsl:stylesheet>

    and the object definition is:

        using System;

        namespace CompanyHub.Services
        {
            public class CompanySearchResult
            {
                public CompanySearchResult()
                {
                    CompanyNumber = String.Empty;
                    CompanyName = String.Empty;
                }

                public String CompanyNumber { get; set; }
                public String CompanyName { get; set; }
            }
        }

    Our entire code to send a request and interpret the results is:

        String request = String.Empty;
        String response = String.Empty;
        GovTalkStatus status = null;
        fault = null;
        try
        {
            using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
            {
                requestObj.PartialCompanyNumber = CompanyNumber;
                request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
                response = Toolbox.SendGovTalkRequest(request);
                status = Toolbox.GetMessageStatus(response);
                if (status.Status.ToLower() == "error")
                {
                    fault = new HubFault() { Message = status.Text };
                }
                else
                {
                    Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
                }
            }
        }
        catch (FaultException<ArgumentException> ex)
        {
            fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message };
        }
        catch (System.Exception ex)
        {
            fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message };
        }
        finally
        {
        }

    Wrap up

    So there we have it - a reusable set of functions to send and interpret XML results from an internet-based service. The code is reusable with little change for any service which uses XML as a transport mechanism - and as for the Companies House GovTalk service, all I need to do is create the various objects for the result and the message sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
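
    As a footnote, Toolbox.GetGovTalkResponse is called above but never listed. Based on the transform-then-deserialize steps described earlier, a sketch of its likely shape might be as follows - this is my assumption, not the author's code:

        public static Object GetGovTalkResponse(String response, String template, Type resultType)
        {
            // Transform the raw GovTalk response using the result XSLT template...
            XslCompiledTransform xslt = new XslCompiledTransform();
            using (XmlReader templateReader = XmlReader.Create(Toolbox.GetRequest(template)))
            {
                xslt.Load(templateReader);
            }

            XmlDocument doc = new XmlDocument();
            doc.LoadXml(response);

            MemoryStream resultStream = new MemoryStream();
            XmlTextWriter writer = new XmlTextWriter(resultStream, Encoding.ASCII);
            xslt.Transform(doc, writer);
            resultStream.Seek(0, SeekOrigin.Begin);

            // ...then deserialize the transformed XML into the requested type.
            XmlSerializer serializer = new XmlSerializer(resultType);
            return serializer.Deserialize(resultStream);
        }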

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging-related topics, so why break the habit, I ask? Anyway, here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless - you know why they are there and are happy to ignore them - but at the same time you either can't or don't want to risk changing the deployed code to "fix" it to remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below.

    First Create Your Filter

    You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface and basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord, which provides you with access to everything you need to decide if you want to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:

        public class MyLoggingFilter implements Filter {
            public boolean isLoggable(LogRecord record) {
                if (!record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")) {
                    return false;
                }
                return true;
            }
        }

    Deploying

    This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS, of course.

    Using

    The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case:

        <logger name="some.vendor.adf.ClassICantChange"
                filter="oracle.demo.MyLoggingFilter"/>

    You can also apply the filter using WLST if you want a more script-y solution.

    Read the article

  • How can I automatically update Flash Player whenever a new version is released? [closed]

    - by user219950
    Summary: the Flash Player Update Service doesn't run on a reliable schedule, and doesn't automatically download and apply updates when it does run. Given the importance of having an up-to-date version of Flash Player installed (for those of us who don't use Chrome with its built-in player), I would like to find a way to ensure that new updates are promptly detected and installed. What follows are the details of my efforts to solve this problem on my own...

    Appendix A: Flash Player Update Service

    OK, way back in Flash Player 11.2 (or so?) Adobe added the Flash Player Update Service (FlashPlayerUpdateService.exe); it was supposed to keep Flash Player updated. Upon installation, FPUS is configured to run as a Windows Service, with Start Type set to Manual. A scheduled task (Adobe Flash Player Updater.job) is added to start this service every hour. So far, so good - this set-up avoids having a constantly-running service, but makes sure that the checks run often enough to catch any updates quickly. Google's software updater is configured in a similar fashion, and that works just fine...

    ...And yet, when I checked the version of my installed Flash Player, I found it was 11.6.602.180, which, based on the timestamps of the files in C:\Windows\System32\Macromed\Flash, was last updated (or installed) on Tue, Mar 12, 2013, 5:00:08 PM. I made this observation on Thu, Apr 25, 2013, 7:00:00 PM, and upon checking Adobe's website found that the current version of Flash Player was 11.7.700.169. That's over a month since the last update, with a new one clearly available on the website, but with no indication that the hourly check running on my machine has noticed it or has any intention of downloading it.

    Appendix B: Running the Flash Player updater manually

    Once upon a time, running FlashUtil32_<version>_Plugin.exe -update plugin would give you a window with an Install button; pressing it would download the installer for the current version (automatically, without opening a browser) and run it, then you'd click through that installer and be done. It was manual, but it worked! Finding my current installation out of date (see Appendix A), I first tried this manual update process. However...

    - Running FlashUtil32_<version>_ActiveX.exe -update activex (in my case, FlashUtil32_11_6_602_180_ActiveX.exe -update activex) only presents a window with a Download button, and clicking that button opens my browser to https://get3.adobe.com/flashplayer/update/activex.
    - Running FlashUtil32_<version>_Plugin.exe -update plugin (in my case, FlashUtil32_11_6_602_180_Plugin.exe -update plugin) only presents a window with a Download button, and clicking that button opens my browser to https://get3.adobe.com/flashplayer/update/plugin.

    I could continue with the download page it sent me to, uncheck the foistware box ("Free! McAfee Security Scan Plus"), download that installer (ActiveX, no foistware: install_flashplayer11x32axau_mssd_aih.exe; Plugin, no foistware: install_flashplayer11x32au_mssd_aih.exe) and probably have an updated Flash... but then, what is the point of the Flash Player Update Service if I have to manually download and run another exe?

    Epilogue

    I've since come to suspect that the update service is intentionally hobbled to drive early adopters to the manual download page. If this is true, there's probably no solution to this short of writing my own updater; hopefully I am wrong.

    Read the article

  • Delaying NIS & NFS startup till after network interface is fully ready on Fedora 17

    - by obmarg
    I've recently set up a Fedora 17 server for our network, and I've been having trouble getting the NIS service to work on startup. Here are some logs from the system:

        Aug 21 12:57:12 cairnwell ypbind-pre-setdomain[718]: Setting NIS domain: 'indigo-nis' (environment variable)
        Aug 21 12:57:13 cairnwell ypbind: Binding NIS service
        Aug 21 12:57:13 cairnwell rpc.statd[730]: Unable to prune capability 0 from bounding set: Operation not permitted
        Aug 21 12:57:13 cairnwell systemd[1]: nfs-lock.service: control process exited, code=exited status=1
        Aug 21 12:57:13 cairnwell systemd[1]: Unit nfs-lock.service entered failed state.
        Aug 21 12:57:14 cairnwell setroubleshoot: SELinux is preventing /usr/sbin/rpc.statd from using the setpcap capability. For complete SELinux messages. run sealert -l 024bba8a-b0ef-43dc-b195-5c9a2d4c4d41
        Aug 21 12:57:15 cairnwell kernel: [ 18.822282] bnx2 0000:02:00.0: em1: NIC Copper Link is Up, 1000 Mbps full duplex
        Aug 21 12:57:15 cairnwell kernel: [ 18.822925] ADDRCONF(NETDEV_CHANGE): em1: link becomes ready
        Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): carrier now ON (device state 20)
        Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
        Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Auto-activating connection 'System em1'.
        Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Activation (em1) starting connection 'System em1'
        Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: disconnected -> prepare (reason 'none') [30 40 0]
        .......
        Aug 21 12:57:19 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
        Aug 21 12:57:26 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
        Aug 21 12:57:31 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
        Aug 21 12:57:35 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
        Aug 21 12:58:00 cairnwell ypbind: Binding took 47 seconds
        Aug 21 12:58:00 cairnwell ypbind: NIS server for domain indigo-nis is not responding.
        Aug 21 12:58:01 cairnwell ypbind: Killing ypbind with PID 733.
        Aug 21 12:58:01 cairnwell ypbind-post-waitbind[734]: /usr/lib/ypbind/ypbind-post-waitbind: line 51: kill: SIGTERM: invalid signal specification
        Aug 21 12:58:01 cairnwell systemd[1]: ypbind.service: control process exited, code=exited status=1
        Aug 21 12:58:01 cairnwell systemd[1]: Unit ypbind.service entered failed state.

    By the looks of these logs, the ypbind service is starting up at 12:57:12 but the network interface isn't coming up till 12:57:15. My guess is that this is causing ypbind to time out when trying to connect. As a knock-on effect, the NIS failure is causing problems with NFS, which is no longer able to map UIDs properly. This problem doesn't seem to be fixed by starting ypbind etc. afterwards, so I've had to set all my NFS shares to noauto. I have tried manually adding NETWORKDELAY and NETWORKWAIT in /etc/sysconfig/network and also running systemctl enable NetworkManager-wait-online.service, as I've seen suggested in some places, but neither of these has had any effect. It is relatively easy to fix manually by restarting ypbind and mounting the NFS shares after the network has started up, but it's less than ideal to have to do this every time the server is rebooted. Does anyone know of an easy (and preferably hack-free) way of delaying the ypbind startup till after the network interface is fully ready?
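
    One direction worth exploring (an assumption based on standard systemd ordering rules, not a verified fix for this particular Fedora 17 setup) is to give ypbind.service an explicit ordering dependency on network-online.target. For example, copy the stock unit so local changes survive package updates, then add the ordering lines to its [Unit] section:

        # cp /lib/systemd/system/ypbind.service /etc/systemd/system/ypbind.service
        # then edit /etc/systemd/system/ypbind.service and extend [Unit] with:
        [Unit]
        Wants=network-online.target
        After=network-online.target

        # reload unit files and re-enable the service:
        # systemctl daemon-reload
        # systemctl reenable ypbind.service

    For this to help, something (such as the NetworkManager-wait-online.service mentioned in the question) must actually make network-online.target wait until the interface is configured.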

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB), and it seems the FreeRADIUS service can't start properly on startup. But it starts fine when run by the root user or in debug mode (radiusd -X), and then works just fine! Debug mode shows no errors. The systemctl command shows that radiusd.service has failed to start.

    /var/log/messages output:

        Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server...
        Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server...
        Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server....
        Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent.
        Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*.
        Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1
        Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server..
        Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state.
        Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel
        Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK]
        Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming.
        Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server.
        Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server.

    /var/log/radius/radius.log output:

        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius"
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0)
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0)
        Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql"

    After seeing this I tried to replicate the problem: I killed mariadb.service and started FreeRADIUS in debug mode again, and it spat out the same error as in radius.log. I tried disabling iptables and firewalld and rebooting, but no luck:

        systemctl disable iptables
        systemctl disable firewalld

    So maybe the problem is in the process startup order, or a delay of some kind. Maybe FreeRADIUS's SQL module can't connect to a not-yet-started MariaDB? If so, how can I fix this? In earlier versions of RHEL/CentOS you could easily see the service start order in rc.d and the like; now I don't know. I am new to this fancy "systemd", "systemctl", "firewalld" stuff CentOS 7 introduced, so sorry, I'm a little bit confused. Also this new FreeRADIUS 3 structure...

    PS. MariaDB is enabled on startup, and the credentials in the FreeRADIUS DB configuration are correct.

    A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output:

        [Unit]
        Description=FreeRADIUS high performance RADIUS server.
        After=syslog.target network.target

        [Service]
        Type=forking
        PIDFile=/var/run/radiusd/radiusd.pid
        ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
        ExecStartPre=/usr/sbin/radiusd -C
        ExecStart=/usr/sbin/radiusd -d /etc/raddb
        ExecReload=/usr/sbin/radiusd -C
        ExecReload=/bin/kill -HUP $MAINPID

        [Install]
        WantedBy=multi-user.target
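
    The unit file above only orders radiusd after syslog.target and network.target, which matches the log sequence where radiusd fails a second before "Started MariaDB database server." appears. A plausible fix (an assumption based on that ordering, not a confirmed solution for this system) is a systemd drop-in that makes radiusd wait for MariaDB; the drop-in file name here is arbitrary:

        # /etc/systemd/system/radiusd.service.d/after-mariadb.conf
        [Unit]
        Wants=mariadb.service
        After=mariadb.service

        # then apply it:
        # systemctl daemon-reload
        # systemctl restart radiusd.service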

    Read the article

  • Webcast: Attack of the Customers - The rise of the Empowered Consumer

    - by Richard Lefebvre
    Watch Paul Gillin, author of "Attack of the Customers: Why Critics Assault Brands Online and How to Avoid Becoming a Victim," and Oracle Social Cloud Vice President Erika Brookes talk about the rise of the empowered consumer. Watch now!

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected in any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected in the WCF authorization context.

        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
            SecurityTokenHandlerCollection handlers;

            public SamlAuthenticationInterceptor()
                : base(false)
            {
                this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
            }

            public override void ProcessRequest(ref RequestContext requestContext)
            {
                SecurityToken token = ExtractCredentials(requestContext.RequestMessage);
                if (token != null)
                {
                    ClaimsIdentityCollection claims = handlers.ValidateToken(token);
                    var principal = new ClaimsPrincipal(claims);
                    InitializeSecurityContext(requestContext.RequestMessage, principal);
                }
                else
                {
                    DenyAccess(ref requestContext);
                }
            }

            private void DenyAccess(ref RequestContext requestContext)
            {
                Message reply = Message.CreateMessage(MessageVersion.None, null);
                HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
                responseProperty.Headers.Add("WWW-Authenticate",
                    String.Format("Basic realm=\"{0}\"", ""));
                reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
                requestContext.Reply(reply);
                requestContext = null;
            }

            private SecurityToken ExtractCredentials(Message requestMessage)
            {
                HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
                string authHeader = request.Headers["Authorization"];
                if (authHeader != null && authHeader.Contains("<saml"))
                {
                    XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
                    var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
                    SecurityToken token = col.ReadToken(xmlReader);
                    return token;
                }
                return null;
            }

            private void InitializeSecurityContext(Message request, IPrincipal principal)
            {
                List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
                policies.Add(new PrincipalAuthorizationPolicy(principal));
                ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());
                if (request.Properties.Security != null)
                {
                    request.Properties.Security.ServiceSecurityContext = securityContext;
                }
                else
                {
                    request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
                }
            }

            class PrincipalAuthorizationPolicy : IAuthorizationPolicy
            {
                string id = Guid.NewGuid().ToString();
                IPrincipal user;

                public PrincipalAuthorizationPolicy(IPrincipal user)
                {
                    this.user = user;
                }

                public ClaimSet Issuer
                {
                    get { return ClaimSet.System; }
                }

                public string Id
                {
                    get { return this.id; }
                }

                public bool Evaluate(EvaluationContext evaluationContext, ref object state)
                {
                    evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
                    evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
                    evaluationContext.Properties["Principal"] = user;
                    return true;
                }
            }
        }

    A WCF Data Service, like any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file:

        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
                result.Interceptors.Add(new SamlAuthenticationInterceptor());
                return result;
            }
        }

    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one, as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit).

    Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any HTTP client library for consuming the data service, that's easy to do; you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the HTTP request before it is sent to the service. So you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header:

        public static class DataServiceContextExtensions
        {
            public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
            {
                string address = string.Format(STSAddressFormat, baseStsAddress, realm);
                string token = NegotiateSecurityToken(address);
                context.SendingRequest += (source, args) =>
                {
                    args.RequestHeaders.Add("Authorization", token);
                };
            }

            private static string NegotiateSecurityToken(string address)
            {
                // Implementation depends on how you negotiate tokens with your STS.
                throw new NotImplementedException();
            }
        }

    I left the NegotiateSecurityToken method empty for this extension, as it depends pretty much on how you are negotiating tokens from an existing STS (one possible shape is sketched below). In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.
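
    For completeness, here is one shape NegotiateSecurityToken could take using WIF's WSTrustChannelFactory. Treat this as a sketch under assumed details - the binding, the placeholder credentials, and the realm URL below are my assumptions, and your STS may require a different trust version or key type:

        private static string NegotiateSecurityToken(string address)
        {
            // Assumes a WS-Trust 1.3 username endpoint issuing bearer SAML tokens.
            var binding = new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential);
            var factory = new WSTrustChannelFactory(binding, new EndpointAddress(address));
            factory.TrustVersion = TrustVersion.WSTrust13;
            factory.Credentials.UserName.UserName = "user";         // placeholder
            factory.Credentials.UserName.Password = "password";     // placeholder

            var rst = new RequestSecurityToken
            {
                RequestType = WSTrust13Constants.RequestTypes.Issue,
                AppliesTo = new EndpointAddress("https://myservice.example.com/"), // placeholder realm
                KeyType = WSTrust13Constants.KeyTypes.Bearer
            };

            var channel = factory.CreateChannel();
            var token = channel.Issue(rst) as GenericXmlSecurityToken;

            // The raw token XML is what the interceptor expects in the Authorization header.
            return token.TokenXml.OuterXml;
        }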

    Read the article

  • Creating Descriptive Flex Field (DFF) Bean in OAF

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a custom DFF to a custom OAF page. I am using the XXCUST_DFF_DEMO table to store the DFF values, and a custom DFF named XXCUST_PERSON_DFF. The following steps need to be performed to create this solution:

    1) Register the custom table in Oracle Applications
    2) Register the DFF
    3) Define the segments of the DFF
    4) Create BC4J components for OAF and an OA page which holds the DFF

    I will explain the steps in detail below.

    Register the custom table in Oracle Applications

    I am using a custom DFF here, so I have to register the custom table in which I am going to capture the values. Please click here to see the table script. I am using the AD_DD package to register the custom table. Please click here to see the table registration script. Please verify that the table has registered successfully. Navigation: Application Developer > Application > Database > Table. Table has registered successfully.

    Register the DFF

    The next step is to register the DFF. Navigate to Application Developer > Flex Field > Descriptive > Register. Give details as below. Click on Reference Fields and set the Reference Field as ATTRIBUTE_CATEGORY. Click on the Columns button to verify that the columns ATTRIBUTE_CATEGORY, ATTRIBUTE1 .... ATTRIBUTE30 are enabled. DFF has registered successfully.

    Define the segments of the DFF

    Here I am going to define the segments of the DFF. Navigate to Application Developer > Flex Field > Descriptive > Segments. Query for "XXCUST - Person DFF". Uncheck "Freeze Flexfield Definition". In my DFF, for the reference field I want to display a value set which has the values "Permanent" and "Contractor", so define a value set XXCUST_EMPLOYMENT_TYPE. Navigation: Application Developer > Flex Field > Descriptive > Validation > Sets. After that, assign the values to the above created value set. Navigation: Application Developer > Flex Field > Descriptive > Validation > Values. Assign XXCUST_EMPLOYMENT_TYPE to the Context Field Valueset. Set up the Context Field Values based on the table below:

        Context Code           Segments
        Global Data Elements   Phone Number, Email, Fax
        Contractor             Manager Extension Number, CSP Name
        Permanent              Extension Number, Access Card Number

    Phone Number, Email and Fax always display. When the user chooses the Context Value "Contractor", Manager Extension Number and CSP Name will show; in the case of "Permanent", Extension Number and Access Card Number will show. Assign a value set to each segment as well: for Global Data Elements, for "Contractor" and for "Permanent" the segments are as listed above. Check the "Freeze Flexfield Definition" check box and save. The standard concurrent program "Flexfield View Generator" will generate the XXCUST_DFF_DEMO_DFV view which we mentioned in the DFF registration step. Now the DFF has been created successfully and is ready to use.

    Create BC4J components for OAF and an OA page which holds the DFF

    Create the BC4J components (EO, VO and AM) appropriately. Create the page based on the created VO. For the DFF, create an item of type "flex" with the following properties.

    Note: You cannot create a flex item directly under a messageComponentLayout region, but you can create a messageLayout region under the messageComponentLayout region and add the flex item under the messageLayout region. In the Segment List property give the segment names which you want to display. The syntax of this is:

        Global Data Elements|SEGMENT 1|...|SEGMENT N||[Context Code1]|SEGMENT 1|...|SEGMENT N||[Context Code2]|SEGMENT 1|...|SEGMENT N||...
Eg: Global Data Elements|Phone Number|Email|Fax||Contractor|Manager Extension Number|CSP Name||Permanent|Extension Number|Access Card Number When you change the Context Value corresponding segments will display automatically by PPR in the page. You can attach partial action to the DFF bean programmatically so that you can identify the action related to DFF. pageContext.getParameter(EVENT_PARAM) will return "FLEX_CONTEXT_CHANGEDPersonDFF" when you change the DFF Context. Page is ready and you can test. When you choose "Contract" following output you can see. When you choose "Permanent" following output you can see.  Give proper values and press Apply.You can see values populated in the table.
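    As promised above, here is a rough sketch of what an AD_DD registration for a table like XXCUST_DFF_DEMO typically looks like. This is not the post's linked script: the application short name (XXCUST) and the column details are assumptions for illustration only.

        BEGIN
           -- Register the custom table (type 'T' = transaction table).
           ad_dd.register_table(p_appl_short_name => 'XXCUST',
                                p_tab_name        => 'XXCUST_DFF_DEMO',
                                p_tab_type        => 'T');
           -- Register the context column, then each attribute column.
           ad_dd.register_column(p_appl_short_name => 'XXCUST',
                                 p_tab_name        => 'XXCUST_DFF_DEMO',
                                 p_col_name        => 'ATTRIBUTE_CATEGORY',
                                 p_col_seq         => 1,
                                 p_col_type        => 'VARCHAR2',
                                 p_col_width       => 30,
                                 p_nullable        => 'Y',
                                 p_translate       => 'N');
           -- Repeat for ATTRIBUTE1 .. ATTRIBUTE30.
           ad_dd.register_column('XXCUST', 'XXCUST_DFF_DEMO', 'ATTRIBUTE1',
                                 2, 'VARCHAR2', 150, 'Y', 'N');
           COMMIT;
        END;
        /

    Run it as the APPS user, then verify the registration from Application Developer > Application > Database > Table as described above.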

    Read the article

  • Safari 5 certified with EBS Release 12 on Apple Mac OS X 10.5 and 10.6

    - by John Abraham
    Oracle E-Business Suite Release 12 (12.0.4 or higher, and 12.1.2 or higher) is now certified with the Safari 5 browser on the following Apple Mac OS X desktop configurations:

    Mac OS X 10.5 ("Leopard")
    - Mac OS X 10.5 ("Leopard", version 10.5.6 or higher) along with any other security and Java updates listed in the 'Software Update' program on the Mac
    - Safari version 5 (5.0.2 or higher)
    - Apple Java/JRE plugin 5 (1.5.0_13 or higher)

    Mac OS X 10.6 ("Snow Leopard")
    - Mac OS X 10.6 ("Snow Leopard", version 10.6.3 or higher) along with any other security and Java updates listed in the 'Software Update' program on the Mac
    - Safari version 5 (5.0.2 or higher)
    - Apple Java/JRE plugin 6 (1.6.0_20 or higher)

    Read the article

  • Steps to Mitigate Database Security Worst Practices

    - by Troy Kitch
    The recent Top 6 Database Security Worst Practices webcast revealed the Top 6, and a bonus 7th, database security worst practices:

    1. Privileged user "all access pass"
    2. Allow application bypass
    3. Minimal and inconsistent monitoring/auditing
    4. Not securing application data from OS-level users
    5. No SQL injection defense (see the bind-variable sketch at the end of this post)
    6. Sensitive data in non-production environments
    7. Not securing the complete database environment

    These practices were uncovered in the 2010 IOUG Data Security Survey. As part of the webcast we looked at each one of these practices and how you can mitigate them with the Oracle Defense-in-Depth approach to database security. There's a lot of additional information to glean from the webcast, so I encourage you to check it out here and see how your organization measures up.
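    To make the SQL injection item above concrete: the canonical defense is to pass user input through bind variables rather than concatenating it into the statement text. A minimal PL/SQL sketch (the EMP table and the input value are hypothetical):

        DECLARE
           l_name   VARCHAR2(30) := 'KING';  -- imagine this arrived as user input
           l_salary NUMBER;
        BEGIN
           -- Unsafe pattern (do not do this):
           --   'SELECT sal FROM emp WHERE ename = ''' || l_name || ''''
           -- Safe pattern: the bind :who is treated strictly as data, never as SQL text.
           EXECUTE IMMEDIATE 'SELECT sal FROM emp WHERE ename = :who'
              INTO l_salary
              USING l_name;
        END;
        /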

    Read the article

  • EBS + 11g Database Upgrade Best Practices Whitepaper Available

    - by Steven Chan
    I returned from OAUG/Collaborate with a cold and multiple overlapping development crises. Fun. Now that those are (mostly) out of the way, it's time to get back to clearing out my article backlog. Premier Support for the 10gR2 database ends in July 2010. If you haven't already started planning your 11g database upgrade, we recommend that you start soon. We have certified both the 11gR1 (11.1.0.7) and 11gR2 (11.2.0.1) databases with Oracle E-Business Suite; see this blog's Certification summary for links to articles with the details. Our Applications Performance Group has reminded me that they have a whitepaper loaded with practical tips intended to make your 11g database upgrade easier. No vacuous marketing rhetoric here -- this is strictly written for DBAs. A must-read if you haven't already upgraded to either 11gR1 or 11gR2, and highly recommended even if you have. You can download the whitepaper here: Upgrade to 11g Performance Best Practices (PDF, 184K)

    Read the article

  • EBS 12.0 Minimum Requirements for Extended Support Finalized

    - by Steven Chan
    Oracle E-Business Suite Release 12.0 will transition from Premier Support to Extended Support on February 1, 2012. New EBS 12.0 patches will be created and tested during Extended Support against the minimum patching baseline documented in our E-Business Suite Error Correction Support Policy (Note 1195034.1). These new technical requirements have now been finalized. To be eligible for Extended Support, all EBS 12.0 customers must apply the EBS 12.0.6 Release Update Pack, technology stack infrastructure updates, and updates for EBS products if they're shared or fully installed. The complete set of minimum EBS 12.0 baseline requirements is listed here: E-Business Suite Error Correction Support Policy (Note 1195034.1)

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implements the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

    - Can view all the Opportunities where they are a member of the Opportunity team
    - Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains:

    a.) How to discover the data security implemented in Fusion Applications
    b.) How to customize data security
    c.) An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the function and data security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job; the two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of the data security policies on an Opportunity:

    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team
    Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Business Object: Opportunity
    Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access
    Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of columns:
    - Policy Description: explains the data filters that are implemented as a SQL WHERE clause in a data security grant.
    - Policy Store Implementation: provides the implementation details of the data security grant for this policy.

    In this example the Opportunities listed for the "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team, not by all the members of all the territories on the opportunity.

    Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.

    Best Practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be further restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.

    Solution: Modify the second data security policy in the above example as follows: a Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential.

    Implementation of this policy is more invasive: the seeded SQL WHERE clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and the condition that checks the confidential flag must be added.

    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition, and end date the seeded grant.

    c.) Illustrative example (implementing Requirement 2)

    A data security policy contains the following four components: Role, Object, Instance Set, and Action. Of these, the Role and Instance Set are the only customizable components; Objects and the Actions on them are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team":

    1. Find the seeded policy.
    2. Identify the Role, Object, Instance Set and Action components of the policy.
    3. Create a new custom instance set based on the seeded instance set.
    4. End date the seeded policies.
    5. Create a new data security policy with the custom instance set.

    c-1: Find the seeded policy

    Step 1: Find the role, open it, and find its policies.

    Step 2: Click on the Data Security tab and sort by "Resource Name". Find all the policies with the condition "where they are a territory resource in the opportunity territory team". In this example we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.

    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object, so find the object using the global search option, open it, and click on the "Condition" tab. Sort by display name and find the instance set. Edit the instance set and copy the "SQL Predicate" to a notepad, then create a new instance set with the modified SQL predicate by clicking on the icon as shown below (see the predicate sketch at the end of this post).

    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set:

    1. Repeat the navigation in Step 2.
    2. Edit each of the 5 policies and end date them.
    3. Create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs.
    4. In the "Rules" tab, pick the new instance set that was created in Step 3.
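    To make the Step 3 predicate change concrete, the new instance set is simply the seeded predicate with one extra condition appended. This is a sketch only: the confidential-flag column name is a hypothetical stand-in for whatever custom attribute your implementation actually uses.

        -- Paste the seeded SQL predicate copied in Step 3, then append:
        AND NVL(confidential_flag, 'N') = 'N'  -- hypothetical custom attribute column

    Opportunities flagged as confidential then drop out of the territory-resource grant, leaving them visible only through whatever policy grants access to the owner.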

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an azure virtual machine. Now I would like to talk about another cool feature: Windows Azure Connect.

    What's Windows Azure Connect

    I would like to quote the definition of Windows Azure Connect from MSDN:

    "With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services."

    There's an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, the roles deployed on the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric

    It may seem that Windows Azure Connect duplicates Windows Azure AppFabric: both aim to solve the problem of communicating between resources in the cloud and inside the local network. The list below summarizes the differences as I understand them.

    Purpose
    - Connect: an IP-sec connection between local machines and azure roles.
    - AppFabric: an application service running on the cloud.

    Connectivity
    - Connect: IP-sec, domain-join.
    - AppFabric: Net.TCP, HTTP, HTTPS.

    Components
    - Connect: Windows Azure Connect Driver.
    - AppFabric: Service Bus, Access Control, Caching.

    Usage
    - Connect: azure roles connect to a local database server; azure roles use local shared files, folders and printers, etc.; azure roles join the local AD.
    - AppFabric: expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.

    And also some scenarios showing which of them should be used:

    1. I have a service deployed on the intranet and I want people to use it from the Internet. (AppFabric)
    2. I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. (Connect)
    3. I have a service deployed on the intranet using AD authorization, and a website deployed on Azure that needs to use this service. (Connect)
    4. I have a service deployed on the intranet that some people on the Internet can use, but they must be authorized and authenticated. (AppFabric)
    5. I have a service on the intranet and a website deployed on Azure; the service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. (Connect and AppFabric)

    How to Enable Windows Azure Connect

    OK, we have talked a lot about Windows Azure Connect and its differences from Windows Azure AppFabric. Now let's see how to enable and use it. First of all, since this feature is in the CTP stage, we must apply before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.

    You can send an application to join the Beta Programs to Microsoft on this page. After a few days Microsoft will send an email to you (to the email address of your Live ID) when it's available. In my case Windows Azure Connect had been activated by Microsoft, so we can click the Connect button on top, or click the Virtual Network item in the left navigation bar.

    The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.

    Add a Local Machine to Azure Connect

    As explained above, Windows Azure Connect makes an IP-sec connection between local machines and azure role instances, so first we add a local machine to our Connect. To do this, click the Install Local Endpoint button on top; the portal will give us a URL. Copy this URL to the machine we want to add, and it will download the software. This software is installed on each local machine that we want to join to the Connect; after installation a tray icon appears to indicate that the machine has joined our Connect. The local agent refreshes against the Windows Azure platform every 5 minutes, but we can click the Refresh button to retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.

    Add a Windows Azure Role to Azure Connect

    Let's create a very simple azure project with a basic ASP.NET web role inside. To make it available to Windows Azure Connect, open the azure project properties of this role from the Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click it and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal, Virtual Connect section.

    Establish the Connect Group

    The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them; machines and instances can be used in one or more groups. In the Virtual Connect section click the Groups and Roles node in the left navigation bar, then click the Create Group button on top. This brings up a dialog: specify a group name and description, then select the local computers and azure role instances for this group. After the Azure Fabric updates the group settings, we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to Connected. Windows Azure Connect updates the group information every 5 minutes; if you find the status still showing Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.

    Test the Azure Connect between the Local Machine and the Azure Role Instance

    Now our local machine and azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our azure role can connect to it by using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to walk through a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING.

    When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the Firewall settings. To do this we need to run a command before testing. Open a command window on the local machine and on the role instance, and execute the following:

    netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; for the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the azure role instance (please refer to my previous blog post on how to use Remote Desktop Access in Windows Azure). Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address.

    Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and when we log on to the role instance we can see the folder content from the file explorer window.

    Summary

    In this blog post I introduced another new feature, Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our azure applications can use local resources such as database servers, printers, etc. without exposing them to the Internet.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • New Tuxedo White Papers

    - by todd.little
    As part of the Tuxedo 11gR1 release, I've written two new white papers on Tuxedo. One is called "Tuxedo in a SOA World" and discusses how Tuxedo fits into SOA-based applications. It covers most of the various connectivity options from Tuxedo into SOA environments and gives guidance as to which connectivity options are best suited to a particular application requirement. The other white paper, "SCA: Bringing Modern SOA Programming to Tuxedo", is of a more technical bent and focuses on using the SCA features in SALT to easily build SOA-based applications on Tuxedo without using a lot of technical APIs. In fact, services built using SALT's SCA support don't require any technical APIs, just pure business logic, and SCA clients need at most a couple of API calls, simply to look up a service. You can find these two new white papers, as well as some additional white papers, at http://www.oracle.com/technology/products/tuxedo/index.html.

    Read the article

  • EPM 11.1.1 - EPM Infrastructure Tuning Guide v11.1.1.3

    - by Ahmed Awan
    This edition applies to EPM 9.3.1, 11.1.1.1, 11.1.1.2 and 11.1.1.3 only.

    INTRODUCTION: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle EPM System performance, all components need to be monitored, analyzed, and tuned. This guide describes the techniques used to monitor performance and the techniques for optimizing the performance of EPM components. Click to download the EPM 11.1.1.3 Infrastructure Tuning Whitepaper (right-click or option-click the link and choose "Save As..." to download this file).

    Read the article

  • Tom Kyte seminar in Budapest, April 19-20, 2010

    - by Fekete Zoltán
    You can still register for Tom Kyte's 2010 Budapest seminar through the links found here: a classroom seminar held in Budapest on April 19-20, 2010. Topics: The top 10 - no, 11 - new features of Oracle Database 11g; All about binding; Materialized Views, Caching; Effective Indexing; Storage Techniques; Reorganizing objects - when and how. ASK TOM!

    Read the article

  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
    You’ll never see one of these family stickers on my car, but I promise not to judge…much. Parent-child reports are pretty straightforward in Oracle SQL Developer: you have a ‘parent’ report, and then one or more ‘child’ reports which are based off of a value in a selected row or value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers:

    @thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper?This is easy with multi generation parent>child Done in #dbvisualizer — Ronald Rood (@Ik_zelf) August 26, 2012

    Now that I’m back from vacation, I can tell Ronald and everyone else that the answer is ‘Yes!’ And here’s how.

    Time to Get Out Your XML Editor

    Don’t have one? That’s OK, SQL Developer can edit XML files. While the reporting interface doesn’t surface the ability to create multi-generational reports, the underlying code definitely supports it. We just need to hack away at the XML that powers a report. For this example I’m going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEES, then JOBs. We can build the first two parts of the report using the report editor.

    A Parent-Child report in Oracle SQL Developer (Departments – Employees)

    Save the Report to XML

    Once you’ve generated the XML file, open it with your favorite XML editor. For this example I’ll be using the built-in XML editor in SQL Developer.

    SQL Developer Reports in their raw XML glory!

    Right after the PDF element in the XML document, we can start a new ‘child’ report by inserting a DISPLAY element. I just copied and pasted the existing ‘display’ down so I wouldn’t have to worry about screwing anything up. Note I also needed to change the ‘master’ name so it wouldn’t confuse SQL Developer when I try to import/open a report that has the same name. I also needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own actually – I mean I’m no real developer and I got it pretty quick.
<?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[Grandparent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.departments]]></sql> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Parent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.employees where department_id = :DEPARTMENT_ID]]></sql> <binds> <bind id="DEPARTMENT_ID"> <prompt><![CDATA[DEPARTMENT_ID]]></prompt> <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Child]]></name> 
<description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql> <binds> <bind id="JOB_ID"> <prompt><![CDATA[JOB_ID]]></prompt> <tooltip><![CDATA[JOB_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> </display> </display> </display> </displays> Save the file and ‘Open Report…’ You’ll see your new report name in the tree. You just need to double-click it to open it. Here’s what it looks like running A 3 generation family Now Let’s Build an AWR Text Report Ronald wanted to have the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID. That basically tells AWR what time period to use for generating the report. And here’s where it gets tricky. We’ll need to use aliases for the SNAP_ID column. Since we’re using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer’s clever enough to use the column alias as the BIND. Here’s what I mean: Grandparent Query SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc Parent Query SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc And here’s where it gets even trickier – you can’t reference a bind from outside the parent query. My grandchild report can’t reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT you see the ‘:START1′ at the end? That’s making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can’t have a column name with a ‘:’ in it, or SQL Developer will get confused when I try to reference the value of the variable with the ‘:’ – and ‘::Name’ doesn’t work. But that’s OK, just alias it. Grandchild Query Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1)); Ok, and the last trick – I hard-coded my report to use my database’s DB_ID and INST_ID into the AWR package call. 
Now a smart person could figure out a way to make that work on any database, but I got lazy and ran out of time (a sketch of a portable version appears at the end of this post). But this should be far enough for you to take it from here. Here’s what my report looks like now: Caution: don’t run this if you haven’t licensed Enterprise Edition with Diagnostic Pack. The Raw XML for this AWR Report <?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[AWR Start Stop Report Final]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Stop SNAP_ID]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[AWR Report]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1 ))]]></sql> </query> </display> </display> </display> </displays> Should We Build Support for Multiple Levels of Reports into the User Interface? Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!
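Picking up the "make it work on any database" thread from above: one way to avoid hard-coding the DB_ID and instance number is to left-correlate the table function against V$DATABASE and V$INSTANCE so they are looked up at run time. This is a sketch, not from the original post, and it assumes the report's connection can query those V$ views:

    Select t.output
    From   v$database d,
           v$instance i,
           Table(Dbms_Workload_Repository.Awr_Report_Text(
                    d.dbid, i.instance_number, :CARRY, :STOP1)) t;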

    Read the article

  • New ADF Desktop Integration Components Demo Available

    - by juan.ruiz
    The new ADF Desktop Integration Components demo is available on OTN and can be downloaded and run standalone. The demo follows the same principles as the ADF Faces and DVT Components demo, presenting the list of all the available components with their corresponding properties, as well as feature demos exposing advanced functionality of the framework. It is a great demo that covers everything from the very basics of each visual component to advanced functionality such as dependent lists of values, master-detail integration, event handling, integration with ADF Faces and web interfaces, and parameter passing from ADF applications to Excel. You need JDeveloper PS1 (11.1.2.0) and an Oracle database to install the sample schema. Your feedback is always welcome; let us know what you think. Access the demo here.

    Read the article

  • Mike Neuenschwander on the Identity Platform

    - by Naresh Persaud
    If you are in London on March 22nd, check out the Identity Platform event. Mike is deeply passionate about the platform, and I caught up with him recently for an interview to discuss his perspective on the Oracle Identity Platform. Identity Management is not a department-level initiative: to unlock the business potential of Identity Management, we have to think organizationally and holistically. To learn more about how to take a strategic approach to Identity Management, visit one of our physical events globally. Here are some of the listings and registrations worldwide: North America, Asia Pacific, Europe.

    Read the article

< Previous Page | 622 623 624 625 626 627 628 629 630 631 632 633  | Next Page >