Search Results

Search found 6374 results on 255 pages for 'double underscore'.

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes.  When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative.  A calling method can request a cancellation, but it's up to the processing routine to terminate – it is not forced. Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine. Let's begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6: double min = collection .AsParallel() .Min(item => item.PerformComputation()); We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows: var cts = new CancellationTokenSource(); // Pass cts here to a routine that could, // in parallel, request a cancellation try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation()); } catch (OperationCanceledException e) { // Query was cancelled before it finished } Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the currently running item.PerformComputation() calls all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
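
    To make the caller's side of this model concrete, here is a minimal, self-contained sketch in which a second task requests cancellation while the PLINQ query is still running. The collection contents, timings, and the PerformComputation stand-in below are hypothetical, not from the original post:

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class CancellationDemo
    {
        static void Main()
        {
            var collection = Enumerable.Range(0, 1000);
            var cts = new CancellationTokenSource();

            // A second task requests cancellation shortly after the query starts.
            Task.Factory.StartNew(() =>
            {
                Thread.Sleep(50);
                cts.Cancel();
            });

            try
            {
                double min = collection
                    .AsParallel()
                    .WithCancellation(cts.Token)
                    .Min(item => PerformComputation(item));
                Console.WriteLine("Min: " + min);
            }
            catch (OperationCanceledException)
            {
                Console.WriteLine("Query was cancelled before it finished.");
            }
        }

        // Hypothetical stand-in for the long-running per-item work in the article.
        static double PerformComputation(int item)
        {
            Thread.Sleep(10); // simulate expensive work
            return item * 2.0;
        }
    }

    Whether the catch block runs or the query completes depends on timing; the cooperative model only guarantees that no new items are scheduled once Cancel() is called.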

    Read the article

  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to a page-based renderer like PDF or Word) will be rendered such that the first page contains the row group headers and the initial set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables: SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName, d.FullDateAlternateKey, f.SalesOrderNumber, p.EnglishProductName, SUM(SalesAmount) AS SalesAmount FROM FactResellerSales AS f INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
    Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label, which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup, click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber"). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber.
    Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.
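
    In case the wrapping arithmetic isn't obvious, here is a minimal illustrative sketch (C#, with hypothetical row numbers; not part of the original post) of what =Ceiling(RowNumber("EmployeeName")/3) computes: rows 1 through 3 land in wrapped row 1, rows 4 through 6 in wrapped row 2, and so on, so each group of three detail rows becomes one wrapped row of columns:

    using System;

    class WrapGroupDemo
    {
        static void Main()
        {
            // RowNumber("EmployeeName") yields 1, 2, 3, ... within the group;
            // Ceiling(n / 3) buckets every three rows into one wrapped row.
            for (int rowNumber = 1; rowNumber <= 9; rowNumber++)
            {
                int group = (int)Math.Ceiling(rowNumber / 3.0);
                Console.WriteLine("Row {0} -> wrapped row {1}", rowNumber, group);
            }
        }
    }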

    Read the article

  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects.
    The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example.

    Object Name             Type                 JNDI Name
    TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
    TestJMSQueue            JMS Queue            jms/TestJMSQueue
    eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

    1. Verify Connection Factory and JMS Queue As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console. Select Services > JMS Modules > TestJMSModule. You should see the following objects: If not, or if the TestJMSModule is missing, please see the abovementioned article and create these objects before continuing. 2. Create a JMS Adapter Connection Pool in WebLogic Server The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue. In the WebLogic Server Console, go to Deployments > Next and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter, for example eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing. We are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue. This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue! Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and Next. Enter JNDI Name: eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue. The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory. (As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule, which we verified at the beginning of this document.) Enter jms/TestConnectionFactory into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will manually need to activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective.
    This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes, as can be seen in the following screen shot: The next step is to redeploy the JmsAdapter. Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the “Summary of Deployments” link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select “Redeploy this application using the following deployment files” and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue. To summarize: we have created a JMS adapter connection pool connector with the JNDI name eis/wls/TestQueue, which points to the connection factory with the JNDI name jms/TestConnectionFactory. This is the JNDI name to be accessed by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue. 3. Create a BPEL Composite with a JMS Adapter Partner Link This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation. When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter’s input variable is of type base64binary, so the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry. This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element: stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.example.org" targetNamespace="http://www.example.org" elementFormDefault="qualified"> <xsd:element name="exampleElement" type="xsd:string"> </xsd:element> </xsd:schema> The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project JmsAdapterWriteSchema and select SOA as the project technology type. If you already have an application, continue below.
    Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings. Create an XSD File An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and when the editor opens, select the Source view, then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree. Create a JMS Adapter Partner Link We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterWrite Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Produce Message Operation Name: Produce_message Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created earlier. JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) MessagesURL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string. Press Next and Finish, which will complete the JMS Adapter configuration. Wire the BPEL Component to the JMS Adapter In this step, we link the BPEL process/component to the JMS adapter.
    From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow. This completes the steps at the composite level. 4. Complete the BPEL Process Design Invoke the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS Adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind-of think of the message coming in on the left and being routed through to the right. But you can follow your personal preference here.) Assign Variables Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completion, so the process has an output to print, again to the process’s output variable. Double-click the Assign activity and create two Copy rules: for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement; for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result. This will create two copy rules, similar to the following: Press OK. This completes the BPEL and Composite design. 5. Compile and Deploy the Composite We won’t go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish.) You should see the message ---- Deployment finished. ---- in the Deployment frame, if the deployment was successful. 6. Test the Composite This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. Note the number of messages under Messages Current.
    In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful you should see the same text in the Response message, “Test Message”. In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload. 7. Troubleshooting If you get an exception similar to the following at runtime ... BINDING.JCA-12510 JCA Resource Adapter location error. Unable to locate the JCA Resource Adapter via .jca binding file element The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'. The reason for this is most likely that either 1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or 2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR). Please correct this and then restart the Application Server at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130) at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387) at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285) ... then this is very likely due to an incorrect JNDI name entered for the JMS Connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the JNDI name used. In this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue. This concludes this example. Best regards, John-Brown Evans, Oracle Technology Proactive Support Delivery

    Read the article

  • Qml and QfileSystemModel interaction problem

    - by user136432
    I'm having some problems realizing an interaction between QML and C++ to obtain a very basic file browser that is shown within a ListView. I tried to use the Qt class QFileSystemModel as the model for my data, but it didn't work as I expected; probably I didn't fully understand the Qt class documentation about the use of this model, or the example I found on the internet. Here is the code that I am using: File main.cpp #include <QModelIndex> #include <QFileSystemModel> #include <QQmlContext> #include <QApplication> #include "qtquick2applicationviewer.h" int main(int argc, char *argv[]) { QApplication app(argc, argv); QFileSystemModel* model = new QFileSystemModel; model->setRootPath("C:/"); model->setFilter(QDir::Files | QDir::AllDirs); QtQuick2ApplicationViewer viewer; // Make QFileSystemModel* available for QML use. viewer.rootContext()->setContextProperty("myFileModel", model); viewer.setMainQmlFile(QStringLiteral("qml/ProvaQML/main.qml")); viewer.showExpanded(); return app.exec(); } File main.qml Rectangle { id: main width: 800 height: 600 ListView { id: view property string root_path: "C:/Users" x: 40 y: 20 width: parent.width - (2*x) height: parent.height - (2*y) VisualDataModel { id: myVisualModel model: myFileModel // Get the model QFileSystemModel exposed from C++ delegate { Rectangle { width: 210; height: 20; radius: 5; border.width: 2; border.color: "orange"; color: "yellow"; Text { text: fileName; x: parent.x + 10; } MouseArea { anchors.fill: parent onDoubleClicked: { myVisualModel.rootIndex = myVisualModel.modelIndex(index) } } } } } highlight: Rectangle { color: "lightsteelblue"; radius: 5 } focus: true } } The first problem with this code is that the first elements that I can see within my list are my PC's logical drives, even though I set a specific path. Then, when I first double-click on drive "C:\" it shows the list of files and directories on that path, but when I double-click on a directory a second time, the screen flickers for one moment and then it shows again the PC's logical drives. Can anyone tell me how I should use the QFileSystemModel class with a ListView QML object? Thanks in advance! Carlo

    Read the article

  • Access Windows Home Server from an Ubuntu Computer on your Network

    - by Mysticgeek
    If you’re a Windows Home Server user, there may be times when you need to access it from an Ubuntu machine on your network. Today we take a look at the process of accessing files on your home server from Ubuntu. Note: In this example we’re using Windows Home Server with PowerPack 3, and Ubuntu 10.04 running on a home network. Access WHS from Ubuntu To access files on your home server from Ubuntu, click on Places then select Network. You should now see your home server listed in the Network folder as well as other Windows machines…double-click the server to access it. If you don’t see your server listed, you might need to go into Windows Network \ Workgroup and find it there. You’ll be prompted to enter the correct credentials for WHS just as you would when accessing it from a Windows machine. It’s your choice if you want to have the password remembered or not…make your selection and click Connect. Now you will see the available folders on your home server. In this example we signed in with Administrator credentials, so we have access to everything. Double-click on the folder share you want to access content from…here we see MS Office documents on the server. Or, here we take a look at a music folder with various MP3 files which you can make Ubuntu play. You can access the files directly from the server, provided there is a Linux app that can handle the file type. In this example we opened a Word document in OpenOffice. Here we’re playing an MKV movie file from the server in Totem Movie Player.   You can easily search for files on the server as well… If you want to store your Ubuntu files on WHS it’s just a matter of dragging them to the correct WHS folder you want them in. If you’re using an Ubuntu computer on your home network and need to access files from Windows Home Server, luckily it’s a straightforward process. You’ll often have to find the correct software to use Windows files, but even that’s getting much easier with version 10.04.

    Read the article

  • ASP.NET 4 Hosting :: How to Debug Your ASP.NET Applications

    - by mbridge
    Remote debugging of a process is a privilege, and like all privileges, it must be granted to a user or group of users before its operation is allowed. The Microsoft .NET Framework and Microsoft Visual Studio .NET provide two mechanisms to enable remote debugging support: the Debugger Users group and the "Debug programs" user right. Debugger Users Group When you debug a remote .NET Framework-based application, the debugger on your computer must communicate with the remote computer using DCOM. The remote server must grant the debugger access, and it does this by granting access to all members of the Debugger Users group. Therefore, you must ensure that you are a member of the Debugger Users group on that computer. This is a local security group, meaning that it is visible only to the computer where it exists. To add yourself or a group to the Debugger Users group, follow these steps: 1. Right-click the My Computer icon on the Desktop and choose Manage from the context menu. 2. Browse to the Groups node, which is found under the Local Users and Groups node of System Tools. 3. In the right pane, double-click the Debugger Users group. 4. Add your user account or a group account of which you are a member. Debug Programs User Right To debug programs that run under an account that is different from your account, you must be granted the "Debug programs" user right on the computer where the program runs. By default, only the Administrators group is granted this user right. You can check this by opening Local Security Policy on the computer. To do so, follow these steps: 1. Click Start, Administrative Tools, and then Local Security Policy. 2. Browse to the User Rights Assignment node under the Local Policies node. 3. In the right pane, double-click the "Debug programs" user right. 4. Add your user account or a group account of which you are a member.

    Read the article

  • SEO - Delimiter character for page title

    - by cept0
    I have noticed a few oddities recently with the titles of web pages in SERPs. However, it seems there are several main conventions: Contact Page - Joe Schmoe's Awesome Site // &#045; Hyphen Contact Page — Joe Schmoe's Awesome Site // &mdash; Em dash Contact Page | Joe Schmoe's Awesome Site // &#x007C; Vertical bar Contact Page « Joe Schmoe's Awesome Site // &laquo; Left double angle quotes Is there any reason to use one over the other?

    Read the article

  • Error occurred in deployment step ‘Add Solution’: Value does not fall within the expected range.

    - by ybbest
    I got this error when I deployed a new content type into SharePoint 2010. The error message is not very helpful; after a few hours of hard work, I found out there was a dash (-) in my content type ID. It happened when I generated a GUID using the Visual Studio tool and copied it to the ID, forgetting to delete the dashes. After correcting this, I pressed deploy again and it worked like a charm. If you get this error, double-check your content type ID. If you are not quite sure what a valid content type ID is, check the MSDN documentation here.
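
    As a side note, here is a minimal C# sketch (illustrative only; the parent ID below is an assumption for the example) showing how to generate a dash-free GUID in the first place. The "N" format specifier emits the 32 hex digits without hyphens, which is the form a content type ID expects:

    using System;

    class ContentTypeIdDemo
    {
        static void Main()
        {
            Guid guid = Guid.NewGuid();

            // "D" (the default) includes hyphens; "N" omits them.
            Console.WriteLine(guid.ToString("D")); // e.g. 0f8fad5b-d9cb-469f-a165-70867728950e
            Console.WriteLine(guid.ToString("N")); // e.g. 0f8fad5bd9cb469fa16570867728950e

            // Hypothetical content type ID: a parent ID, the "00" separator,
            // then the dash-free GUID.
            string parentId = "0x0101"; // 0x0101 is the built-in Document content type
            string contentTypeId = parentId + "00" + guid.ToString("N").ToUpperInvariant();
            Console.WriteLine(contentTypeId);
        }
    }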

    Read the article

  • Scanner Review: HP Scanjet Professional 1000

    Your notebook computer's newest companion: a $249 ultraportable scanner that takes next to no briefcase space to turn double-sided documents, images, and business cards into PDFs, e-mails, and Outlook entries. We put the peripatetic peripheral to the test.

    Read the article

  • What have my fellow Delphi programmers done to make Eclipse/Java more like Delphi?

    - by Robert Oschler
    I am a veteran Delphi programmer working on my first real Android app. I am using Eclipse and Java as my development environment. The thing I miss the most, of course, is Delphi's VCL components and the associated IDE tools for design-time editing and code creation. Fortunately I am finding Eclipse to be one hell of an IDE with its lush context-sensitive help, deep auto-complete and code wizard facilities, and other niceties. This is a huge double treat since it is free. However, here is an example of something in the Eclipse/Java environment that will give a Delphi programmer pause. I will use the simple case of adding an "on-click" code stub for an OK button. DELPHI Drop button on a form Double-click button on form and fill in the code that will fire when the button is clicked ECLIPSE Drop button on layout in the graphical XML file editor Add the View.OnClickListener interface to the containing class's "implements" list if not there already. (Command+1 on Macs, Ctrl+1 on PCs I believe.) Use Eclipse to automatically add the code stub for unimplemented methods needed to support the View.OnClickListener interface, thus creating the event handler function stub. Find the stub and fill it in. However, if you have more than one possible click event source then you will need to inspect the View parameter to see which View element triggered the OnClick() event, thus requiring a case statement to handle multiple click event sources. NOTE: I am relatively new to Eclipse/Java so if there is a much easier way of doing this please let me know. Now that workflow isn't all that terrible, but again, that's just the simplest of use cases. Ratchet up the amount of extra work and thinking for a more complex component (aka widget) and the large number of properties/events it might have. It won't be long before you dearly miss the Delphi intelligent property editor and other designers. Eclipse tries to cover this ground by having an extensive list of properties in the menu that pops up when you right-click over a component/widget in the XML graphical layout editor. That's a huge and welcome assist but it's just not even close to the convenience of the Delphi IDE. Let me be very clear. I absolutely am not ranting nor do I want to start a Delphi vs. Java ideology discussion. Android/Eclipse/Java is what it is and there is a lot that impresses me. What I want to know is what other Delphi programmers who made the switch to the Eclipse/Java IDE have done to make things more Delphi-like, and not just to make component/widget event code creation easier but any programming task. For example: Clever tips/tricks Eclipse plugins you found Other ideas? Any great blog posts or web resources on the topic are appreciated too. -- roschler

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally, next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  This cuts both ways: you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning-fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log₂ n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search.
    Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary, we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t, insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.
    Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example; let’s say you load up a concurrent dictionary like this: // load up a dictionary. var dictionary = new ConcurrentDictionary<string, int>(); dictionary["A"] = 1; dictionary["B"] = 2; dictionary["C"] = 3; dictionary["D"] = 4; dictionary["E"] = 5; dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: // attempt iteration in a separate thread var iterationTask = new Task(() => { // iterates using a dirty read foreach (var pair in dictionary) { Console.WriteLine(pair.Key + ":" + pair.Value); } }); And one task to attempt updates in a separate thread: // attempt updates in a separate thread var updateTask = new Task(() => { // iterates, and updates the value by one foreach (var pair in dictionary) { dictionary[pair.Key] = pair.Value + 1; } }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: // start both tasks updateTask.Start(); iterationTask.Start(); // wait for both to complete. Task.WaitAll(updateTask, iterationTask); Now, if you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: F:6 E:6 D:5 C:4 B:3 A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: // attempt iteration in a separate thread var iterationTask = new Task(() => { // iterates over a static snapshot taken by ToArray() foreach (var pair in dictionary.ToArray()) { Console.WriteLine(pair.Key + ":" + pair.Value); } }); The atomic Try…() methods As you can imagine, TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: // try add attempts an add and returns false if it already exists if (dictionary.TryAdd("G", 7)) Console.WriteLine("G did not exist, now inserted with 7"); else Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: // attempt to remove the value, if it exists it is removed and the original is returned int removedValue; if (dictionary.TryRemove("C", out removedValue)) Console.WriteLine("Removed C and its value was " + removedValue); else Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.
You might think from its name that TryUpdate() first checks for an item's existence and updates the item if it exists, otherwise returning false.  Well, not quite: when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary and has the value you expected, it is updated to the new value atomically and true is returned.  If the item is not in the dictionary, or does not have the value you expected, it is not modified and false is returned.

    // attempt to update the value, if it exists and has the expected original value
    if (dictionary.TryUpdate("G", 42, 7))
        Console.WriteLine("G existed and was 7, now it's 42.");
    else
        Console.WriteLine("G either didn't exist, or wasn't 7.");

The composite Add methods

The ConcurrentDictionary also has composite add methods that perform an update or a get, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn't exist, or update the existing item if it does.  For example, let's say you are keeping a dictionary of counts of stock ticker symbols you've subscribed to from a market data feed:

    public sealed class SubscriptionManager
    {
        private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();

        // adds a new subscription, or increments the count of the existing one.
        public void AddSubscription(string tickerKey)
        {
            // add a new subscription with a count of 1, or increment the existing count by 1
            var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);

            // now check the result to see if we just incremented the count, or inserted the first count
            if (resultCount == 1)
            {
                // subscribe to symbol...
            }
        }
    }

Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol); but if the key already exists, the key and current value are passed to the update delegate, which computes the new value to be stored in the dictionary.  The return result of the operation is the value used (in our case: 1 if added, the existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist, to insert one.  This can be handy when you wish to cache data: you query the cache to see if the item exists, and if it doesn't, you put the item into the cache for the first time:

    public sealed class PriceCache
    {
        private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();

        // gets the cached price for a ticker, computing and caching it on first request.
        public double QueryPrice(string tickerKey)
        {
            // check for the price in the cache; if it doesn't exist, the delegate is called to create the value.
            return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
        }

        private double GetCurrentPrice(string tickerKey)
        {
            // code to calculate the actual price...
        }
    }

There are other variations of these two methods, which vary in whether a value or a factory delegate is provided, but otherwise they work much the same.

Oddities with the composite Add methods

The AddOrUpdate() and GetOrAdd() methods are thread-safe (on this you may rely), but they are not atomic.  It is important to note that the overloads that take delegates execute those delegates outside of the lock.  This was done intentionally, so that a user delegate (over which the ConcurrentDictionary, of course, has no control) cannot take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, yet that item may not be the one returned!  Consider this scenario: thread A calls GetOrAdd() and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd() and also sees that the key does not currently exist, and in this race condition its delegate completes first, so its new value is added to the dictionary.  Now A finishes and goes to take the lock, and sees that the item already exists.  In this case, even though A called the delegate to create an item, it will pitch that value, because another item arrived between the time it was generated and the time A attempted to add it.

Let's illustrate with a totally contrived example program that has a dictionary of char to int, in which we want to store each char and its ordinal (that is, A = 1, B = 2, etc.).  For our value generator we will simply increment the previous value in a thread-safe way using Interlocked:

    public static class Program
    {
        private static int _nextNumber = 0;

        // the holder of the char to ordinal
        private static ConcurrentDictionary<char, int> _dictionary
            = new ConcurrentDictionary<char, int>();

        // get the next id value
        public static int NextId
        {
            get { return Interlocked.Increment(ref _nextNumber); }
        }

Then we add a method that will perform our inserts:

        public static void Inserter()
        {
            for (int i = 0; i < 26; i++)
            {
                _dictionary.GetOrAdd((char)('A' + i), key => NextId);
            }
        }

Finally, we run our test by starting two tasks to do this work and printing the results:

        public static void Main()
        {
            // 2 tasks attempting to get/insert
            var tasks = new List<Task>
            {
                new Task(Inserter),
                new Task(Inserter)
            };

            tasks.ForEach(t => t.Start());
            Task.WaitAll(tasks.ToArray());

            foreach (var pair in _dictionary.OrderBy(p => p.Key))
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        }
    }

If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But run it in parallel and you will get something a bit more complex.  My run netted these results:

    A:1
    B:3
    C:4
    D:5
    E:6
    F:7
    G:8
    H:9
    I:10
    J:11
    K:12
    L:13
    M:14
    N:15
    O:16
    P:17
    Q:18
    R:19
    S:20
    T:21
    U:22
    V:23
    W:24
    X:25
    Y:26
    Z:27

Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() for B at roughly the same time and both saw that it did not exist, so both called the generator: one thread got back 2 and the other got back 3.  However, only one of those threads can take the lock at a time for the actual insert, so the one that generated the 3 won; the 3 was inserted and the 2 got discarded.  (A common mitigation for this duplicate invocation is sketched below.)
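One well-known way to guard against the duplicate invocation, when the generator is expensive or stateful, is to store Lazy<TValue> wrappers in the dictionary instead of raw values.  What follows is only a minimal sketch of that common pattern, not code from the article; the OrdinalCache class is hypothetical:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Sketch of the Lazy<T> wrapper pattern (hypothetical example).
    public static class OrdinalCache
    {
        private static int _nextNumber = 0;

        private static readonly ConcurrentDictionary<char, Lazy<int>> _dictionary
            = new ConcurrentDictionary<char, Lazy<int>>();

        public static int GetOrdinal(char c)
        {
            // Racing threads may each create a Lazy<int> wrapper, but GetOrAdd()
            // always returns the single wrapper that won the race, and Lazy<T>
            // (thread-safe by default) runs its factory only when Value is first read.
            return _dictionary
                .GetOrAdd(c, key => new Lazy<int>(() => Interlocked.Increment(ref _nextNumber)))
                .Value;
        }
    }

With this wrapper, the Lazy created by a losing thread is discarded before its factory ever executes, so the counter is incremented at most once per key.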
This race is why, with these methods, your factory delegates should not contain any logic that would be unsafe if the value they generate is discarded in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible.

Summary

The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the type-safety benefits of its generic counterpart and, in addition, is extremely efficient, especially when reads outnumber writes.

    Read the article

  • JavaFX, Google Maps, and NetBeans Platform

    - by Geertjan
    Thanks to a great new article by Rob Terpilowski, and other work and research he describes in that article, it's now trivial to introduce a map component into a NetBeans Platform application. Making use of the GMapsFX library described in Rob's article, which provides a JavaFX API for Google Maps, you can very quickly knock this application together. Here's all the code (from Rob's article):

        @TopComponent.Description(
                preferredID = "MapTopComponent",
                persistenceType = TopComponent.PERSISTENCE_ALWAYS
        )
        @TopComponent.Registration(mode = "editor", openAtStartup = true)
        @ActionID(category = "Window", id = "org.map.MapTopComponent")
        @ActionReference(path = "Menu/Window" /*, position = 333 */)
        @TopComponent.OpenActionRegistration(
                displayName = "#CTL_MapWindowAction",
                preferredID = "MapTopComponent"
        )
        @NbBundle.Messages({
            "CTL_MapWindowAction=Map",
            "CTL_MapTopComponent=Map Window",
            "HINT_MapTopComponent=This is a Map window"
        })
        public class MapWindow extends TopComponent implements MapComponentInitializedListener {

            protected GoogleMapView mapComponent;
            protected GoogleMap map;

            private static final double latitude = 52.3667;
            private static final double longitude = 4.9000;

            public MapWindow() {
                setName(Bundle.CTL_MapTopComponent());
                setToolTipText(Bundle.HINT_MapTopComponent());
                setLayout(new BorderLayout());
                JFXPanel panel = new JFXPanel();
                Platform.setImplicitExit(false);
                Platform.runLater(() -> {
                    mapComponent = new GoogleMapView();
                    mapComponent.addMapInializedListener(this);
                    BorderPane root = new BorderPane(mapComponent);
                    Scene scene = new Scene(root);
                    panel.setScene(scene);
                });
                add(panel, BorderLayout.CENTER);
            }

            @Override
            public void mapInitialized() {
                // Once the map has been loaded by the WebView, initialize the map details.
                LatLong center = new LatLong(latitude, longitude);
                MapOptions options = new MapOptions();
                options.center(center)
                        .mapMarker(true)
                        .zoom(9)
                        .overviewMapControl(false)
                        .panControl(false)
                        .rotateControl(false)
                        .scaleControl(false)
                        .streetViewControl(false)
                        .zoomControl(false)
                        .mapType(MapTypeIdEnum.ROADMAP);
                map = mapComponent.createMap(options);

                // Add a couple of markers to the map.
                MarkerOptions markerOptions = new MarkerOptions();
                LatLong markerLatLong = new LatLong(latitude, longitude);
                markerOptions.position(markerLatLong)
                        .title("My new Marker")
                        .animation(Animation.DROP)
                        .visible(true);
                Marker myMarker = new Marker(markerOptions);

                MarkerOptions markerOptions2 = new MarkerOptions();
                LatLong markerLatLong2 = new LatLong(latitude, longitude);
                markerOptions2.position(markerLatLong2)
                        .title("My new Marker")
                        .visible(true);
                Marker myMarker2 = new Marker(markerOptions2);

                map.addMarker(myMarker);
                map.addMarker(myMarker2);

                // Add an info window to the map.
                InfoWindowOptions infoOptions = new InfoWindowOptions();
                infoOptions.content("<h2>Center of the Universe</h2>")
                        .position(center);
                InfoWindow window = new InfoWindow(infoOptions);
                window.open(map, myMarker);
            }
        }

    Awesome work, Rob; this will be useful for many developers out there.

    Read the article

  • How to install Eclipse J2EE IDE from a tarball?

    - by Silambarasan
    I downloaded Eclipse 3.6.1 as a tar.gz file from the Eclipse site, then extracted it with: tar -zxvf eclipse-jee-helios-SR1-linux-gtk.tar.gz After running this command I got an eclipse folder containing an eclipse executable file. When I double-click on this eclipse file I get the following error: Could not display "/media/D-DEVELOPME/eclipse/eclipse". There is no application installed for executable files Is there any solution for this?

    Read the article

  • Play Thousands of Online Radio Stations with Shoutcast in VLC

    - by DigitalGeekery
    Are you looking for more variety from your radio stations? Today we'll take a look at how to easily stream thousands of radio stations to your desktop with VLC media player.

    Getting Started

    Select Media from the menu, go to Services Discovery, and click Shoutcast radio listings. Next, select View from the menu and click Playlist, or click on the Show Playlist button. In the Playlist window, click on Shoutcast radio listings in the left pane. You should then see a very long list of titles displayed on the right. Scroll through the list to find a music genre or topic that interests you, and double-click to expand the list of station options. Select one of the channel listings from the list and double-click to begin playing. Looking for a specific station? Type a search term into the search filter box to see if it is available. That's it; sit back and enjoy listening to your favorite Internet radio programming. If you are a music or talk radio fan, you aren't likely to run out of listening options in VLC. Want to find some more uses for VLC? Check out our articles on how to copy a DVD, convert video files to MP3, and how to set a video as your desktop wallpaper. Download VLC

    Read the article

  • SSAO Distortion

    - by Robert Xu
    I'm currently attempting to add SSAO to my engine, except it's not really working, to say the least. I use a deferred renderer to render my scene. I have four render targets: Albedo, Light, Normal, and Depth. Here are the parameters for all of them (surface format, depth format):

        Albedo: 32-bit ARGB, Depth24Stencil8
        Light:  32-bit ARGB, None
        Normal: 32-bit ARGB, None
        Depth:  8-bit R (Single), Depth24Stencil8

    To generate my random noise map for the SSAO, I do the following for each pixel in the noise map:

        Vector3 v3 = Vector3.Zero;
        double z = rand.NextDouble() * 2.0 - 1.0;
        double r = Math.Sqrt(1.0 - z * z);
        double angle = rand.NextDouble() * MathHelper.TwoPi;
        v3.X = (float)(r * Math.Cos(angle));
        v3.Y = (float)(r * Math.Sin(angle));
        v3.Z = (float)z;
        v3 += offset;
        v3 *= 0.5f;
        result[i] = new Color(v3);

    This is my GBuffer rendering effect:

        PixelInput RenderGBufferColorVertexShader(VertexInput input)
        {
            PixelInput pi = (PixelInput)0;
            pi.Position = mul(input.Position, WorldViewProjection);
            pi.Normal = mul(input.Normal, WorldInverseTranspose);
            pi.Color = input.Color;
            pi.TPosition = pi.Position;
            pi.WPosition = input.Position;
            return pi;
        }

        GBufferTarget RenderGBufferColorPixelShader(PixelInput input)
        {
            GBufferTarget output = (GBufferTarget)0;
            float3 position = input.TPosition.xyz / input.TPosition.w;
            output.Albedo = lerp(float4(1.0f, 1.0f, 1.0f, 1.0f), input.Color, ColorFactor);
            output.Normal = EncodeNormal(input.Normal);
            output.Depth = position.z;
            return output;
        }

    And here is the SSAO effect:

        float4 EncodeNormal(float3 normal)
        {
            return float4((normal.xyz * 0.5f) + 0.5f, 0.0f);
        }

        float3 DecodeNormal(float4 encoded)
        {
            return encoded * 2.0 - 1.0f;
        }

        float Intensity;
        float Size;
        float2 NoiseOffset;
        float4x4 ViewProjection;
        float4x4 ViewProjectionInverse;

        texture DepthMap;
        texture NormalMap;
        texture RandomMap;

        const float3 samples[16] =
        {
            float3(0.01537562, 0.01389096, 0.02276565),
            float3(-0.0332658, -0.2151698, -0.0660736),
            float3(-0.06420016, -0.1919067, 0.5329634),
            float3(-0.05896204, -0.04509097, -0.03611697),
            float3(-0.1302175, 0.01034653, 0.01543675),
            float3(0.3168565, -0.182557, -0.01421785),
            float3(-0.02134448, -0.1056605, 0.00576055),
            float3(-0.3502164, 0.281433, -0.2245609),
            float3(-0.00123525, 0.00151868, 0.02614773),
            float3(0.1814744, 0.05798516, -0.02362876),
            float3(0.07945167, -0.08302628, 0.4423518),
            float3(0.321987, -0.05670302, -0.05418307),
            float3(-0.00165138, -0.00410309, 0.00537362),
            float3(0.01687791, 0.03189049, -0.04060405),
            float3(-0.04335613, -0.00530749, 0.06443053),
            float3(0.8474263, -0.3590308, -0.02318038),
        };

        sampler DepthSampler = sampler_state
        {
            Texture = DepthMap;
            MipFilter = Point;
            MinFilter = Point;
            MagFilter = Point;
            AddressU = Clamp;
            AddressV = Clamp;
            AddressW = Clamp;
        };

        sampler NormalSampler = sampler_state
        {
            Texture = NormalMap;
            MipFilter = Linear;
            MinFilter = Linear;
            MagFilter = Linear;
            AddressU = Clamp;
            AddressV = Clamp;
            AddressW = Clamp;
        };

        sampler RandomSampler = sampler_state
        {
            Texture = RandomMap;
            MipFilter = Linear;
            MinFilter = Linear;
            MagFilter = Linear;
        };

        struct VertexInput
        {
            float4 Position : POSITION0;
            float2 TextureCoordinates : TEXCOORD0;
        };

        struct PixelInput
        {
            float4 Position : POSITION0;
            float2 TextureCoordinates : TEXCOORD0;
        };

        PixelInput SSAOVertexShader(VertexInput input)
        {
            PixelInput pi = (PixelInput)0;
            pi.Position = input.Position;
            pi.TextureCoordinates = input.TextureCoordinates;
            return pi;
        }

        float3 GetXYZ(float2 uv)
        {
            float depth = tex2D(DepthSampler, uv);
            float2 xy = uv * 2.0f - 1.0f;
            xy.y *= -1;
            float4 p = float4(xy, depth, 1);
            float4 q = mul(p, ViewProjectionInverse);
            return q.xyz / q.w;
        }

        float3 GetNormal(float2 uv)
        {
            return DecodeNormal(tex2D(NormalSampler, uv));
        }

        float4 SSAOPixelShader(PixelInput input) : COLOR0
        {
            float depth = tex2D(DepthSampler, input.TextureCoordinates);
            float3 position = GetXYZ(input.TextureCoordinates);
            float3 normal = GetNormal(input.TextureCoordinates);
            float occlusion = 1.0f;
            float3 reflectionRay = DecodeNormal(tex2D(RandomSampler, input.TextureCoordinates + NoiseOffset));
            for (int i = 0; i < 16; i++)
            {
                float3 sampleXYZ = position + reflect(samples[i], reflectionRay) * Size;
                float4 screenXYZW = mul(float4(sampleXYZ, 1.0f), ViewProjection);
                float3 screenXYZ = screenXYZW.xyz / screenXYZW.w;
                float2 sampleUV = float2(screenXYZ.x * 0.5f + 0.5f, 1.0f - (screenXYZ.y * 0.5f + 0.5f));
                float frontMostDepthAtSample = tex2D(DepthSampler, sampleUV);
                if (frontMostDepthAtSample < screenXYZ.z)
                {
                    occlusion -= 1.0f / 16.0f;
                }
            }
            return float4(occlusion * Intensity * float3(1.0, 1.0, 1.0), 1.0);
        }

        technique SSAO
        {
            pass Pass0
            {
                VertexShader = compile vs_3_0 SSAOVertexShader();
                PixelShader = compile ps_3_0 SSAOPixelShader();
            }
        }

    However, when I use the effect, I get some pretty bad distortion. Here's the light map that goes with it: is the static-like effect supposed to look like that? I've noticed that even if I'm looking at nothing, I still get the static-like effect (you can see it in the screenshot; the top half doesn't have any geometry, yet it still shows the static-like effect). Also, does anyone have any advice on how to effectively debug shaders?

    Read the article

  • Scan Your Thumb Drive for Viruses from the AutoPlay Dialog

    - by Mysticgeek
    It's always a good idea to scan someone's flash drive for viruses when you use it on your PC. Today we look at how to use Microsoft Security Essentials to scan thumb drives via the AutoPlay dialog.

    Editor's note: This technique was created by our friend Ramesh Srinivasan from the winhelponline tech blog.

    If you haven't done so already, download and install Microsoft Security Essentials (link below), which has earned the How-To Geek official endorsement. Next, download mseautoplay.zip (link below) and unzip the file to view its contents. Then move the msescan.vbs script file into the Windows directory. Next, double-click on the mseautoplay.reg file and click Yes in the warning dialog asking if you're sure you want to add the information to the registry. After it's added you'll get a confirmation message; click OK. Now when you pop in a thumb drive and AutoPlay comes up, you will have the option to scan it with MSE first, and MSE starts the scan of the thumb drive. You can use this to scan any removable media; for example, you can scan a DVD with MSE before opening any files. You can also go into Control Panel and set it as a default option of AutoPlay: open Control Panel, view by Large icons, and click on AutoPlay. Notice that when you go to change the default options for different types of media, scanning with MSE is now included in the dropdown lists.

    Remove Settings

    If you want to remove the MSE AutoPlay handler, Ramesh was kind enough to create an undo registry file. Double-click on undo.reg from the original MSE AutoPlay folder and click Yes to the message to remove the setting. Then you will need to go into the Windows directory and manually delete the msescan.vbs script file.

    This is an awesome trick which allows you to scan your thumb drives and other removable media from the AutoPlay dialog. We tested it out on XP, Vista, and Windows 7 and it works perfectly on each one.

    Download mseautoplay.zip
    Download Microsoft Security Essentials
    Read Our Review of MSE

    Read the article

  • Binary on the Coat of Arms of the Governor General of Canada

    - by user132636
    Can you help me further this investigation? Here is about 10% of the work I have done on it. I present it only to see if there are any truly curious people among you. I made a video a few weeks ago showing some strange things about the Governor General's Coat of Arms and the binary on it. Today I noticed something kinda cool and thought I would share. Here is the binary as it appears on the coat of arms: 110010111001001010100100111010011. As decimal: 6830770643 (this is easily found on the web). Take a close look at that number. What do you notice about it? It has a few interesting features, but here is the one no one has pointed out. Split it down the middle and you have 68307 70643. The first digit is double the value of the last digit. The second digit is double the second-last digit. The third digit is half of the third-to-last digit. And the middle ones are even or neutral. At first I thought of it as energy: ++-nnnn+--. But actually you can create something else with it using the values: 221000211. See how that works. You may be asking why that is significant. Bear with me; I know 99% of you are rolling your eyes. 221000211 as base 3 gives you this as binary: 100011101000111. 100011101000111 as hex is 4747, which converts to "GG". Initials of Governor General. GG.ca is his website. When you convert to base 33 (there are 33 digits in the original code) you get "GOV". Interesting? :D There is a lot more to it. I'll continue to show some strange coincidences if anyone is interested. Sorry if I am not explaining this correctly; by now you have probably figured out that I have no background in this, which is why I am here. Thank you.
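    For anyone who wants to check the arithmetic rather than take it on faith, here is a small C# sketch, not from the original post, that reproduces each conversion described above. Note that it is the intermediate value 18247 (the base-3 reading of 221000211), not the original decimal, that spells "GOV" when rendered in base 33:

        using System;
        using System.Linq;

        class CoatOfArmsCheck
        {
            static void Main()
            {
                // the binary string from the coat of arms, read as a base-2 number
                string bits = "110010111001001010100100111010011";
                Console.WriteLine(Convert.ToInt64(bits, 2));   // 6830770643

                // "221000211" read as a base-3 number
                long n = "221000211".Aggregate(0L, (acc, c) => acc * 3 + (c - '0'));
                Console.WriteLine(Convert.ToString(n, 2));     // 100011101000111
                Console.WriteLine(n.ToString("X"));            // 4747, ASCII "GG"

                // the same value rendered in base 33 (digits 0-9, then A-W)
                const string digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVW";
                string b33 = "";
                for (long m = n; m > 0; m /= 33)
                    b33 = digits[(int)(m % 33)] + b33;
                Console.WriteLine(b33);                        // GOV
            }
        }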

    Read the article

  • Grab your popcorn and watch the latest AutoVue Movie. Now available on YouTube!

    - by [email protected]
    Just released: Oracle's AutoVue Visualization Solutions and Primavera P6 integration movie. Watch it now (9:24). This is sure to be a box office sensation. And if you have time for a double, triple or even quadruple feature, don't forget these other AutoVue movies available on YouTube:
    AutoVue Work Online and Offline movie. Watch it now (5:01).
    AutoVue 3D Walkthrough movie. Watch it now (6:01).
    AutoVue 2D Compare movie. Watch it now (4:14).
    Enjoy the movies.

    Read the article

  • Firefox application associations not working

    - by Pavlos G.
    No matter what changes I make to file associations (actions) in the 'Applications' tab in Firefox, they're totally ignored. For example, I set .wmv and .avi files to open with 'smplayer', but when I download a file and double-click on it (through the 'Downloads' window), it keeps opening with Totem player. I've tried to delete and recreate mimetypes.rdf, but that didn't help. Any ideas on what else I should check?

    Read the article

  • Implementing a camera for a 2D side scroller game?

    - by Mr.Gando
    Hello, I'm implementing a 2D side scroller for iOS (using C/C++ with OpenGL), in the beat-'em-up style of Double Dragon or Final Fight. My scenes are composed of one cyclical background image (the end of the image connects perfectly with the beginning), to produce a cyclical scrolling effect. I was wondering how I could implement a camera that follows my player's movement? (Resources/links are greatly appreciated, with explanations :) )
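    A minimal sketch of one common approach, not an answer from the original thread and written in C# (for consistency with the rest of this page) rather than the poster's C/C++: track a camera x-offset that follows the player, and wrap the background's draw position with a modulo so the cyclical image tiles seamlessly. The ScrollCamera class and the drawImageAt callback are hypothetical:

        using System;

        // Hypothetical side-scroller camera: follows the player on X and
        // draws the cyclical background twice so the seam is always covered.
        public sealed class ScrollCamera
        {
            private readonly float _backgroundWidth;   // width of the cyclical image
            public float X { get; private set; }       // camera offset in world units

            public ScrollCamera(float backgroundWidth)
            {
                _backgroundWidth = backgroundWidth;
            }

            public void Follow(float playerX, float screenWidth)
            {
                // keep the player roughly centered on screen
                X = playerX - screenWidth / 2f;
            }

            public void DrawBackground(Action<float> drawImageAt)
            {
                // wrap the scroll offset into [0, backgroundWidth)
                float wrapped = X % _backgroundWidth;
                if (wrapped < 0) wrapped += _backgroundWidth;

                // draw the image twice, offset by the wrapped position
                drawImageAt(-wrapped);
                drawImageAt(-wrapped + _backgroundWidth);
            }
        }

    Each frame you would call Follow() with the player's position, draw the background via DrawBackground() using whatever draw call the renderer provides, and draw world objects at worldX - camera.X.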

    Read the article

  • Delete Perl install files?

    - by Bjorn
    I have a managed VPS and am mysteriously running out of space. I think I found a few files I can delete, so I'm just double-checking that it's OK to delete the Perl install files: /root/perl588installer.tar.gz (pretty sure this can go) and /root/perl588installer/ (I wasn't sure if this can go; I'm thinking it's just used when Perl is installed). I rarely install this kind of thing myself, but when I do, I'm sure you can delete these files. Thanks

    Read the article

  • What management/development practices do you change when a team of 1-3 developers grows to 10+?

    - by Mag20
    My team built a website for a client several years ago. The site traffic has been growing very quickly, and our client has been asking us to grow our team to fill their maintenance and feature request needs. We started with a small number of developers, and our team has grown; now we're in the double digits. What management/development changes are the most beneficial when a team grows from a small "garage-size" team to 10+ developers?

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points, even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into continuous text (see the before/after refresh screen shots).

    From now on, for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. You have also seen in the example, where only a single data cell was copied, that no member names or row/column descriptions are copied, although these are usually required in an ad-hoc report in order to define exactly where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied, and can be made visible by double-clicking the data cell as shown in the following screen shot (which is taken from another context).

    So for each cell/number the complete connection information is stored, along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:

    By selecting the Manage POV option from the Smart View menu in Word or Power Point, the POV Manager – Queries window opens. You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right-hand side of the window.
    After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in such a way that dimensions you might want to change from within Word or Power Point are placed in the POV.

    And there is another really nice feature I wouldn't want to leave unmentioned: using dynamic data points in the way described above, you will never miss or need to search again for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even one single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers from within your Word or Power Point document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. Apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

    So far about embedding dynamic data points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases, or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article
