Search Results

Search found 41483 results on 1660 pages for 'list style'.


  • How to Create a Send/Receive Group for RSS Feeds in Outlook 2013

    - by Lori Kaufman
    If you choose to manually update your RSS feeds on demand, there is a way to do this without having to send and receive your email at the same time. You can create a special Send/Receive Group for your RSS feeds.
    NOTE: If you choose to not have your RSS feeds updated automatically, creating a separate Send/Receive Group for your RSS feeds is useful so you can update them when you want to.
    1. To begin creating a new Send/Receive Group, click the File tab.
    2. Click Options in the menu on the left side of the Account Information screen.
    3. On the Outlook Options dialog box, click Advanced in the left pane list of menu options.
    4. In the right pane, scroll down to the Send and receive section and click the Send/Receive button.
    5. On the Send/Receive Groups dialog box, click New next to the list of groups.
    6. On the Send/Receive Group Name dialog box, enter a name, such as "RSS Feeds On Demand Only," in the edit box and click OK.
    7. For all the other Accounts, except RSS, in the list on the left, de-select the Include RSS Feeds in this Send/Receive group check box so there is NO check mark in the box.
    8. Click RSS under Accounts, and make sure the Include RSS Feeds in this Send/Receive group check box is selected. NOTE: If you want to have a separate Send/Receive group for each RSS feed, or want to group certain RSS feeds together, you can turn specific feeds on and off in the lower half of the Send/Receive Settings dialog box. If you decide to do this, you might specify a more appropriate name for each Send/Receive group for the RSS feeds.
    9. Click OK to accept your changes and close the Send/Receive dialog box.
    10. Make sure your new Send/Receive group is selected in the list of groups on the Send/Receive Groups dialog box.
    11. De-select all the options under the Setting for group section at the bottom of the dialog box and click Close. This prevents this group from being updated when you click the general Send/Receive button to retrieve your email.
    12. Click OK on the Outlook Options dialog box.
    To manually update your RSS feeds, click the Send / Receive tab, click Send/Receive Groups, and select your new group from the drop-down list. You can change, rename, or remove any Send/Receive Groups you create by accessing the Send/Receive Groups dialog box again.

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2, which was released in July 2013 as a part of Oracle Web Tier 12c, is the web server component of Oracle Fusion Middleware. In essence this is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) which includes modules developed specifically by Oracle. It provides listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by the WebLogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words, all the tools which are familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control, etc.) are presented as a part of the WebLogic Management Framework and are used for managing Java and System Components for both the WebLogic Server and Standalone Domain types. You can familiarize yourself with these terms using the related documentation:
    1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html
    2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260
    In this post I would like to cover a rather simple use case: how to configure OHS as a web proxy in a WebLogic Cluster environment. For example, we have an existing WebLogic Domain where some managed servers have been joined to a cluster and host business applications. We need to configure a web proxy component which will act as the entry point and load balancer for user requests to our cluster. Of course, we could install good old Apache HTTP Server and configure the mod_wl plugin. However, this solution is not optimal from a manageability perspective: we would need to install Apache, install the additional plugin, and then configure it by editing a configuration file, which is not really convenient for FMW administrators and often increases the time needed to perform simple administrative tasks. Alternatively, we could use OHS as a System Component within the WebLogic Domain and use the full power of the WebLogic Management Framework in order to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me.
    First of all it is necessary to download the OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html
    As we will use Fusion Middleware Control for managing OHS instances, it is necessary to extend your domain with the Enterprise Manager and the Oracle ADF and JRF templates. This is not the topic of this post, but you can get more information from the documentation or one of my previous posts:
    http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64
    https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c
    Note: you should have a properly configured Node Manager utility for managing OHS instances.
    Let's consider the configuration process step by step:
    1. Shut down all WebLogic instances of the existing domain, including the Admin Server;
    2. Install Oracle HTTP Server. You should use your Fusion Middleware Home Path (e.g. /u01/Oracle/FMW12) as the Installation Location and select the Colocated HTTP Server option as the Installation Type. I will not focus on this topic in this post. All information related to OHS installation can be found here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009
    3. Next we need to extend our existing domain with the OHS component.
    In order to do this you should do the following:
    a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh);
    b. On step 1 select the Update an existing domain option and point to your Fusion Middleware Home Path;
    c. On step 2 check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates;
    d. Go through the other steps without any changes and finish the configuration process.
    4. Start the Admin Server and all managed servers related to your cluster;
    5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL;
    6. Now we will create an OHS instance within our WebLogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item;
    7. Enter edit mode by clicking the Changes -> Lock&Edit menu item;
    8. Create a new OHS instance by clicking the Create button;
    9. Define the Instance Name (e.g. DevOSH) and Machine parameters;
    10. Now we need to define the listen port. By default OHS will use port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do it, right click on our created OHS instance (left-hand panel) and navigate to Administration -> Port Configuration;
    11. Click on the record with port number 7777 and then click the Edit button;
    12. Change the port number value (in our case this will be 8080) and then click the OK button;
    13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server instances/cluster;
    14. In order to do it, right click on our created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration;
    a. In WebLogic Cluster you should enter the cluster address (define <host:port> for all managed servers which participate in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005
    b. Define the WebLogic Port parameter at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers);
    c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request;
    d. In the Location table define the list of endpoint locations which you would like to process. In order to do this click the Add Row button and define the Location, WebLogic Cluster, Path Trim and Path Prefix parameters (if required);
    e. Click the Apply button in order to save changes (a sketch of the resulting configuration file is shown after these steps).
    15. Activate the changes by clicking the Changes -> Activate Changes menu item;
    16. Finally we will start the configured OHS instance. Right click on the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item;
    17. Ensure that the OHS instance is up and running and then test your environment. Run an application deployed to your WebLogic Cluster, accessing it via the OHS web proxy.
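    For reference, here is a minimal sketch of the kind of entries the settings from step 14 produce in mod_wl_ohs.conf. The host addresses, port numbers and the /myapp location are assumptions taken from this example, not values from a real installation:
        # mod_wl_ohs.conf - hypothetical result of the values entered in step 14
        <IfModule weblogic_module>
              # All managed servers participating in the cluster
              WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
              # Refresh the cluster list dynamically on every request
              DynamicServerList ON
        </IfModule>
        # Requests for /myapp are forwarded to the WebLogic cluster
        <Location /myapp>
              SetHandler weblogic-handler
              WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
        </Location>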

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.
    Environment
    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB
    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:
        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row>. In total, this generated 100,000 update statements to later replicate during the test itself.
    Test Methodology: The number of slave workers to test with was configured using SET GLOBAL slave_parallel_workers=<workers>. Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). (A sketch of this per-run sequence is shown after this excerpt.)
    Test Reset: The test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog.
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 workers and the QPS metric calculated and averaged for each worker count.
    MySQL Configuration
    The relevant configuration settings used for MySQL are as follows:
        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE
    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    0 worker threads:
       - current (i.e. single threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - SQL thread both reads and executes the events
    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
       - coordinator reads the event and hands it to the worker who executes
    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
       - coordinator reads events and hands them to the workers who execute them
    Results
    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads).
    Figure 1: 5x Higher Performance with Multi-Threaded Slaves
    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.
    Figure 2: Detailed Results
    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
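    As a rough illustration of the test methodology described above, one benchmark iteration on the slave could be scripted along these lines. This is only a sketch: the binlog file name and position are placeholders standing in for the master coordinates recorded after the 100,000 updates were generated, and are not values from the benchmark itself:
        -- Hypothetical sketch of one benchmark iteration on the MySQL 5.6 slave
        STOP SLAVE;
        SET GLOBAL slave_parallel_workers = 10;   -- number of workers under test
        START SLAVE IO_THREAD;                    -- copy the 100k updates into the relay log first
        -- ... wait until the relay log has caught up with the master ...
        START SLAVE SQL_THREAD;                   -- benchmark clock starts here
        -- Block until the SQL thread has applied everything up to the master's
        -- position after the 100,000 updates ('mysql-bin.000002'/107 are placeholders)
        SELECT MASTER_POS_WAIT('mysql-bin.000002', 107);
        -- benchmark clock stops when MASTER_POS_WAIT() returns; QPS = 100000 / elapsed seconds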
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:
        relay-log-info-repository=TABLE
        master-info-repository=TABLE
    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.
    Conclusion
    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.
    Appendix – Detailed Results

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx
    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our Online Forms strategy to use Custom lists instead of Form Libraries. I thought everything would be smooth sailing as we are using all OOTB features. So, we customised a Custom list form using InfoPath and added a few Rich Text Boxes (spell check is a requirement for this specific project). All is good in the InfoPath client, including the spell checker, so, happy days, I published straight away.
    Here come the surprises. I browsed to my Custom List and clicked Add New Item. This launched my form in a modal dialog format. I went to my Rich Text Boxes to check the spell checker, and voila, it's disabled! I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked to my old blog post. I also tried placing the spell check javascript into a Content Editor Web Part on the item's New form and Edit form. It launches the Spell Check dialog, but it doesn't spellcheck the page correctly.
    At this point, I decided I needed to get my project across ASAP, so enough with the experimentation, and I logged a ticket with Microsoft Premier Support. On a call with the Support Engineer, I browsed through the Custom List and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar is now enabled! Surprised? Not much, it's Microsoft!
    Anyway, to cut my story short, here is a summary of my solution:
    1. Navigate to your Custom List.
    2. In the Ribbon Toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This will display the newifs.aspx, which is the page displayed when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts. In this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP would do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a Rich Text box. Tadah! The Spell Check tab is now enabled!
    Do the same steps for the (Item) Edit Form to enable spell checking when editing an item. This "no code" solution was discovered purely by accident!

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
    Next in this series on industry-specific highlights of Oracle SOA Suite customers is the Travel and Transportation industry. If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services. For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead-time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, and how they have integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customer applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters.
    Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:
    • Customers fly and airline succeeds with an IT transformation. Company: All Nippon Airways | Customer Oracle or Profit Magazine Article | Travel and Transportation | Published on January 06, 2014. Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today's economic climate requires all of those things, as well a...
    • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform. Company: Openmatics s.r.o. | Customer Snapshot | Automotive | Published on May 20, 2014. Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi...
    • Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate. Company: SFpark | Customer Oracle or Profit Magazine Article | Professional Services | Published on August 01, 2012. Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. "Parking is a universal is...
    • Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution. Company: Globalia Corporación Empresarial S.A. | Customer Snapshot | Travel and Transportation | Published on April 29, 2013. Globalia Corporación Empresarial S.A. deployed Oracle Coherence to reengineer the group's core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 millise...
    • Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture. Company: Choice Hotels | Press Release | Travel and Transportation | Published on August 07, 2012. Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite.
    • Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers' Data Access. Company: Sascar | Customer Case Study | Travel and Transportation | Published on February 07, 2014. Description – Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster.
    • Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs. Company: Directorate General of Civil Aviation (DGAC) | Customer Snapshot | Travel and Transportation | Published on May 24, 2013. With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n...
    • Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry. Company: Schneider National, Inc. | Customer Snapshot | Travel and Transportation | Published on February 26, 2013. Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit...
    • DGAC Cuts Subscription Costs with Oracle. Company: DGAC | Video | Travel and Transportation | Published on October 31, 2012. Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduces the cost of subscriptions to newsletters and provides its 12,500 employees a collaborative workspace portal.
    • Asiana Airlines Builds PIP System with Oracle Solutions. Company: Asiana Airlines | Video | Travel and Transportation | Published on July 26, 2012. With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines builds a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho...
    • Choice Hotels Reduces Time to Market with Oracle WebCenter. Company: Choice Hotels | Video | Travel and Transportation | Published on April 11, 2014. Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support...
    • An Interview with Schneider National's Judy Lemke. Company: Schneider National | Video | Travel and Transportation | Published on December 17, 2013. Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change.
    For more details on these case studies, you can use this pre-filtered search on "Travel and Transportation" / "Middleware" / "Service Oriented Architecture" or browse on your own at www.oracle.com/customers

    Read the article

  • Multithreading: Communication from Parent thread to child thread

    - by Dennis Nowland
    I have a list of threads, normally 3. Each of the threads references a WebBrowser control that communicates with the parent control to populate a DataGridView. What I need to do is: when the user clicks the button in a DataGridViewButtonCell, the corresponding data should be sent back to the WebBrowser control within the child thread that originally communicated with the main thread. But when I try to do this I receive the following error message: 'COM object that has been separated from its underlying RCW cannot be used.' My problem is that I cannot figure out how to reference the relevant WebBrowser control. I would appreciate any help that anyone can give me. The language used is C# WinForms, .NET 4.0 targeted.
    Code sample: The following code is executed when the user clicks the Start button in the main thread.
        private void StartSubmit(object idx)
        {
            // Method used by the new thread to initialise a 'myBrowser' control inherited
            // from the WebBrowser control. Each submitters object is an instance of the
            // custom control 'myBrowser', which holds details about the function of the object, e.g.:
            // index: an integer value which represents the thread's id
            int index = (int)idx;
            // submitters[index] is an instance of the 'myBrowser' control
            submitters[index] = new myBrowser();
            // the thread's integer id
            submitters[index]._ThreadNum = index;
            // naming convention used: 'browser' + the thread index
            submitters[index].Name = "browser" + index;
            // set the list in the 'myBrowser' class to hold a copy of the list found in the main thread
            submitters[index]._dirs = dirLists[index];
            // suppress any javascript errors that may occur in the 'myBrowser' control
            submitters[index].ScriptErrorsSuppressed = true;
            // attach the event handler
            submitters[index].DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(DocumentCompleted);
            // advance to the next un-opened address in the DataGridView, then navigate to that address
            // in the 'myBrowser' control.
            SetNextDir(submitters[index]);
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            // used to fill the List<string> for use in each thread
            fillDirs();
            // connections is the List<Thread> holding the threads that have been opened
            // (1 to 10 maximum)
            for (int n = 0; n < (connections.Length); n++)
            {
                // initialise a new thread with the StartSubmit method, passing parameters
                connections[n] = new Thread(new ParameterizedThreadStart(StartSubmit));
                // naming convention used: 'conn' + the thread index, i.e. 'conn1' to 'conn10'
                connections[n].Name = "conn" + n.ToString();
                // the WebBrowser control needs to be run in the single-threaded apartment state
                connections[n].SetApartmentState(ApartmentState.STA);
                // start the thread, passing the thread index
                connections[n].Start(n);
            }
        }
    Once the 'myBrowser' control is fully loaded, I am inserting form data into web forms found in web pages loaded via data entered into rows found in the DataGridView. Once a user has entered the relevant details into the different areas in the row, they can then click a DataGridViewButtonCell that collects the data entered, which then has to be sent back to the corresponding 'myBrowser' object that is found on a child thread. Thank you.

    Read the article

  • Single Key Multiple Values Data Structure for one to many mapping

    - by nijhawan.saurabh
    Dictionaries are good: they are great for storing key/value pairs, but what if you want to store multiple values for a single key? Dictionaries do not allow duplicate keys. I came across a nice way to represent such a data structure using one of the extension methods (ToLookup) present in the System.Linq namespace, which converts an IEnumerable<T> to an ILookup<TKey, TElement>.
    Now, there are two parameters this method expects (the other overload expects 3 parameters):
    1. IEnumerable<TSource> - this list contains the actual data.
    2. Func<TSource, TKey> keySelector - the delegate which computes the keys.
    The method returns the following: ILookup<TKey, TElement>. This data structure stores keys and multiple values along those keys.
    Let's see a small example:
        using System;
        using System.Collections.Generic;
        using System.Linq;

        internal class Program
        {
            #region Methods

            private static void Main(string[] args)
            {
                // Create a list of strings.
                var list = new List<string> { "IceCream1", "Chocolate Moose", "IceCream2" };

                // Generate a lookup data structure, keyed by string length
                ILookup<int, string> lookupDs = list.ToLookup(item => item.Length);

                // Enumerate groupings.
                foreach (var group in lookupDs)
                {
                    foreach (string element in group)
                    {
                        Console.WriteLine(element);
                    }
                }
            }

            #endregion
        }
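    As a usage note, a lookup can also be queried directly by key, and unlike a Dictionary, asking for a key that is not present simply yields an empty group rather than throwing. A minimal sketch, continuing from the Main method above with the same sample values:
        // Items of length 9: "IceCream1" and "IceCream2"
        foreach (string element in lookupDs[9])
        {
            Console.WriteLine(element);
        }
        // A key that is not present returns an empty sequence rather than throwing
        Console.WriteLine(lookupDs[42].Count()); // prints 0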

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project
    Right click Workspaces and click Create New OA Workspace and name it PRajkumarCustSearch. Automatically a new OA Project will also be created. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo
    2. Create a New Application Module (AM)
    Right click on CustSearchDemo > New > ADF Business Components > Application Module
    Name -- CustSearchAM
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server
    3. Enable Passivation for the Root UI Application Module (AM)
    Right click on CustSearchAM > Edit CustSearchAM > Custom Properties
    Name – RETENTION_LEVEL
    Value – MANAGE_STATE
    Click Add > Apply > OK
    4. Create a Test Table and insert some data into it (for testing purposes)
        CREATE TABLE xx_custsearch_demo
        (
          -- ---------------------
          -- Data Columns
          -- ---------------------
          column1              VARCHAR2(100),
          column2              VARCHAR2(100),
          column3              VARCHAR2(100),
          column4              VARCHAR2(100),
          -- ---------------------
          -- Who Columns
          -- ---------------------
          last_update_date     DATE      NOT NULL,
          last_updated_by      NUMBER    NOT NULL,
          creation_date        DATE      NOT NULL,
          created_by           NUMBER    NOT NULL,
          last_update_login    NUMBER
        );
        INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);
    Now we have 4 records in our custom table.
    5. Create a New Entity Object (EO)
    Right click on CustSearchDemo > New > ADF Business Components > Entity Object
    Name – CustSearchEO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server
    Database Objects -- XX_CUSTSEARCH_DEMO
    Note – By default ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.
    6. Create a New View Object (VO)
    Right click on CustSearchDemo > New > ADF Business Components > View Object
    Name -- CustSearchVO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server
    In Step 2 on the Entity page select CustSearchEO and shuttle it to the selected list.
    In Step 3 in the Attributes window select the columns Column1, Column2, Column3, Column4, and shuttle them to the selected list.
    On the Java page deselect Generate Java File for View Object Class: CustSearchVOImpl and select Generate Java File for View Row Class: CustSearchVORowImpl.
    7. Add Your View Object to the Root UI Application Module
    Right click on CustSearchAM > Application Modules > Data Model
    Select CustSearchVO and shuttle it to the Data Model list.
    8. Create a New Page
    Right click on CustSearchDemo > New > Web Tier > OA Components > Page
    Name -- CustSearchPG
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui
    9. Select CustSearchPG and go to the structure pane, where a default region has been created.
    10. Select region1 and set the following properties:
    ID -- PageLayoutRN
    Region Style -- PageLayout
    AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Window Title – AutoCustomize Search Page
    Title – AutoCustomization Search Page
    Auto Footer -- True
    11. Add a Query Bean to Your Page
    Right click on PageLayoutRN > New > Region
    Select the new region region1 and set the following properties:
    ID – QueryRN
    Region Style – query
    Construction Mode – autoCustomizationCriteria
    Include Simple Panel – False
    Include Views Panel – False
    Include Advanced Panel – False
    12. Create a New Region of style table
    Right click on QueryRN > New > Region Using Wizard
    Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Available View Usages – CustSearchVO1
    In Step 2 in Region Properties set the following properties:
    Region ID – CustSearchTable
    Region Style – Table
    In Step 3 in View Attributes shuttle all the items (Column1, Column2, Column3, Column4) available in "Available View Attributes" to "Selected View Attributes".
    In Step 4 on the Region Items page set the style to "messageStyledText" for all items.
    13. Select CustSearchTable in the Structure panel and set the property Width to 100%.
    14. Include a Simple Search Panel
    Right click on QueryRN > New > simpleSearchPanel
    Automatically region2 (header region) and region1 (messageComponentLayout region) are created. Set the following properties for region2:
    Id – SimpleSearchHeader
    Text -- Simple Search
    15. Now right click on the messageComponentLayout region (SimpleSearchMappings) and create two message text input beans, then set the properties below for each.
    Message TextInputBean1
    Id – SearchColumn1
    Search Allowed – True
    Data Type – VARCHAR2
    Maximum Length –
    CSS Class – OraFieldText
    Prompt – Column1
    Message TextInputBean2
    Id – SearchColumn2
    Search Allowed -- True
    Data Type – VARCHAR2
    Maximum Length – 100
    CSS Class – OraFieldText
    Prompt – Column2
    16. Now right click on the query components and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically.
    17. Now select QueryCriteriaMap1 and set the properties below:
    Id – SearchColumn1Map
    Search Item – SearchColumn1
    Result Item – Column1
    18. Now again right click on simpleSearchMappings -> New -> queryCriteriaMap, and set the properties below:
    Id – SearchColumn2Map
    Search Item – SearchColumn2
    Result Item – Column2
    19. Congratulations, you have successfully finished the Auto Customization Search page. Run your CustSearchPG page and test your work.

    Read the article

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have 1 or more Projects. A project can have 0 or more sub-projects (which can have their own sub-projects, and so on) as well as 1 or more Lines. Lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. Currently, it looks a bit like this:
        public class Contract
        {
            public List<Project> Projects { get; set; }
        }

        public class Project
        {
            public Contract OwningContract { get; set; }
            public Project ParentProject { get; set; }
            public List<Project> SubProjects { get; set; }
            public List<Line> Lines { get; set; }
        }

        public class Line
        {
            public Project OwningProject { get; set; }
            public Line ParentLine { get; set; }
            public List<Line> SubLines { get; set; }
        }
    We're using the M-V-VM "pattern" and use these models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line. The issue is reflecting at the highest level (the contract) changes made at the lowest level. Looking for some thoughts as to how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> which would contain ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called "Parent" which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. Thanks! Any help is appreciated.
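    A minimal sketch of the dictionary-based idea described above might look like the following. The member names (ProjectsById, ParentProjectId) are illustrative assumptions, not part of the original design:
        using System;
        using System.Collections.Generic;

        public class Contract
        {
            public Contract()
            {
                ProjectsById = new Dictionary<Guid, Project>();
            }
            // Flat registry of every project in the contract, regardless of nesting level
            public Dictionary<Guid, Project> ProjectsById { get; set; }
        }

        public class Project
        {
            public Guid Id { get; set; }
            // Reference to the parent by key instead of an object reference,
            // which removes the circular Project <-> Project / Contract links
            public Guid? ParentProjectId { get; set; }
            public List<Line> Lines { get; set; }
        }

        // Resolving the parent then becomes a dictionary lookup on the owning contract:
        // contract.ProjectsById.TryGetValue(project.ParentProjectId.Value, out Project parent);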

    Read the article

  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the moment. I have the following class (just a part of the class):
        public class GUIWindow
        {
            #region Static Fields
            // The standard image for windows.
            public static IngameImage StandardBackgroundImage;
            #endregion
        }
    IngameImage is just one of my own classes, but actually it contains a Texture2D (and some other things). In another class I load a list of GUIWindows by deserializing an XML file.
        public static GUI Initializazion(string pXMLPath, ContentManager pConMan)
        {
            GUI myGUI = pConMan.Load<GUI>(pXMLPath);
            GUIWindow.StandardBackgroundImage = new IngameImage(pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath),
                Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false);
            System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath);
            System.Console.WriteLine("Windows loaded");
            return myGUI;
        }
    Here this line:
        System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
    prints "true". To load the GUIWindows I need an "empty" constructor, which looks like this:
        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            //Image = new IngameImage(StandardBackgroundImage);
            //System.Console.WriteLine(
            //Image.IsActive = false;
            SelectedButton = null;
            IsActive = false;
        }
    As you can see, I commented lines out in the constructor, because otherwise this would crash. Here the line
        System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
    doesn't print anything; it just crashes with the following error message:
        Building content threw NullReferenceException: Object reference not set to an instance of an object.
    Why does this happen? Before the program wants to load the list, it prints "true". But in the constructor, i.e. during the loading of the list, it prints "false". Can anybody please tell me why this happens and how to fix it?

    Read the article

  • Correct syntax in stored procedure and method using MsSqlProvider.ExecProcedure? [migrated]

    - by Dudi
    I have a problem with ASP.NET and a database procedure. My procedure in the MSSQL database:
        USE [dbase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[top1000]
            @Published datetime output,
            @Title nvarchar(100) output,
            @Url nvarchar(1000) output,
            @Count INT output
        AS
        SET @Published = (SELECT TOP 1000 dbo.vst_download_files.dfl_date_public
                          FROM dbo.vst_download_files
                          ORDER BY dbo.vst_download_files.dfl_download_count DESC)
        SET @Title = (SELECT TOP 1000 dbo.vst_download_files.dfl_name
                      FROM dbo.vst_download_files
                      ORDER BY dbo.vst_download_files.dfl_download_count DESC)
        SET @Url = (SELECT TOP 1000 dbo.vst_download_files.dfl_source_url
                    FROM dbo.vst_download_files
                    ORDER BY dbo.vst_download_files.dfl_download_count DESC)
        SET @Count = (SELECT TOP 1000 dbo.vst_download_files.dfl_download_count
                      FROM dbo.vst_download_files
                      ORDER BY dbo.vst_download_files.dfl_download_count DESC)
    And my method in the website project:
        public static void Top1000()
        {
            List<DownloadFile> List = new List<DownloadFile>();
            SqlDataReader dbReader;
            SqlParameter published = new SqlParameter("@Published", SqlDbType.DateTime2);
            published.Direction = ParameterDirection.Output;
            SqlParameter title = new SqlParameter("@Title", SqlDbType.NVarChar);
            title.Direction = ParameterDirection.Output;
            SqlParameter url = new SqlParameter("@Url", SqlDbType.NVarChar);
            url.Direction = ParameterDirection.Output;
            SqlParameter count = new SqlParameter("@Count", SqlDbType.Int);
            count.Direction = ParameterDirection.Output;
            SqlParameter[] parm = { published, title, count };
            dbReader = MsSqlProvider.ExecProcedure("top1000", parm);
            try
            {
                while (dbReader.Read())
                {
                    DownloadFile df = new DownloadFile();
                    //df.AddDate = dbReader["dfl_date_public"];
                    df.Name = dbReader["dlf_name"].ToString();
                    df.SourceUrl = dbReader["dlf_source_url"].ToString();
                    df.DownloadCount = Convert.ToInt32(dbReader["dlf_download_count"]);
                    List.Add(df);
                }
                XmlDocument top1000Xml = new XmlDocument();
                XmlNode XMLNode = top1000Xml.CreateElement("products");
                foreach (DownloadFile df in List)
                {
                    XmlNode productNode = top1000Xml.CreateElement("product");
                    XmlNode publishedNode = top1000Xml.CreateElement("published");
                    publishedNode.InnerText = "data dodania";
                    XMLNode.AppendChild(publishedNode);
                    XmlNode titleNode = top1000Xml.CreateElement("title");
                    titleNode.InnerText = df.Name;
                    XMLNode.AppendChild(titleNode);
                }
                top1000Xml.AppendChild(XMLNode);
                top1000Xml.Save("\\pages\\test.xml");
            }
            catch
            {
            }
            finally
            {
                dbReader.Close();
            }
        }
    And when I call MsSqlProvider.ExecProcedure("top1000", parm), I get: "String[1]: property Size has invalid size of 0." Where should I look for the solution? The procedure or the method?

    Read the article

  • JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
    This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite. If you have not already done so, I recommend you look at the previous posts in this series, as they include steps which this example builds upon. The following examples will demonstrate how to write to and read from the queue from a SOA process.
    1. Recap and Prerequisites
    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we wrote and deployed BPEL composites, which enqueued and dequeued a simple XML payload.
    AQ JMS allows you to interoperate with database Advanced Queueing via JMS in WebLogic server and therefore take advantage of database features, while maintaining compliance with the JMS architecture. AQ JMS uses the WebLogic JMS Foreign Server framework. A full description of this functionality can be found in the following Oracle documentation:
    Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06, 7. Interoperating with Oracle AQ JMS
    http://docs.oracle.com/cd/E23943_01/web.1111/e13738/aq_jms.htm#CJACBCEJ
    For easier reference, this sample will use the same names for the objects as in the above document, except for the name of the database user, as it is possible that this user already exists in your database. We will create the following objects:
    Database Objects
    Name                                   Type
    AQJMSUSER                              Database User
    MyQueueTable                           Advanced Queue (AQ) Table
    UserQueue                              Advanced Queue
    WebLogic Server Objects
    Object Name                            Type                                     JNDI Name
    aqjmsuserDataSource                    Data Source                              jdbc/aqjmsuserDataSource
    AqJmsModule                            JMS System Module
    AqJmsForeignServer                     JMS Foreign Server
    AqJmsForeignServerConnectionFactory    JMS Foreign Server Connection Factory    AqJmsForeignServerConnectionFactory
    AqJmsForeignDestination                AQ JMS Foreign Destination               queue/USERQUEUE
    eis/aqjms/UserQueue                    Connection Pool                          eis/aqjms/UserQueue
    2. Create a Database User and Advanced Queue
    The following steps can be executed in the database client of your choice, e.g. JDeveloper or SQL Developer. The examples below use SQL*Plus. Log in to the database as a DBA user, for example SYSTEM or SYS. Create the AQJMSUSER user and grant privileges to enable the user to create AQ objects.
    Create Database User and Grant AQ Privileges
        sqlplus system/password as SYSDBA
        GRANT connect, resource TO aqjmsuser IDENTIFIED BY aqjmsuser;
        GRANT aq_user_role TO aqjmsuser;
        GRANT execute ON sys.dbms_aqadm TO aqjmsuser;
        GRANT execute ON sys.dbms_aq TO aqjmsuser;
        GRANT execute ON sys.dbms_aqin TO aqjmsuser;
        GRANT execute ON sys.dbms_aqjms TO aqjmsuser;
    Create the Queue Table and Advanced Queue and Start the AQ
    The following commands are executed as the aqjmsuser database user.
    Create the Queue Table
        connect aqjmsuser/aqjmsuser;
        BEGIN
          dbms_aqadm.create_queue_table (
            queue_table        => 'myQueueTable',
            queue_payload_type => 'sys.aq$_jms_text_message',
            multiple_consumers => false );
        END;
        /
    Create the AQ
        BEGIN
          dbms_aqadm.create_queue (
            queue_name  => 'userQueue',
            queue_table => 'myQueueTable' );
        END;
        /
    Start the AQ
        BEGIN
          dbms_aqadm.start_queue ( queue_name => 'userQueue' );
        END;
        /
    The above commands can be executed in a single PL/SQL block, but are shown as separate blocks in this example for ease of reference. You can verify the queue by executing the SQL command
        SELECT object_name, object_type FROM user_objects;
    which should display the following objects:
        OBJECT_NAME                    OBJECT_TYPE
        ------------------------------ -------------------
        SYS_C0056513                   INDEX
        SYS_LOB0000170822C00041$$      LOB
        SYS_LOB0000170822C00040$$      LOB
        SYS_LOB0000170822C00037$$      LOB
        AQ$_MYQUEUETABLE_T             INDEX
        AQ$_MYQUEUETABLE_I             INDEX
        AQ$_MYQUEUETABLE_E             QUEUE
        AQ$_MYQUEUETABLE_F             VIEW
        AQ$MYQUEUETABLE                VIEW
        MYQUEUETABLE                   TABLE
        USERQUEUE                      QUEUE
    Similarly, you can view the objects in JDeveloper via a Database Connection to the AQJMSUSER.
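    As an optional sanity check (not part of the original steps), you could enqueue a test message directly from PL/SQL as AQJMSUSER before moving on to the WebLogic configuration. This is only a sketch; the message text is arbitrary:
        -- Hypothetical quick test: enqueue one JMS text message into userQueue
        DECLARE
          enq_opts   DBMS_AQ.ENQUEUE_OPTIONS_T;
          msg_props  DBMS_AQ.MESSAGE_PROPERTIES_T;
          msg        SYS.AQ$_JMS_TEXT_MESSAGE;
          msg_id     RAW(16);
        BEGIN
          msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT();
          msg.set_text('Hello from PL/SQL');
          DBMS_AQ.ENQUEUE(queue_name         => 'userQueue',
                          enqueue_options    => enq_opts,
                          message_properties => msg_props,
                          payload            => msg,
                          msgid              => msg_id);
          COMMIT;
        END;
        /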
Navigate to domain > Services > Data Sources and press New then Generic Data Source. Use the values:Name: aqjmsuserDataSource JNDI Name: jdbc/aqjmsuserDataSource Database type: Oracle Database Driver: *Oracle’ Driver (Thin XA) for Instance connections; Versions:9.0.1 and later Connection Properties: Enter the connection information to the database containing the AQ created above and enter aqjmsuser for the User Name and Password. Press Test Configuration to verify the connection details and press Next. Target the data source to the soa server. The data source will be displayed in the list. It is a good idea to test the data source at this stage. Click on aqjmsuserDataSource, select Monitoring > Testing > soa_server1 and press Test Data Source. The result is displayed at the top of the page. Configure a JMS System Module The JMS system module is required to host the JMS foreign server for AQ resources. Navigate to Services > Messaging > JMS Modules and select New. Use the values: Name: AqJmsModule (Leave Descriptor File Name and Location in Domain empty.) Target: soa_server1 Click Finish. The other resources will be created in separate steps. The module will be displayed in the list.   Configure a JMS Foreign Server A foreign server is required in order to reference a 3rd-party JMS provider, in this case the database AQ, within a local WebLogic server JNDI tree. Navigate to Services > Messaging > JMS Modules and select (click on) AqJmsModule to configure it. Under Summary of Resources, select New then Foreign Server. Name: AqJmsForeignServer Targets: The foreign server is targeted automatically to soa_server1, based on the JMS module’s target. Press Finish to create the foreign server. The foreign server resource will be listed in the Summary of Resources for the AqJmsModule, but needs additional configuration steps. Click on AqJmsForeignServer and select Configuration > General to complete the configuration: JNDI Initial Context Factory: oracle.jms.AQjmsInitialContextFactory JNDI Connection URL: <empty> JNDI Properties Credential:<empty> Confirm JNDI Properties Credential: <empty> JNDI Properties: datasource=jdbc/aqjmsuserDataSource This is an important property. It is the JNDI name of the data source created above, which points to the AQ schema in the database and must be entered as a name=value pair, as in this example, e.g. datasource=jdbc/aqjmsuserDataSource, including the “datasource=” property name. Default Targeting Enabled: Leave this value checked. Press Save to save the configuration. At this point it is a good idea to verify that the data source was written correctly to the config file. In a terminal window, navigate to $MIDDLEWARE_HOME/user_projects/domains/soa_domain/config/jms  and open the file aqjmsmodule-jms.xml . The foreign server configuration should contain the datasource name-value pair, as follows:   <foreign-server name="AqJmsForeignServer">         <default-targeting-enabled>true</default-targeting-enabled>         <initial-context-factory>oracle.jms.AQjmsInitialContextFactory</initial-context-factory>         <jndi-property>           <key> datasource </key>           <value> jdbc/aqjmsuserDataSource </value>         </jndi-property>   </foreign-server> </weblogic-jms> Configure a JMS Foreign Server Connection Factory When creating the foreign server connection factory, you enter local and remote JNDI names. 
The name of the connection factory itself and the local JNDI name are arbitrary, but the remote JNDI name must match a specific format, depending on the type of queue or topic to be accessed in the database. This is very important and if the incorrect value is used, the connection to the queue will not be established and the error messages you get will not immediately reflect the cause of the error. The formats required (Remote JNDI names for AQ JMS Connection Factories) are described in the section Configure AQ Destinations  of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In this example, the remote JNDI name used is   XAQueueConnectionFactory  because it matches the AQ and data source created earlier, i.e. thin with AQ. Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories then New.Name: AqJmsForeignServerConnectionFactory Local JNDI Name: AqJmsForeignServerConnectionFactory Note: this local JNDI name is the JNDI name which your client application, e.g. a later BPEL process, will use to access this connection factory. Remote JNDI Name: XAQueueConnectionFactory Press OK to save the configuration. Configure an AQ JMS Foreign Server Destination A foreign server destination maps the JNDI name on the foreign JNDI provider to the respective local JNDI name, allowing the foreign JNDI name to be accessed via the local server. As with the foreign server connection factory, the local JNDI name is arbitrary (but must be unique), but the remote JNDI name must conform to a specific format defined in the section Configure AQ Destinations  of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In our example, the remote JNDI name is Queues/USERQUEUE , because it references a queue (as opposed to a topic) with the name USERQUEUE. We will name the local JNDI name queue/USERQUEUE, which is a little confusing (note the missing “s” in “queue), but conforms better to the JNDI nomenclature in our SOA server and also allows us to differentiate between the local and remote names for demonstration purposes. Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Destinations and select New.Name: AqJmsForeignDestination Local JNDI Name: queue/USERQUEUE Remote JNDI Name:Queues/USERQUEUE After saving the foreign destination configuration, this completes the JMS part of the configuration. We still need to configure the JMS adapter in order to be able to access the queue from a BPEL processt. 4. Create a JMS Adapter Connection Pool in Weblogic Server Create the Connection Pool Access to the AQ JMS queue from a BPEL or other SOA process in our example is done via a JMS adapter. To enable this, the JmsAdapter in WebLogic server needs to be configured to have a connection pool which points to the local connection factory JNDI name which was created earlier. Navigate to Deployments > Next and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and New. Check the radio button for oracle.tip.adapter.jms.IJmsConnectionFactory and press Next. JNDI Name: eis/aqjms/UserQueue Press Finish Expand oracle.tip.adapter.jms.IJmsConnectionFactory and click on eis/aqjms/UserQueue to configure it. The ConnectionFactoryLocation must point to the foreign server’s local connection factory name created earlier. In our example, this is AqJmsForeignServerConnectionFactory . 
As a reminder, this connection factory is located under JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories and the value needed here is under Local JNDI Name. Enter AqJmsForeignServerConnectionFactory into the Property Value field for ConnectionFactoryLocation. You must then press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will manually need to activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective. This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes.
Redeploy the JmsAdapter
Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the AQ JMS queue. You can verify that the JNDI name was created correctly by navigating to Environment > Servers > soa_server1 and View JNDI Tree. Then scroll down in the JNDI Tree Structure to eis and select aqjms. This concludes the sample. In the following post, I will show you how to create a BPEL process which sends a message to this advanced queue via JMS. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
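As an optional addendum, the new JNDI names can also be exercised from a standalone JMS client before any BPEL process exists. The following is only a rough sketch and makes some assumptions: the WebLogic client library (wlthint3client.jar or weblogic.jar) is on the classpath, and the t3 host and port are placeholders for your soa_server1. It looks up the local JNDI names configured above and sends a single test message to the underlying AQ.
import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class AqJmsSendTest {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:8001"); // placeholder host:port of soa_server1
        Context ctx = new InitialContext(env);

        // Local JNDI names defined on the foreign server above
        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("AqJmsForeignServerConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/USERQUEUE");

        QueueConnection conn = qcf.createQueueConnection();
        try {
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            TextMessage msg = session.createTextMessage("Hello from AqJmsSendTest");
            sender.send(msg);
            System.out.println("Sent message " + msg.getJMSMessageID());
        } finally {
            conn.close();
        }
        ctx.close();
    }
}
If the configuration is correct, the message lands in the AQ created in the database and can be seen in its queue table.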

    Read the article

  • JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process

    - by John-Brown.Evans
    JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process
This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes
This example demonstrates how to write a simple message to an Oracle AQ via the WebLogic AQ JMS functionality from a BPEL process and a JMS adapter. If you have not yet reviewed the previous posts, please do so first, especially the JMS Step 6 post, as this one references objects created there.
1. Recap and Prerequisites
In the previous example, we created an Oracle Advanced Queue (AQ) and some related JMS objects in WebLogic Server to be able to access it via JMS.
Here are the objects which were created and their names and JNDI names:
Database Objects (Name - Type):
AQJMSUSER - Database User
MyQueueTable - Advanced Queue (AQ) Table
UserQueue - Advanced Queue
WebLogic Server Objects (Object Name - Type - JNDI Name):
aqjmsuserDataSource - Data Source - jdbc/aqjmsuserDataSource
AqJmsModule - JMS System Module - (none)
AqJmsForeignServer - JMS Foreign Server - (none)
AqJmsForeignServerConnectionFactory - JMS Foreign Server Connection Factory - AqJmsForeignServerConnectionFactory
AqJmsForeignDestination - AQ JMS Foreign Destination - queue/USERQUEUE
eis/aqjms/UserQueue - Connection Pool - eis/aqjms/UserQueue
2. Create a BPEL Composite with a JMS Adapter Partner Link
This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will write a simple XML message to the AQ JMS queue via the JMS adapter, based on the following XSD file, which consists of a single string element:
stringPayload.xsd
<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns="http://www.example.org"
               targetNamespace="http://www.example.org"
               elementFormDefault="qualified">
 <xsd:element name="exampleElement" type="xsd:string">
 </xsd:element>
</xsd:schema>
The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project JmsAdapterWriteAqJms and select SOA as the project technology type. If you already have an application, continue below.
Create a SOA Project
Create a new project and select SOA Tier > SOA Project as its type. Name it JmsAdapterWriteAqJms. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteAqJms too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.
Create an XSD File
An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and when the editor opens, select the Source view, then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the XSD item in the navigation tree.
Create a JMS Adapter Partner Link
We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it.
From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterWrite Oracle Enterprise Messaging Service (OEMS): Oracle Advanced Queueing AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the connection factory created earlier is located. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Produce Message Operation Name: Produce_message Produce Operation Parameters Destination Name: Wait for the list to populate. (Only foreign servers are listed here, because Oracle Advanced Queuing was selected earlier, in step 3) .         Select the foreign server destination created earlier, AqJmsForeignDestination (queue) . This will automatically populate the Destination Name field with the name of the foreign destination, queue/USERQUEUE . JNDI Name: The JNDI name to use for the JMS connection. This is the JNDI name of the connection pool created in the WebLogic Server.JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime. In our example, this is the value eis/aqjms/UserQueue Messages URL: We will use the XSD file we created earlier, stringPayload.xsd to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement : string . Press Next and Finish, which will complete the JMS Adapter configuration. Wire the BPEL Component to the JMS Adapter In this step, we link the BPEL process/component to the JMS adapter. From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow.   This completes the steps at the composite level. 3. Complete the BPEL Process Design Invoke the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. Assign Variables Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completion, so the process has an output to print, again to the process’s output variable. 
Double-click the Assign activity and create two Copy rules: for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement; for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result. This will create two copy rules. Press OK. This completes the BPEL and Composite design.
4. Compile and Deploy the Composite
Compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish.) You should see the message ---- Deployment finished. ---- in the Deployment frame, if the deployment was successful.
5. Test the Composite
Execute a Test Instance
In a browser, log in to the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation. Navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite) and click on JmsAdapterWriteAqJms [1.0], then press the Test button. Enter any string into the text input field, for example "Test message from JmsAdapterWriteAqJms" then press Test Web Service. If the instance is successful, you should see the same text you entered in the Response payload frame.
Monitor the Advanced Queue
The test message will be written to the advanced queue created at the top of this sample. To confirm it, log in to the database as AQJMSUSER and query the MYQUEUETABLE database table. For example, from a shell window with SQL*Plus:
sqlplus aqjmsuser/aqjmsuser
SQL> SELECT user_data FROM myqueuetable;
This will display the message contents. Similarly, you can use the JDeveloper Database Navigator to view the contents. Use a database connection to the AQJMSUSER and in the navigator, expand Queues Tables and select MYQUEUETABLE. Select the Data tab and scroll to the USER_DATA column to view its contents. This concludes this example. The following post will be the last one in this series. In it, we will learn how to read the message we just wrote using a BPEL process and AQ JMS. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
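As a further optional check, the message written by the BPEL process can be read back from plain Java through the same foreign server JNDI names used by the adapter. This is only a rough sketch with placeholder host and port; note that receiving the message dequeues it, so query MYQUEUETABLE first if you still want to see the row there.
import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class AqJmsReceiveTest {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:8001"); // placeholder host:port of soa_server1
        Context ctx = new InitialContext(env);

        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("AqJmsForeignServerConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/USERQUEUE");

        QueueConnection conn = qcf.createQueueConnection();
        try {
            conn.start();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueReceiver receiver = session.createReceiver(queue);
            Message msg = receiver.receive(5000); // wait up to 5 seconds
            if (msg instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) msg).getText());
            } else {
                System.out.println("Received: " + msg); // null if the queue was empty
            }
        } finally {
            conn.close();
        }
        ctx.close();
    }
}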

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven’t really appreciated their true value. However as I discovered this last week with Dictionaries in C# – having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I’m going to cover briefly in this post the following: Selection Sort Insertion Sort Shellsort Quicksort Mergesort Heapsort (not complete) Selection Sort Array based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this… Go through the entire array of data and find the lowest value Place the value at the front of the array The second iteration would go something like this… Go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array. Place the value at the second position in the array This process would be completed until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every items is only moved once. Selection sort is however a comparison intensive sort. If you had 10 items in a collection, just to parse the collection you would have 10+9+8+7+6+5+4+3+2=54 comparisons to sort regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try and reduce the number of comparisons if the list is already sorted – leaving one with a best case and worst case scenario for comparisons. Likewise different approaches have different levels of item movement. Depending on what is more expensive, one may give priority to one approach compared to another based on what is more expensive, a comparison or a item move. Insertion Sort Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not “doing anything” if things are sorted. Assume you had an collection of numbers in the following order… 10 18 25 30 23 17 45 35 There are 8 elements in the list. If we were to start at the front of the list – 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So… Element 5 would be copied to a temporary holder 10 18 25 30 23 17 45 35 – T 23 Element 4 would shift to Element 5 10 18 25 30 30 17 45 35 – T 23 Element 3 would shift to Element 4 10 18 25 25 30 17 45 35 – T 23 Element 2 (18) is smaller than the temporary holder so we put the temporary holder value into Element 3. 10 18 23 25 30 17 45 35 – T 23   We now have a sorted list up to element 6. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5. 
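To make the mechanics concrete, a minimal array-based version of this insert-and-shift step looks roughly like the following sketch (Java used purely for illustration):
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int temp = a[i];          // the "temporary holder"
        int j = i - 1;
        while (j >= 0 && a[j] > temp) {
            a[j + 1] = a[j];      // shift the larger sorted element up one position
            j--;
        }
        a[j + 1] = temp;          // drop the held value into its sorted slot
    }
}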
As you can see, one major setback for this technique is the shifting values up one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and “reinsert” it in a sorted position. Which would reduce the number of transactions performed on the collection. So.. Insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n number of comparisons would need to be performed to verify that it is sorted. It’s important to note that insertion sort (array based) performs a number item moves – every time an item is “out of place” several items before it get shifted up. Shellsort – Diminishing Increment Sort So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements… 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55 First we may view the collection as 7 sub-collections and sort each sublist, lets say at intervals of 7 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (Sorted) We then sort each sublist at a smaller inter – lets say 4 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (Sorted) We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort) 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 2 7 10 15 18 19 20 30 36 43 45 48 50 55 (Sorted) The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn’t any definitive method and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow for even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance on the number of comparisons and item movements of Shellsort is hard to determine, however it is considered to be considerably better than the normal insertion sort. Quicksort Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections – and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The algorithm is in general pseudo code below… Divide the collection into two sub-collections Quicksort the lower sub-collection Quicksort the upper sub-collection Combine the lower & upper sub-collection together As hinted at above, quicksort uses recursion in its implementation. 
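A compact in-place version of those four steps could look like this (again just an illustrative sketch; it simply takes the last element as the pivot):
static void quickSort(int[] a, int low, int high) {
    if (low >= high) return;                  // base case: zero or one element
    int pivot = a[high];                      // naive pivot choice: last element
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (a[j] <= pivot) {                  // element belongs to the lower sub-collection
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;  // place the pivot between the halves
    quickSort(a, low, i);                     // quicksort the lower sub-collection
    quickSort(a, i + 2, high);                // quicksort the upper sub-collection
}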
The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one would partition to sub-collections and then repeat the process on each sub collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection. Mergesort With quicksort, the average-case complexity was O(nlog2n) however the worst case complexity was still O(N*N). Mergesort improves on quicksort by always having a complexity of O(nlog2n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows… Divide the collection into two sub-collections Mergesort the first sub-collection Mergesort the second sub-collection Merge the first sub-collection and the second sub-collection As you can see.. it still pretty much looks like quicksort – so lets see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot – merge sort partitions each sub-collection based on size so that the first and second sub-collection of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is how do we put all these sub-collections together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collection and then adjusting the sorted collection. Lets have a look at a few examples… Assume 2 sub-collections with 1 element each 10 & 20 Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smallest of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20 so 10 is taken from sub-collection 1 leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20 Lets assume 2 sub-collections with 2 elements each 10 20 & 15 19 So… again we would Compare 10 with 15 – 10 is the winner so we add it to our sorted collection (10) leaving us with 20 & 15 19 Compare 20 with 15 – 15 is the winner so we add it to our sorted collection (10 15) leaving us with 20 & 19 Compare 20 with 19 – 19 is the winner so we add it to our sorted collection (10 15 19) leaving us with 20 & _ 20 is by default the winner so our sorted collection is 10 15 19 20. Make sense? Heapsort (still needs to be completed) So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why are we made to learn about selection sort and insertion sort if they are so bad – why didn’t we just skip to Mergesort & Quicksort. 
I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn’t the best way of implementing things and that you don’t need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: A heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists). Note that for this definition to work, positions are counted from 0. Does that make sense? At first glance I’m thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way – although the data is stored in a sequential collection, for a sequential collection of data to be in a valid heap – it is “semi sorted”. Let me try and explain a bit further with an example…
Example 1 of Potential Heap Data
Assume we had a collection of numbers as follows: 1[0] 2[1] 3[2] 4[3] 5[4] 6[5]. For this to be a valid heap, the element with value 1 at position [0] needs to be greater than or equal to the elements at position [1] (2k + 1) and position [2] (2k + 2), i.e. 2 and 3 – which it is not. So in the above example, the collection of numbers is not a valid heap.
Example 2 of Potential Heap Data
Lets look at another collection of numbers as follows: 6[0] 5[1] 4[2] 3[3] 2[4] 1[5]. Is this a valid heap? Well… the element with value 6 at position [0] must be greater than or equal to the elements at position [1] and position [2]. Is 6 > 5 and 6 > 4? Yes it is. Lets look at the element 5 at position [1]. It must be greater than or equal to the values at [3] & [4]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
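To make the definition concrete, a few lines of code can check the 2k + 1 / 2k + 2 rule directly (an illustrative Java sketch, positions counted from 0):
// Returns true if every element at position k is at least as large as its
// children at positions 2k+1 and 2k+2 (where those positions exist).
static boolean isValidHeap(int[] a) {
    for (int k = 0; k < a.length; k++) {
        int left = 2 * k + 1;
        int right = 2 * k + 2;
        if (left < a.length && a[k] < a[left]) return false;
        if (right < a.length && a[k] < a[right]) return false;
    }
    return true;
}
// isValidHeap(new int[]{1, 2, 3, 4, 5, 6})  -> false (Example 1)
// isValidHeap(new int[]{6, 5, 4, 3, 2, 1})  -> true  (Example 2)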

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes: Different record types in one list that use different parsing rules Hierarchical lists, for example customers with nested orders Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers Complex headers such as multiple header lines or parseable information in header Skipping of lines  Conditional or choice fields Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver, table rows have additional PK-FK relationships to express hierarchy as well as order values to maintain the file order in the resulting table.   The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation of the native file based on memory size, the XML data is never fully materialized.  The internal XML is then converted to relational schema using the same mapping rules as the ODI XML driver. How to Create an nXSD file Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. NXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces. 
The following is a sample nXSD schema:
<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" elementFormDefault="qualified" xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv" targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv" attributeFormDefault="unqualified" nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
<xsd:element name="Root">
  <xsd:complexType><xsd:sequence>
    <xsd:element name="Header">
      <xsd:complexType><xsd:sequence>
        <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
        <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
      </xsd:sequence></xsd:complexType>
    </xsd:element>
    <xsd:element name="Customer" maxOccurs="unbounded">
      <xsd:complexType><xsd:sequence>
        <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
        <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
        <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
      </xsd:sequence></xsd:complexType>
    </xsd:element>
  </xsd:sequence></xsd:complexType>
</xsd:element>
</xsd:schema>
The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator chars. There are various constructs in nXSD to parse fixed length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema Editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages, and ODI for large batches. The nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example:
Redwood City Downtown Branch, 06/01/2011
Ebeneezer Scrooge, Sandy Lane, Atherton
Tiny Tim, Winton Terrace, Menlo Park
The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
The tables for this example are:
Table ROOT (1 row):
ROOTPK - Primary Key for root element
SNPSFILENAME - Name of the file
SNPSFILEPATH - Path of the file
SNPSLOADDATE - Date of load
Table HEADER (1 row):
ROOTFK - Foreign Key to ROOT record
ROWORDER - Order of row in native document
BRANCH - Data
BRANCHORDER - Order of Branch within row
LISTDATE - Data
LISTDATEORDER - Order of ListDate within row
Table ADDRESS (2 rows):
ROOTFK - Foreign Key to ROOT record
ROWORDER - Order of row in native document
NAME - Data
NAMEORDER - Order of Name within row
STREET - Data
STREETORDER - Order of Street within row
CITY - Data
CITYORDER - Order of City within row
Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial since the HEADER and all CUSTOMER records point back to the PK of ROOT. Deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.
How to Create a Complex File Data Server in ODI
After creating the nXSD file and a test data file, and storing it on the local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings such as the source native file, the nXSD schema file, the root element, as well as the external database can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use. Ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design-time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet Complex File Processing - 0 to 60 which shows the creation of a Complex File data server as well as a model based on this server.
How to Create Models based on a Complex File Schema
Once physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets it has to be made sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.
Debugging and Error Handling
There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Dataserver.
If either the nXSD has an error or the data is non-compliant to the schema, an error will be displayed. Sample error message: Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input. Complex File FAQ Is the size of the native file limited by available memory? No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory. Should I always use the complex file driver instead of the file driver in ODI now? No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver. Are we generating XML out of flat files before we write it into a database? We don’t materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use XSD-derived nXSD schema as its type system. We use the nXSD schema because is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and enables users to share schemas across products. Is the nXSD file interchangeable with SOA Suite? Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format. Can I start the Native Format Builder from the ODI Studio? No, the Native Format Builder has to be started from a JDeveloper with BPEL instance. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors. When is the database data written back to the native file? Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading so that no unnecessary write-backs occur. Is the nXSD metadata part of the ODI Master or Work Repository? No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system. Where can I find sample nXSD files? The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
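Because the driver exposes the parsed file as ordinary relational tables, any JDBC client can query them once the data server works. The following is only a rough sketch: the connection URL is a placeholder, and the actual URL syntax and properties (native file, nXSD schema, root element, ro, dbprops and so on) should be taken from Appendix C of the Connectivity and Knowledge Modules Guide referenced above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ComplexFileQuerySample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - replace with the Complex File driver URL documented in Appendix C
        String url = "jdbc:<complex-file-driver-url>";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // The driver presents the parsed file as the tables described above (ROOT, HEADER, ADDRESS)
             ResultSet rs = stmt.executeQuery(
                 "SELECT NAME, STREET, CITY FROM ADDRESS ORDER BY ROWORDER")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ", " + rs.getString(2) + ", " + rs.getString(3));
            }
        }
    }
}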

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
In the previous post we discussed a "Hello World" example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end to end OSCH external tables working and now you want to ramp up the size of the loads.
Using OSCH External Tables for Access and Loading
OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:
SELECT * FROM my_hdfs_external_table;
or use the same SQL access to load a table in Oracle.
INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints.
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively.
/*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */
Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging, and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
Determine Your DOP
It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. On high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour with OSCH. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
Determining the Number of Location Files
Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).
Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.
Rule 3: The number of location files chosen should be a small multiple of the DOP.
Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
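Putting the pieces above together, a load session that asserts the agreed DOP and issues the hinted insert could look roughly like the following JDBC sketch. The connection details and table names are placeholders; the SQL statements are the ones shown earlier in this post.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class OschParallelLoad {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- adjust for your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {

                conn.setAutoCommit(false);

                // Assert the DOP agreed with the DBA (Rule 1); here it is 8.
                stmt.execute("ALTER SESSION FORCE PARALLEL DML PARALLEL 8");
                stmt.execute("ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8");

                // Direct path, parallel insert from the OSCH external table.
                stmt.executeUpdate(
                        "INSERT /*+ append pq_distribute(my_oracle_table, none) */ "
                        + "INTO my_oracle_table SELECT * FROM my_hdfs_external_table");

                // A direct path load must be committed before the session can query the table.
                conn.commit();
            }
        }
    }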
Determining the Number of HDFS Files
Let's start with the next rule and then explain it:
Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be of relatively the same size.
In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or the files are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS file open and close overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when the files got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
Next Steps
So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
They can be used together in a way that provides for more efficient OSCH loads and allows more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
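As an aside, the greedy divvy-up described earlier (the biggest remaining HDFS file goes to whichever location file currently carries the lightest workload) is easy to picture in a few lines of code. This is only a toy illustration of the idea, not the actual External Table tool:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class GreedyLocationFileBalancer {
        // fileSizes: sizes of the HDFS files to load; numLocationFiles: e.g. a small multiple of the DOP.
        public static List<List<Long>> balance(List<Long> fileSizes, int numLocationFiles) {
            List<List<Long>> locationFiles = new ArrayList<>();
            long[] workload = new long[numLocationFiles];
            for (int i = 0; i < numLocationFiles; i++) {
                locationFiles.add(new ArrayList<>());
            }

            // Biggest file first...
            List<Long> sorted = new ArrayList<>(fileSizes);
            sorted.sort(Collections.reverseOrder());

            // ...assigned to whichever location file currently has the least work.
            for (long size : sorted) {
                int lightest = 0;
                for (int i = 1; i < numLocationFiles; i++) {
                    if (workload[i] < workload[lightest]) {
                        lightest = i;
                    }
                }
                locationFiles.get(lightest).add(size);
                workload[lightest] += size;
            }
            return locationFiles;
        }
    }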

    Read the article

  • MVVM/WPF: DataTemplate is not changed in Wizard

    - by msfanboy
    Hello, I am wondering why my ContentControl (a HeaderedContentControl) does not change its DataTemplate when I press the previous/next button. While debugging everything seems OK, meaning I jump back and forth through the collection of wizard pages, but the first page is always shown and only its header text is visible, not the UserControl. What have I forgotten? using System; using System.Collections.Generic; using System.Linq; using System.Text; using GalaSoft.MvvmLight.Command; using System.Collections.ObjectModel; using System.Diagnostics; using System.ComponentModel; namespace TBM.ViewModel { public class WizardMainViewModel { WizardPageViewModelBase _currentPage; ReadOnlyCollection<WizardPageViewModelBase> _pages; RelayCommand _moveNextCommand; RelayCommand _movePreviousCommand; public WizardMainViewModel() { this.CurrentPage = this.Pages[0]; } public RelayCommand MoveNextCommand { get { return _moveNextCommand ?? (_moveNextCommand = new RelayCommand(() => this.MoveToNextPage(), () => this.CanMoveToNextPage)); } } public RelayCommand MovePreviousCommand { get { return _movePreviousCommand ?? (_movePreviousCommand = new RelayCommand( () => this.MoveToPreviousPage(), () => this.CanMoveToPreviousPage)); } } bool CanMoveToPreviousPage { get { return 0 < this.CurrentPageIndex; } } bool CanMoveToNextPage { get { return this.CurrentPage != null && this.CurrentPage.IsValid(); } } void MoveToPreviousPage() { this.CurrentPage = this.Pages[this.CurrentPageIndex - 1]; } void MoveToNextPage() { if (this.CurrentPageIndex < this.Pages.Count - 1) this.CurrentPage = this.Pages[this.CurrentPageIndex + 1]; } /// <summary> /// Returns the page ViewModel that the user is currently viewing. /// </summary> public WizardPageViewModelBase CurrentPage { get { return _currentPage; } private set { if (value == _currentPage) return; if (_currentPage != null) _currentPage.IsCurrentPage = false; _currentPage = value; if (_currentPage != null) _currentPage.IsCurrentPage = true; this.OnPropertyChanged("CurrentPage"); this.OnPropertyChanged("IsOnLastPage"); } } public bool IsOnLastPage { get { return this.CurrentPageIndex == this.Pages.Count - 1; } } /// <summary> /// Returns a read-only collection of all page ViewModels. /// </summary> public ReadOnlyCollection<WizardPageViewModelBase> Pages { get { return _pages ??
CreatePages(); } } ReadOnlyCollection<WizardPageViewModelBase> CreatePages() { WizardPageViewModelBase welcomePage = new WizardWelcomePageViewModel(); WizardPageViewModelBase schoolclassPage = new WizardSchoolclassSubjectPageViewModel(); WizardPageViewModelBase lessonPage = new WizardLessonTimesPageViewModel(); WizardPageViewModelBase timetablePage = new WizardTimeTablePageViewModel(); WizardPageViewModelBase finishPage = new WizardFinishPageViewModel(); var pages = new List<WizardPageViewModelBase>(); pages.Add(welcomePage); pages.Add(schoolclassPage); pages.Add(lessonPage); pages.Add(timetablePage); pages.Add(finishPage); return _pages = new ReadOnlyCollection<WizardPageViewModelBase>(pages); } int CurrentPageIndex { get { if (this.CurrentPage == null) { Debug.Fail("Why is the current page null?"); return -1; } return this.Pages.IndexOf(this.CurrentPage); } } public event PropertyChangedEventHandler PropertyChanged; void OnPropertyChanged(string propertyName) { PropertyChangedEventHandler handler = this.PropertyChanged; if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName)); } } } <UserControl x:Class="TBM.View.WizardMainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:ViewModel="clr-namespace:TBM.ViewModel" xmlns:View="clr-namespace:TBM.View" mc:Ignorable="d" > <UserControl.Resources> <DataTemplate DataType="{x:Type ViewModel:WizardWelcomePageViewModel}"> <View:WizardWelcomePageView /> </DataTemplate> <DataTemplate DataType="{x:Type ViewModel:WizardSchoolclassSubjectPageViewModel}"> <View:WizardSchoolclassSubjectPageView /> </DataTemplate> <DataTemplate DataType="{x:Type ViewModel:WizardLessonTimesPageViewModel}"> <View:WizardLessonTimesPageView /> </DataTemplate> <DataTemplate DataType="{x:Type ViewModel:WizardTimeTablePageViewModel}"> <View:WizardTimeTablePageView /> </DataTemplate> <DataTemplate DataType="{x:Type ViewModel:WizardFinishPageViewModel}"> <View:WizardFinishPageView /> </DataTemplate> <!-- This Style inherits from the Button style seen above. --> <Style BasedOn="{StaticResource {x:Type Button}}" TargetType="{x:Type Button}" x:Key="moveNextButtonStyle"> <Setter Property="Content" Value="Next" /> <Style.Triggers> <DataTrigger Binding="{Binding Path=IsOnLastPage}" Value="True"> <Setter Property="Content" Value="Finish}" /> </DataTrigger> </Style.Triggers> </Style> <ViewModel:WizardMainViewModel x:Key="WizardMainViewModelID" /> </UserControl.Resources> <Grid DataContext="{Binding ., Source={StaticResource WizardMainViewModelID}}" > <Grid.RowDefinitions> <RowDefinition Height="310*" /> <RowDefinition Height="51*" /> </Grid.RowDefinitions> <!-- CONTENT --> <Grid Grid.Row="0" Background="LightGoldenrodYellow"> <HeaderedContentControl Content="{Binding CurrentPage}" Header="{Binding Path=CurrentPage.DisplayName}" /> </Grid> <!-- NAVIGATION BUTTONS --> <Grid Grid.Row="1" Background="Aquamarine"> <StackPanel HorizontalAlignment="Center" Orientation="Horizontal"> <Button Command="{Binding MovePreviousCommand}" Content="Previous" /> <Button Command="{Binding MoveNextCommand}" Style="{StaticResource moveNextButtonStyle}" Content="Next" /> <Button Command="{Binding CancelCommand}" Content="Cancel" /> </StackPanel> </Grid> </Grid>

    Read the article

  • SVG as CSS background for website navigation-bar

    - by Irfan Mir
    I drew a small (horizontal / in width) svg to be the background of my website's navigation. My website's navigation takes place a 100% of the browser's viewport and I want the svg image to fill that 100% space. So, using css I set the background of the navigation (.nav) to nav.svg but then I saw (whenI opened the html file in a browser) that the svg was not the full-width of the nav, but at the small width I drew it at. How can I get the SVG to stretch and fill the entire width of the navigation (100% of the page) ? Here is the code for the html file where the navigation is in: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Distributed Horizontal Menu</title> <meta name="generator" content="PSPad editor, www.pspad.com"> <style type="text/css"> *{ margin:0; padding:0; } .nav { margin:0; padding:0; min-width:42em; width:100%; height:47px; overflow:hidden; background:transparent url(nav.svg) no-repeat; text-align:justify; font:bold 88%/1.1 verdana; } .nav li { display:inline; list-style:none; } .nav li.last { margin-right:100%; } .nav li a { display:inline-block; padding:13px 4px 0; height:31px; color:#fff; vertical-align:middle; text-decoration:none; } .nav li a:hover { color:#ff6; background:#36c; } @media screen and (max-width:322px){ /* styling causing first break will go here*/ /* but in the meantime, a test */ body{ background:#ff0000; } } </style></head><body> <ul class="nav"> <!--[test to comment out random items] <li>&nbsp; <a href="#">netscape&nbsp;9</a></li> [the spacing should be distributed]--> <li>&nbsp; <a href="#">internet&nbsp;explorer&nbsp;6-8</a></li> <li>&nbsp; <a href="#">opera&nbsp;10</a></li> <li>&nbsp; <a href="#">firefox&nbsp;3</a></li> <li>&nbsp; <a href="#">safari&nbsp;4</a></li> <li class="last">&nbsp; <a href="#">chrome&nbsp;2</a> &nbsp; &nbsp;</li> </ul> </body></html> and Here is the code for the svg: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="321.026px" height="44.398px" viewBox="39.487 196.864 321.026 44.398" enable-background="new 39.487 196.864 321.026 44.398" xml:space="preserve"> <linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="280" y1="316.8115" x2="280" y2="275.375" gradientTransform="matrix(1 0 0 1 -80 -77)"> <stop offset="0" style="stop-color:#5A4A6A"/> <stop offset="0.3532" style="stop-color:#605170"/> <stop offset="0.8531" style="stop-color:#726382"/> <stop offset="1" style="stop-color:#796A89"/> </linearGradient> <path fill="url(#SVGID_1_)" d="M360,238.721c0,1.121-0.812,2.029-1.812,2.029H41.813c-1.001,0-1.813-0.908-1.813-2.029v-39.316 c0-1.119,0.812-2.027,1.813-2.027h316.375c1.002,0,1.812,0.908,1.812,2.027V238.721z"/> <path opacity="0.1" fill="#FFFFFF" enable-background="new " d="M358.188,197.376H41.813c-1.001,0-1.813,0.908-1.813,2.028 v39.316c0,1.12,0.812,2.028,1.813,2.028h316.375c1,0,1.812-0.908,1.812-2.028v-39.316C360,198.284,359.189,197.376,358.188,197.376z M358.75,238.721c0,0.415-0.264,0.779-0.562,0.779H41.813c-0.3,0-0.563-0.363-0.563-0.779v-39.316c0-0.414,0.263-0.777,0.563-0.777 h316.375c0.301,0,0.562,0.363,0.562,0.777V238.721z"/> <path opacity="0.5" fill="#FFFFFF" enable-background="new " d="M358.188,197.376H41.813c-1.001,0-1.813,0.908-1.813,2.028v1.461 
c0-1.12,0.812-2.028,1.813-2.028h316.375c1.002,0,1.812,0.908,1.812,2.028v-1.461C360,198.284,359.189,197.376,358.188,197.376z"/> <g id="seperators"> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="104.5" y1="197.375" x2="104.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="103.5" y1="197.375" x2="103.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="105.5" y1="197.375" x2="105.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="167.5" y1="197.375" x2="167.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="166.5" y1="197.375" x2="166.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="168.5" y1="197.375" x2="168.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="231.5" y1="197.375" x2="231.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="232.5" y1="197.375" x2="232.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="230.5" y1="197.375" x2="230.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="295.5" y1="197.375" x2="295.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="294.5" y1="197.375" x2="294.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="296.5" y1="197.375" x2="296.5" y2="240.75"/> </g> <path fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" d="M360,238.721c0,1.121-0.812,2.029-1.812,2.029 H41.813c-1.001,0-1.813-0.908-1.813-2.029v-39.316c0-1.119,0.812-2.027,1.813-2.027h316.375c1.002,0,1.812,0.908,1.812,2.027 V238.721z"/> </svg> I appreciate and welcome any and all comments, help, and suggestions. Thanks in Advance!

    Read the article

  • Parser, send an argument/receive xml (receive already done/ send not)

    - by bruno
    public List<Afood> getFoodFromCat(String cat) { String resultado = ""; List<Afood> list = new ArrayList<Afood>(); try { URL xpto = new URL("http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php"); HttpURLConnection conn; conn = (HttpURLConnection) xpto.openConnection(); conn.setDoInput(true); conn.connect(); InputStream is = conn.getInputStream(); DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); try { DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); NodeList nl = doc.getElementsByTagName("item"); // resultado = String.valueOf(nl.getLength()); for (int i = 0; i < nl.getLength(); i++) { Node n = nl.item(i); Node childNode = n.getFirstChild(); while (childNode != null) { if (childNode.getNodeType() == Node.ELEMENT_NODE) { if (childNode.getNodeName().equalsIgnoreCase( "NAME_FOOD")) { Node valor = childNode.getFirstChild(); // resultado = resultado + valor.getNodeValue(); list.add(new Afood(valor.getNodeValue(), "", (int) Math.round(Math.random()), 1, 1, 1, 1, 1, 1)); } } childNode = childNode.getNextSibling(); } } return list; } catch (ParserConfigurationException e1) { e1.printStackTrace(); } catch (SAXException e1) { e1.printStackTrace(); } catch (IOException e1) { e1.printStackTrace(); } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return list; } I have this function that receives XML and copies it into the list. That part works. What I want to know is how to send a category (which I receive as an argument of the function) and receive only the food from that category. The server is ready to receive the category and to send the food for that category. What do I have to do to send the category and receive the correct XML?
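    One common approach (assuming the PHP script reads a query-string parameter; the parameter name cat below is only an assumption) is to append the category to the URL before opening the connection and leave the parsing loop unchanged:

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class FoodByCategoryClient {
            // Opens the same script as above, but with the category sent as a query parameter.
            public static InputStream openFoodStream(String cat) throws Exception {
                String base = "http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php";
                URL url = new URL(base + "?cat=" + URLEncoder.encode(cat, "UTF-8"));
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setDoInput(true);
                conn.connect();
                // Feed this stream to the existing DocumentBuilder code; the server is expected
                // to return only the <item> elements for the requested category.
                return conn.getInputStream();
            }
        }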

    Read the article

  • How to read the 3D chart data with directX?

    - by MemoryLeak
    I am reading an open source project, and I found a function which reads 3D data (let's say a character) from an obj file and draws it. The source code: List<Vertex3f> verts=new List<Vertex3f>(); List<Vertex3f> norms=new List<Vertex3f>(); Groups=new List<ToothGroup>(); //ArrayList ALf=new ArrayList();//faces always part of a group List<Face> faces=new List<Face>(); MemoryStream stream=new MemoryStream(buffer); using(StreamReader sr = new StreamReader(stream)){ String line; Vertex3f vertex; string[] items; string[] subitems; Face face; ToothGroup group=null; while((line = sr.ReadLine()) != null) { if(line.StartsWith("#")//comment || line.StartsWith("mtllib")//material library. We build our own. || line.StartsWith("usemtl")//use material || line.StartsWith("o")) {//object. There's only one object continue; } if(line.StartsWith("v ")) {//vertex items=line.Split(new char[] { ' ' }); vertex=new Vertex3f();//float[3]; if(flipHorizontally) { vertex.X=-Convert.ToSingle(items[1],CultureInfo.InvariantCulture); } else { vertex.X=Convert.ToSingle(items[1],CultureInfo.InvariantCulture); } vertex.Y=Convert.ToSingle(items[2],CultureInfo.InvariantCulture); vertex.Z=Convert.ToSingle(items[3],CultureInfo.InvariantCulture); verts.Add(vertex); continue; } Why does it need to read the data manually in DirectX? As far as I know, in XNA programming we just need to call a function to load the resource. Is it because in DirectX there is no built-in function to read the resource? If so, how should I prepare the 3D resource? In XNA we just need to draw the 3D model in other software and then export it, but what should I do in DirectX?

    Read the article

  • Problems doing asynch operations in C# using Mutex.

    - by firoso
    I've tried this MANY ways, here is the current iteration. I think I've just implemented this all wrong. What I'm trying to accomplish is to treat this Asynch result in such a way that until it returns AND I finish with my add-thumbnail call, I will not request another call to imageProvider.BeginGetImage. To Clarify, my question is two-fold. Why does what I'm doing never seem to halt at my Mutex.WaitOne() call, and what is the proper way to handle this scenario? /// <summary> /// re-creates a list of thumbnails from a list of TreeElementViewModels (directories) /// </summary> /// <param name="list">the list of TreeElementViewModels to process</param> public void BeginLayout(List<AiTreeElementViewModel> list) { // *removed code for canceling and cleanup from previous calls* // Starts the processing of all folders in parallel. Task.Factory.StartNew(() => { thumbnailRequests = Parallel.ForEach<AiTreeElementViewModel>(list, options, ProcessFolder); }); } /// <summary> /// Processes a folder for all of it's image paths and loads them from disk. /// </summary> /// <param name="element">the tree element to process</param> private void ProcessFolder(AiTreeElementViewModel element) { try { var images = ImageCrawler.GetImagePaths(element.Path); AsyncCallback callback = AddThumbnail; foreach (var image in images) { Console.WriteLine("Attempting Enter"); synchMutex.WaitOne(); Console.WriteLine("Entered"); var result = imageProvider.BeginGetImage(callback, image); } } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } /// <summary> /// Adds a thumbnail to the Browser /// </summary> /// <param name="result">an async result used for retrieving state data from the load task.</param> private void AddThumbnail(IAsyncResult result) { lock (Thumbnails) { try { Stream image = imageProvider.EndGetImage(result); string filename = imageProvider.GetImageName(result); string imagePath = imageProvider.GetImagePath(result); var imageviewmodel = new AiImageThumbnailViewModel(image, filename, imagePath); thumbnailHash[imagePath] = imageviewmodel; HostInvoke(() => Thumbnails.Add(imageviewmodel)); UpdateChildZoom(); //synchMutex.ReleaseMutex(); Console.WriteLine("Exited"); } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } }

    Read the article

  • ASP MVC 2: Error with dropdownlist on POST

    - by wh0emPah
    Okay i'm new to asp mvc2 and i'm experiencing some problems with the htmlhelper called Html.dropdownlistfor(); I want to present the user a list of days in the week. And i want the selected item to be bound to my model. I have created this little class to generate a list of days + a short notation which i will use to store it in the database. public static class days { public static List<Day> getDayList() { List<Day> daylist = new List<Day>(); daylist.Add(new Day("Monday", "MO")); daylist.Add(new Day("Tuesday", "TU")); // I left the other days out return daylist; } public class Dag{ public string DayName{ get; set; } public string DayShortName { get; set; } public Dag(string name, string shortname) { this.DayName= name; this.DayShortName = shortname; } } } I really have now idea if this is the correct way to do it Then i putted this in my controller: SelectList _list = new SelectList(Days.getDayList(), "DayShortName", "DayName"); ViewData["days"] = _list; return View(""); I have this line in my model public string ChosenDay { get; set; } And this in my view to display the list: <div class="editor-field"> <%: Html.DropDownListFor(model => model.ChosenDay, ViewData["days"] as SelectList, "--choose Day--")%> </div> Now this all works perfect. On the first visit, But then when i'm doing a [HttpPost] Which looks like the following: [HttpPost] public ActionResult Registreer(EventRegistreerViewModel model) { // I removed some unrelated code here // The code below executes when modelstate.isvalid == false SelectList _list = new SelectList(Days.getDayList(), "DayShortName", "DayName"); ViewData["days"] = _list; return View(model); } Then i will have the following exception thrown: The ViewData item that has the key 'ChosenDay' is of type 'System.String' but must be of type 'IEnumerable<SelectListItem>'. This errors gets thrown at the line in my view where i display the dropdown list. I really have no idea how to solve this and tried several solutions i found online. but none of them really worked. Ty in advance!

    Read the article

  • Java JFrame method pack() problem

    - by Oliver
    Hi, I have a frame with 4 JPanels and 1 JScrollPane; the 4 panels are in a BorderLayout at north, east, south, and west, and the scroll pane is in the center. I have been trying to get the pack method for the frame functioning, but when it runs you just get the title bar of the window. Any ideas? Thank you in advance. JFrame conFrame; JPanel panel1; JPanel panel2; JPanel panel3; JPanel panel4; JScrollPane listPane; JList list; Object namesAr[]; ... ... ... namesAr= namesA.toArray(); list = new JList(namesAr); list.setSelectionMode(ListSelectionModel.SINGLE_SELECTION); list.setLayoutOrientation(JList.HORIZONTAL_WRAP); list.setVisibleRowCount(-3); list.addListSelectionListener(this); listPane = new JScrollPane(list); panel1 = new JPanel(); panel2 = new JPanel(); panel3 = new JPanel(); panel4 = new JPanel(); conFrame.setLayout(new BorderLayout()); panel1.setPreferredSize(new Dimension(100, 100)); panel2.setPreferredSize(new Dimension(100, 100)); panel3.setPreferredSize(new Dimension(100, 100)); panel4.setPreferredSize(new Dimension(100, 100)); panel1.setBackground(Color.red); panel2.setBackground(Color.red); panel3.setBackground(Color.red); panel4.setBackground(Color.red); conFrame.pack(); conFrame.add(panel1, BorderLayout.NORTH); conFrame.add(panel2, BorderLayout.EAST); conFrame.add(panel3, BorderLayout.SOUTH); conFrame.add(panel4, BorderLayout.WEST); conFrame.add(listPane, BorderLayout.CENTER); conFrame.setVisible(true);
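    A likely cause, judging from the snippet above, is that pack() runs before any components are added, so the frame sizes itself around an empty content pane. Below is a minimal, self-contained sketch of the usual ordering (add everything first, then pack, then show); the data in the JList is made up for illustration:

        import java.awt.BorderLayout;
        import java.awt.Color;
        import java.awt.Dimension;
        import javax.swing.JFrame;
        import javax.swing.JList;
        import javax.swing.JPanel;
        import javax.swing.JScrollPane;
        import javax.swing.SwingUtilities;

        public class PackAfterAddingDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame conFrame = new JFrame("Demo");
                    conFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    conFrame.setLayout(new BorderLayout());

                    JList<String> list = new JList<>(new String[] { "one", "two", "three" });
                    JScrollPane listPane = new JScrollPane(list);

                    String[] positions = { BorderLayout.NORTH, BorderLayout.EAST,
                                           BorderLayout.SOUTH, BorderLayout.WEST };
                    for (String pos : positions) {
                        JPanel panel = new JPanel();
                        panel.setPreferredSize(new Dimension(100, 100));
                        panel.setBackground(Color.red);
                        conFrame.add(panel, pos);
                    }
                    conFrame.add(listPane, BorderLayout.CENTER);

                    // pack() only after all components are in place, so the frame
                    // can compute its size from their preferred sizes.
                    conFrame.pack();
                    conFrame.setVisible(true);
                });
            }
        }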

    Read the article

  • Problem with LINQ Generated class and IEnumerable to Excel

    - by mehmet6parmak
    Hi all, I wrote a method which exports values to excel file from an IEnumerable parameter. Method worked fine for my little test class and i was happy till i test my method with a LINQ class. Method: public static void IEnumerableToExcel<T>(IEnumerable<T> data, HttpResponse Response) { Response.Clear(); Response.ContentEncoding = System.Text.Encoding.Default; Response.Charset = "windows-1254"; // set MIME type to be Excel file. Response.ContentType = "application/vnd.ms-excel;charset=windows-1254"; // add a header to response to force download (specifying filename) Response.AddHeader("Content-Disposition", "attachment;filename=\"ResultFile.xls\""); Type typeOfT = typeof(T); List<string> result = new List<string>(); FieldInfo[] fields = typeOfT.GetFields(); foreach (FieldInfo info in fields) { result.Add(info.Name); } Response.Write(String.Join("\t", result.ToArray()) + "\n"); foreach (T t in data) { result.Clear(); foreach (FieldInfo f in fields) result.Add((typeOfT).GetField(f.Name).GetValue(t).ToString()); Response.Write(String.Join("\t", result.ToArray()) + "\n"); } Response.End(); } My Little Test Class: public class Mehmet { public string FirstName; public string LastName; } Two Usage: Success: Mehmet newMehmet = new Mehmet(); newMehmet.FirstName = "Mehmet"; newMehmet.LastName = "Altiparmak"; List<Mehmet> list = new List<Mehmet>(); list.Add(newMehmet); DeveloperUtility.Export.IEnumerableToExcel<Mehmet>(list, Response); Fail: ExtranetDataContext db = new ExtranetDataContext(); DeveloperUtility.Export.IEnumerableToExcel<User>(db.Users, Response); Fail Reason: When the code reaches the code block used to get the fields of the template(T), it can not find any field for the LINQ Generated User class. For my weird class it works fine. Why? Thanks...

    Read the article

< Previous Page | 371 372 373 374 375 376 377 378 379 380 381 382  | Next Page >