Search Results

Search found 1781 results on 72 pages for 'cluster'.


  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
      After reviewing the book in more detail, I say again that it is a great guide for sure. Lots of important concepts that can sometimes be somewhat confusing are deeply reviewed, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Functionality that is very desirable or heavily used is reviewed with examples and implementation best practices, including:
      - Data affinity
      - Querying
      - Pagination
      - Indexes
      - Aggregations
      - Event processing, listening and triggering
      - Data persistence
      - Security

      Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that offers a way to programmatically provide well-known addresses for connecting to the cluster, are mentioned in the book, because they enable new functionality for special configuration requirements, for example:
      - providing a way to switch extend nodes in cases of failure;
      - implementing custom load-balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors;
      - dynamically assigning TCP address and port settings when binding to a server socket.

      Another very interesting and useful section is the "Coherent Bank Sample Application", which is a great tutorial for understanding how Coherence interacts with third-party products and establishes a clear integration with them, including the use of non-Oracle products like MS Visual Studio.
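      To make the address-provider idea above concrete, here is a minimal sketch of a custom provider, assuming the Coherence 3.5 com.tangosol.net.AddressProvider interface (getNextAddress/accept/reject); the class name and proxy addresses are hypothetical, not taken from the book.

          import com.tangosol.net.AddressProvider;
          import java.net.InetSocketAddress;
          import java.util.Arrays;
          import java.util.Iterator;
          import java.util.List;

          // Hypothetical provider that hands out a fixed list of extend proxy
          // addresses, letting the client fail over to the next one on rejection.
          public class RoundRobinAddressProvider implements AddressProvider {
              private final List<InetSocketAddress> addresses = Arrays.asList(
                      new InetSocketAddress("proxy1.example.com", 9099),
                      new InetSocketAddress("proxy2.example.com", 9099));
              private Iterator<InetSocketAddress> iter = addresses.iterator();

              public InetSocketAddress getNextAddress() {
                  if (!iter.hasNext()) {
                      iter = addresses.iterator(); // reset for the next attempt
                      return null;                 // null = list exhausted for now
                  }
                  return iter.next();
              }

              public void accept() {
                  // connection succeeded; nothing to do in this sketch
              }

              public void reject(Throwable cause) {
                  // connection failed; the next getNextAddress() call tries the next node
              }
          }

      The provider class is then referenced from the <address-provider> element of the client's cache configuration instead of a hard-coded list of socket addresses.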

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-12

    - by Bob Rhubart
    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org)
    Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3

    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
    Wednesday, May 02, 2012, 8:00 AM – 4:30 PM, Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017

    Webcast Series: Data Warehousing Best Practices (event.on24.com)
    April 19, 2012 - Best Practices for Workload Management of a Data Warehouse on Oracle Exadata; May 10, 2012 - Best Practices for Extreme Data Warehouse Performance on Oracle Exadata

    Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration (event.on24.com)
    Date: Tuesday, April 24, 2012. Time: 10:00 AM PT / 1:00 PM ET. Speakers: Mala Narasimharajan, Senior Product Marketing Manager, Oracle Data Integration, Oracle; Bruce Tierney, Director of Product Marketing, Oracle SOA Suite, Oracle

    The Increasing Focus on Architecture (ArchBeat) (blogs.oracle.com)
    As a "third wave" of computing, cloud computing is changing how IT organizations and individuals within those organizations approach the creation of solutions.

    Updated SOA Documents now available in ITSO Reference Library (blogs.oracle.com)
    Nine updated documents have just been added to the IT Strategies from Oracle library, including SOA Practitioner Guides, SOA Reference Architectures, and SOA White Papers and Data Sheets. Access to all documents within the ITSO library is free to those with a free Oracle.com membership.

    WebLogic JMS Clustering and Spring | Rene van Wijk (middlewaremagic.com)
    Oracle ACE Rene van Wijk sets up a WebLogic cluster that includes a JMS environment, which will be used by Spring.

    Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri (blogs.oracle.com)
    Shub Lahiri shows how the pre-installed simulator that comes with the SOA Suite for Healthcare Integration pack can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports.

    In the cloud era, let's start calling IT what it is: 'Innovation Team' | Joe McKendrick (www.zdnet.com)
    Cloud, the third great shift in 50 years of computing, presents a golden opportunity for IT to get out in front and lead.

    Thought for the Day: "Why do we never have time to do it right, but always have time to do it over?" — Anonymous

    Read the article

  • Would You Swim Laps in Lake Baikal?

    - by rickramsey
    This is the lake where Yuli Vasiliev's countrymen swim laps. Yuli is one of my favorite OTN writers, not just because he really knows his stuff. Not just because his writing is clear and accurate. And not just because his English is better than the English of most native speakers. Yo, those are all good reasons. But it's the Lake Baikal thing. Yuli recently wrote two wicked good how-to's about Oracle VM Templates. You should read them. You might gain a gram of Yuli's respect. Two grams, if you can head-butt icebergs while you swim.

    How to Use Oracle VM Templates
    How to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. Also how to create a virtual machine based on that template, and how you can clone the template and change the clone's configuration.

    How to Use Oracle VM VirtualBox Templates
    How to use Oracle VM VirtualBox Templates in Oracle VM VirtualBox. Similar to the article above, but it describes how to download, install, and configure the templates within Oracle VM VirtualBox, instead of on bare metal.

    Other OTN technical articles by Yuli Vasiliev:
    - Retrieving, Transforming, and Consolidating Web Data with Oracle Database
    - Setting Up, Configuring, and Using a WebLogic Server Cluster
    - Cube Development for Beginners
    - How to XQuery Non-JDBC Sources from JDBC
    - Advanced Dimensional Design with Oracle Warehouse Builder
    - Using the JDBC Connectivity Layer in Oracle Warehouse Builder
    - High Performance Oracle JDBC Programming
    - Python Data Persistence with Oracle
    - Querying JPA Entities with JPQL and Native SQL

    - Rick

    Read the article

  • Application Scope vs. Static - Not Quite the Same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider: what's the difference between storing, say, a Map of reference information as a static as opposed to storing the same map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference; in both cases the information is confined to the current JVM and potentially visible to your app code (note that Application Scope is not magically propagated across a cluster; you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An application-scoped object, in JSF terms, is much more ring-fenced and is only visible to the web app itself, not to other web apps running on the same server, and not directly to the business model layer if that is running in the same VM. So given that I'm a big fan of coding applications to say what I mean, using Application Scope appeals because it explicitly states how I expect the data to be used, and provides a more explicit statement about visibility and indeed dependency, as I'd generally explicitly inject it where it is needed. Alternative viewpoints / thoughts are, as ever, welcomed...
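    To make the contrast concrete, here is a minimal sketch of the two approaches, assuming JSF 2.x managed-bean annotations; the class, bean, and map names are illustrative only.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.faces.bean.ApplicationScoped;
        import javax.faces.bean.ManagedBean;

        // Option 1: a static map. Visible to anything loaded by the same class
        // loader, including model code and potentially other apps in the container.
        class ReferenceData {
            static final Map<String, String> COUNTRIES = new ConcurrentHashMap<String, String>();
        }

        // Option 2: an application-scoped JSF bean. Ring-fenced to this web app
        // and explicitly injectable (e.g. via @ManagedProperty) where needed.
        @ManagedBean(name = "referenceData")
        @ApplicationScoped
        public class ReferenceDataBean {
            private final Map<String, String> countries = new ConcurrentHashMap<String, String>();

            public Map<String, String> getCountries() {
                return countries;
            }
        }

    The second form also keeps the dependency visible: a page or bean that needs the map declares it, rather than reaching out to a static from arbitrary code.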

    Read the article

  • Ubuntu Slow - What architecture does the Windows Installer install?

    - by Benjamin Yep
    I feel absolutely limited by using Windows, and I need to switch to a Unix environment. I once installed Red Hat on my lappie (screen + external monitor setup; 4GB RAM; x64; runs fast) and it worked fine, but I saw that the computer cluster that is the birthplace of my Unix knowledge switched to Ubuntu, so naturally I followed. To the point: when I installed Ubuntu onto my machine via the Windows installer, it ran quite slowly. Opening Firefox takes about 8-9 seconds, and it freezes up often, unable to handle its own background processes. I saw in a thread that, perhaps, it is running slow because the Windows installer is installing an x64 version. Of course, my computer has had no performance issues in the past (except that time with the trojans, but you know, no one is perfect ;) ). Anyway, I uninstalled Ubuntu, freeing up the space it had allocated, and continue to be sad, trapped in my MS world with only a buggy Cygwin. Any assistance is greatly appreciated! :) Thanks ~Ben

    Read the article

  • "Unable to install GRUB in /dev/sda" when installing GRUB

    - by vicban3d
    I recently bought a shiny new Lenovo Yoga 2 Pro and I want to dual-boot it with Ubuntu for studying purposes. Its built-in OS is Windows 8.1 and it has a 256GB SSD. I've made a separate 90GB partition just for Ubuntu and a live USB to install it. The first time, everything seemed to work great: I solved the wifi issue by blacklisting ideapad_laptop, the installation went flawlessly, and Ubuntu worked fine. When I got up the next morning and turned on my laptop, it booted into Windows right away without ever showing the GRUB menu. So I tried to reset, and checked my partitions with the Disk Manager, and everything looked fine. Since I couldn't find a solution online, I went ahead and formatted the partition to try and install again. This time, and every time since, the installation was aborted and I got a fatal error saying:

    Unable to install GRUB in /dev/sda
    Executing `grub-install /dev/sda` failed. This is a fatal error.

    Can anyone please suggest a solution to this problem? If any further information is needed I would be happy to provide it. Thanks.

    When installing, I get the following in the details:

    ubuntu kernel: [ 1946.372741] FAT-fs (sda2): error, fat_get_cluster: invalid cluster chain (i_pos 0).
    ubuntu grub-installer: error: Running 'grub-install --force failed.

    Read the article

  • How to do a 3-tier using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP web application to be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects and sits behind the firewall. Those objects then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I create my middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the DMZ first-tier web server. The second reason is scalability: having multiple servers on different tiers that can handle the requests. The last reason is deployment: it is easier if I can take one set of servers offline for testing before putting them back in production. Is there an open source project that shows how to do this? The only example I can find is a web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.

    Read the article

  • How should I create a mutable, varied jtree with arbitrary/generic category nodes?

    - by Pureferret
    Please note: I don't want coding help here, I'm on Programmers for a reason. I want to improve my program planning/writing skills, not (just) my understanding of Java. I'm trying to figure out how to make a tree which has an arbitrary category system, based on the skills listed for this LARP game here. My previous attempt had a bool for whether a skill was also a category. Trying to code around that was messy. Drawing out my tree, I noticed that only my 'leaves' were skills and I'd labeled the others as categories.

    Explanation of tree: the tree is 'born' with a set of hard-coded highest-level categories (Weapons, Physical and Mental, Medical, etc.). From this, the user needs to be able to add a skill. Ultimately they want to add 'One-handed Sword Specialisation', for instance. To do so, you'd ideally click 'add' with Weapons selected, then select One-handed from a combo box, then click add again and enter a name in a text field. Then click add again to add a 'level' or 'tier': first proficiency, then specialisation. Of course, if you want to buy a different skill, it's completely different, which is what I'm having trouble getting my head around, let alone programming in.

    What is a good system for describing this sort of tree in code? All the other JTree examples I've seen have some predictable pattern, and I don't want to have to code this all in 'literals'. Should I be using abstract classes? Interfaces? How can I make this sort of cluster of objects extensible when I add in other skills not listed above that behave differently? If there is not a good system to use, is there a good process for working out how to do this sort of thing?
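    Since the question explicitly asks for design direction rather than code, here is only a minimal sketch of one possible shape, with hypothetical class names: replace the "is this a category?" boolean with distinct node types, so the compiler enforces which nodes may have children.

        import javax.swing.tree.DefaultMutableTreeNode;

        // Hypothetical node hierarchy: the category/skill distinction lives in
        // the type system instead of a boolean flag on one node class.
        abstract class SkillTreeNode {
            final String name;
            SkillTreeNode(String name) { this.name = name; }
            abstract boolean allowsChildren();
            public String toString() { return name; }
        }

        class CategoryNode extends SkillTreeNode {
            CategoryNode(String name) { super(name); }
            boolean allowsChildren() { return true; }
        }

        class SkillNode extends SkillTreeNode {
            final int tier; // e.g. 1 = proficiency, 2 = specialisation
            SkillNode(String name, int tier) { super(name); this.tier = tier; }
            boolean allowsChildren() { return false; }
        }

        public class SkillTreeDemo {
            public static void main(String[] args) {
                DefaultMutableTreeNode weapons =
                        new DefaultMutableTreeNode(new CategoryNode("Weapons"), true);
                weapons.add(new DefaultMutableTreeNode(
                        new SkillNode("One-handed Sword Specialisation", 2), false));
                System.out.println(weapons.getFirstChild()); // prints the skill's name
            }
        }

    Behavioural differences between skills can then hang off the subclasses (or off interfaces they implement) rather than off flags.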

    Read the article

  • New SPC2 benchmark - the 7420 KILLS it!!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd among ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 . The only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle.

    So let's see: the 7420 is the fastest per $; the 7420 is the cheapest per MBPS; the 7420 has incredible built-in features, management services, analytics, and protocols; it's extremely stable and, as a cluster, has no single point of failure; and it won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things:

    1. Administrators' comfort level with older legacy systems.
    2. Politics.
    3. Past issues with Oracle Support.

    I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • eSTEP Newsletter November 2011 now available

    - by uwes
    Dear Partners,

    We would like to inform you that the November issue of our newsletter is now available. The issue contains information on the following topics:

    Notes from Corporate: Magic Quadrant for Enterprise Application Servers; Oracle Buys RightNow

    Technical Corner: Oracle Solaris 11 - The First Cloud OS; Oracle Solaris 10 8/11 now available; New RAC/Containers certifications; DTrace and Container for Oracle Linux; Oracle Enterprise Manager Ops Center released; News from the Oracle Solaris Cluster; SPARC - New roadmap; T-Series Benchmarks

    Learning & Events: eSTEP Events Schedule; Recently Delivered TechCasts; Delivered Campaigns in 2011

    How to...: About Oracle Solaris Containers; Detailed feature comparison between the different versions of Database 11g; Upgrade Advantage Program + table with examples; Sun Software Name ===> New Oracle Name; Oracle Linux and OVM Certification Search; TO YOUR ATTENTION - Repricing Servers and Xoptions

    You find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop.

    Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS.

    Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

    Read the article

  • System in low graphics, deleted linux, grub rescue, can't access windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and netbook. When I installed it on my brother's netbook, everything went horribly wrong, and now I fear the system is close to beyond repair. The problem was first that it said it did not have any space left (which seemed ridiculous since it had a lot). Then Ubuntu began booting into a "System is running in low graphics mode" error, which I tried to fix using all the tips I could find on here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure. Finally I gave up repairing Ubuntu and went for a reinstall. Shouldn't have done that! I read that I should simply open Ubuntu through a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7. If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are 2: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS and boot partition has a little red mark with a warning that 1 cluster is referenced multiple times; I don't know what that means. If I boot without the live USB I am sent directly into a grub rescue black screen where the computer will follow no orders. Please, I know that the easiest fix might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just access Windows? It is a Dell Inspiron 1018 mini netbook, so I have no CD drive and no Windows 7 installation CD.

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
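    As a concrete illustration of the first option, here is a minimal sketch of querying the NamedCache directly; the cache name, filter, and extractor method are illustrative only.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.filter.EqualsFilter;
        import java.util.Set;

        public class CacheQueryExample {
            public static void main(String[] args) {
                // Querying the cache itself returns its coherent, current view,
                // independent of what the write-behind queue has flushed to the DB.
                NamedCache orders = CacheFactory.getCache("orders");
                Set openOrders = orders.entrySet(new EqualsFilter("getStatus", "OPEN"));
                System.out.println("Open orders in cache: " + openOrders.size());
            }
        }

    Querying the database instead trades this coherence for the reach of SQL and third-party tools, with the version-skew caveats described above.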

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle, it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe!

    Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it:

    Best Practices for Building a Virtualized SPARC Computing Environment

    Inside you'll find recommendations for how and when to leverage technologies like:
    - SPARC T4
    - OVM for SPARC hypervisor (version 2.2 and newer)
    - Solaris 11
    - Ops Center 12c
    - ZFS Storage Appliance
    - Oracle network switches

    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:
    - Easy, GUI-based provisioning of new VMs
    - Automated HA failover in the event of physical server failures
    - Automatic load balancing across a cluster of VM hosts
    - Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.
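    Here is a minimal sketch of the PartitionedFilter workaround described above, restricting a query to one partition at a time; the cache name and filter are illustrative, and a real application might batch several partitions per query.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.DistributedCacheService;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.PartitionedFilter;
        import java.util.Set;

        public class PartitionedQueryExample {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("orders");
                Filter filter = new EqualsFilter("getStatus", "OPEN");

                DistributedCacheService service =
                        (DistributedCacheService) cache.getCacheService();
                int partitionCount = service.getPartitionCount();

                PartitionSet parts = new PartitionSet(partitionCount);
                for (int i = 0; i < partitionCount; i++) {
                    parts.add(i);
                    // Only entries in partition i are evaluated, bounding the
                    // intermediate result held on any one cache server.
                    Set entries = cache.entrySet(new PartitionedFilter(filter, parts));
                    // ... process this slice of the result ...
                    parts.remove(i);
                }
            }
        }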

    Read the article

  • Extended service set help [closed]

    - by Cygnus X
    Sorry, this is going to be long and rambly. At home my network is set up as such: I have a "master router" that handles DHCP; from there I have a basic 5-port switch, and coming off that, I have a second router that I put in access point mode so that I could have better wireless coverage in my house. I can't seem to access it through a web browser since the master router kicked in and assigned it an IP address. But I told you all that story so that I could tell you this story. At the company I work for, they are trying to set up wireless cameras. According to the manual for these, in order to connect the cameras to our network we need to get into the wireless router and associate the router with the cameras, but the problem is, the network is set up like my home network. There's a "master router", which is a Windows 2003 server, and then a wireless router in AP mode dishing out wireless access. I can't get into the router (by the way, I'm not the system admin for the company, nor do I know jack squat about Windows Server, nor can I easily get in contact with the system admin (quite the cluster f*k, eh?)). I've tried using Wireshark to find the router's IP, dug through ARP tables, tried plugging straight into the router, but I can't seem to get into the damn thing. Any ideas/suggestions would be greatly appreciated.

    Read the article

  • Oracle Support Training Schedule (November 2013)

    - by Steve He
      Oracle Support Training: the Oracle support team offers free webcast training to help customers work more effectively with Oracle Support and its tools. This month's sessions include:

      - November 7, 14:00 - My Oracle Support: an introductory session for users new to My Oracle Support.
      - November 13, 15:00 - Using RACCheck with RAC: a one-hour session introducing the RACCheck tool and how to use it, including its "11.2.0.3 Upgrade Readiness Feature".
      - November 21, 15:00 - Monitoring Exadata with Enterprise Manager 12c: a one-hour session on using Enterprise Manager Cloud Control to monitor Exadata components, including Storage Cells, Network Switches, Compute Nodes, and Cluster Database Services.

      All times are local; see the world clock. Visit the Support Training Community for the full schedule and registration.

    Read the article

  • WebLogic Server 12c Features for Mission-Critical Systems (Part 1)

    - by ???02
    WebLogic Server 12c, Oracle's Java EE 6-compatible application server, sits at the core of Oracle Fusion Middleware as part of its Cloud Application Foundation. This article (part 1 of 2) looks at capabilities in WebLogic Server 12c that support mission-critical, 24x365 systems.

    Active GridLink for RAC and web-session affinity. Mission-critical systems commonly pair the application server with Oracle Real Application Clusters (RAC), which clusters multiple database instances for availability and scalability. WebLogic Server's Active GridLink for RAC integrates its data sources with the RAC cluster: it responds to RAC node additions and failures, load-balances connections across RAC instances at runtime, and keeps XA transaction work affined to a single RAC instance. New in WebLogic Server 12c is web-session affinity: HTTP requests that belong to the same web session are routed to the same RAC instance where possible, avoiding unnecessary cross-instance coordination and improving performance without sacrificing failover.

    JDBC TLOG: the transaction log in the database. WebLogic Server records in-flight transaction state in a transaction log (TLOG), traditionally kept in a file store. After a failure, the TLOG is what allows in-flight transactions to be completed or rolled back during recovery, so protecting it is essential. WebLogic Server 12c can instead persist the TLOG in Oracle Database via JDBC. This places the TLOG under the database's backup and protection machinery, and with Oracle Data Guard the TLOG can be replicated to a standby site together with the application data, which strengthens disaster-recovery configurations.

    RESTful Management Services: monitoring over HTTP. WebLogic Server 12c adds RESTful Management Services, which expose server monitoring information over plain HTTP in HTML, JSON, or XML. Because the interface is ordinary HTTP (e.g. over TCP port 80), it passes through firewalls and can be consumed by lightweight clients such as smartphones, without a dedicated management console.

    Part 2 covers WebLogic Server 12c's Java EE 6 and Java SE 7 support.
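    As a small illustration of the RESTful Management Services idea, here is a sketch of polling a monitoring endpoint over plain HTTP from Java; the host, port, and /management/tenant-monitoring/servers path follow the 12c documentation but are assumptions here, and a real call would also need HTTP Basic authentication and the feature enabled on the domain.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class WlsRestMonitor {
            public static void main(String[] args) throws Exception {
                // Assumed endpoint exposing per-server state for the domain.
                URL url = new URL("http://adminhost:7001/management/tenant-monitoring/servers");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setRequestProperty("Accept", "application/json"); // or text/html, application/xml

                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()));
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line); // JSON describing each server's state
                }
                in.close();
            }
        }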

    Read the article

  • Case Study: Applying Oracle ASM to a Mission-Critical System

    - by Kumiko Fujita
    A field report on applying Oracle Automatic Storage Management (ASM) to a mission-critical system: an Oracle Database platform required to run 24x365, built on a 2-node RAC.

    Environment:
    - OS: UNIX
    - Database: Oracle Database 11g Enterprise Edition 11.2.0.1, Real Application Clusters (2-node RAC), Partitioning Option
    - Storage: approximately 2.5TB of SAN storage, managed with ASM

    ASM disk group layout:

    Disk group | Redundancy | AU size | Size | Contents
    DG01 | NORMAL | 4MB | 5GB | OCR and voting disks
    DG02 | - | 4MB | 60GB | Online REDO logs and spfile
    DG03 | - | 4MB | 60GB | Online REDO logs
    DG04 | - | 4MB | 178GB | -
    DG05 | - | 4MB | 2670GB | -

    The underlying LUNs are configured as RAID 0+1. (For background on ASM layout practices, see http://otndnld.oracle.co.jp/products/database/oracle10g/availability/pdf/asm_best_practices0907-fujitsu_jp.pdf .)

    The online REDO logs are multiplexed across the two disk groups DG02 and DG03 so that losing one set of disks cannot destroy both copies. Compared with raw devices, ASM removes much of the manual device management DBAs previously had to perform while keeping comparable I/O characteristics, and it shields both DBAs and storage administrators from per-file placement decisions. Since Oracle Database 11gR2, ASM ships with Clusterware as part of Grid Infrastructure, and the OCR (Oracle Cluster Registry) itself can now be stored in ASM.

    Read the article

  • Oracle DB 11gR2: Cut Operations/Management Costs with New Features!

    - by Yuichi.Hayashi
    Operations and maintenance are said to account for some 40-60% of total system cost, so reducing the day-to-day operational workload is central to reducing TCO (Total Cost of Ownership). Oracle's answer is the Grid architecture: pool many servers and disks and manage them as one. On the server side this is Oracle Real Application Clusters (RAC); on the storage side, Oracle Automatic Storage Management (ASM). Oracle Database 11g R2 adds features to both that directly cut operating effort; three of them are introduced here.

    SCAN. Single Client Access Name (SCAN) is a new feature of Oracle Real Application Clusters (RAC) 11g R2. SCAN gives the whole cluster a single network name, so clients no longer need per-node virtual IPs (VIPs) in their connection definitions. Nodes can then be added to or removed from the cluster without reconfiguring clients, which keeps connection management simple as the grid grows and reduces TCO.
    See: Oracle Database 11gR2 Real Application Clusters; Oracle Real Application Clusters 11g Release 2 SCAN.

    ACFS. ASM Cluster File System (ACFS) is a new feature of Automatic Storage Management (ASM) 11g R2. ASM follows the S.A.M.E. (Stripe And Mirror Everything) philosophy and, since 10g, has simplified and optimized the management of database files. With 11g R2, ACFS extends these benefits to general-purpose files outside the database, providing:
    - a cluster file system shared across nodes;
    - dynamic online resizing;
    - snapshots;
    - unified management of database and non-database files.
    This lets one mechanism manage storage end to end, instead of combining ASM with a separate third-party cluster file system.
    See: Oracle Database 11gR2 Automatic Storage Management; Oracle Database 11g Release 2 Automatic Storage Management documentation.

    Resource Manager. Oracle Database Resource Manager gains new capabilities in 11g R2. When several databases or workloads are consolidated onto a single server, one runaway workload can monopolize the CPU and starve the rest. With 11g R2, the CPU_COUNT initialization parameter can cap the number of CPUs visible to each instance (instance caging), and resource plans divide CPU among workloads within an instance, so consolidated environments stay predictable.
    See: Oracle Database 11gR2 Resource Manager documentation.
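    To show what SCAN changes for client code, here is a sketch of a JDBC connection through a single SCAN address; the SCAN name, service name, and credentials are placeholders, and the Oracle JDBC driver must be on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ScanConnectExample {
            public static void main(String[] args) throws Exception {
                // One logical address for the whole cluster: adding or removing
                // RAC nodes never requires editing this URL.
                String url = "jdbc:oracle:thin:@//cluster-scan.example.com:1521/orcl_service";
                Connection conn = DriverManager.getConnection(url, "app_user", "app_password");

                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual");
                while (rs.next()) {
                    System.out.println("Connected, server time: " + rs.getString(1));
                }
                conn.close();
            }
        }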

    Read the article

  • Android emulator always stops at "waiting for HOME..."

    - by wuwupp
    Hi there, I did a fresh install of Eclipse, the JDK, and Android SDK 1.5 on WinXP, but when I run the "hello world" app, the emulator always stops at the "android" loading screen. In the Eclipse console it shows "waiting for HOME..." and the DDMS LogCat shows the following messages, including some errors and warnings. So, what's wrong in my case? I have googled lots of results, but none of them helped. Please help me. Many thanks.

    06-13 00:07:54.323: INFO/DEBUG(551): debuggerd: Jun 30 2009 17:00:51 06-13 00:07:54.383: INFO/vold(550): Android Volume Daemon version 2.0 06-13 00:07:54.724: ERROR/flash_image(556): can't find recovery partition 06-13 00:07:55.223: DEBUG/qemud(558): entering main loop 06-13 00:07:55.323: DEBUG/qemud(558): multiplexer_handle_control: unknown control message (18 bytes): 'ko:unknown command' 06-13 00:07:55.493: INFO/vold(550): New MMC card 'SU02G' (serial 1012966) added @ /devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118 06-13 00:07:55.773: INFO/vold(550): Disk (blkdev 179:0), 262144 secs (128 MB) 0 partitions 06-13 00:07:55.773: INFO/vold(550): New blkdev 179.0 on media SU02G, media path /devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118, Dpp 0 06-13 00:07:55.814: INFO/vold(550): Evaluating dev '/devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118/block/mmcblk0' for mountable filesystems for '/sdcard' 06-13 00:07:56.014: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 06-13 00:07:56.014: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test2' (m) 06-13 00:07:56.073: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 06-13 00:07:56.073: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test' (m) 06-13 00:07:56.073: DEBUG/vold(550): Bootstrapping complete 06-13 00:07:56.743: INFO//system/bin/dosfsck(550): dosfsck 3.0.1 (23 Nov 2008) 06-13 00:07:56.753: INFO//system/bin/dosfsck(550): dosfsck 3.0.1, 23 Nov 2008, FAT32, LFN 06-13 00:07:56.783: INFO//system/bin/dosfsck(550): Checking we can access the last sector of the filesystem 06-13 00:07:56.893: INFO//system/bin/dosfsck(550): Boot sector contents: 06-13 00:07:56.924: INFO//system/bin/dosfsck(550): System ID "MSWIN4.1" 06-13 00:07:56.934: INFO//system/bin/dosfsck(550): Media byte 0xf8 (hard disk) 06-13 00:07:56.953: INFO//system/bin/dosfsck(550): 512 bytes per logical sector 06-13 00:07:56.974: INFO//system/bin/dosfsck(550): 512 bytes per cluster 06-13 00:07:57.005: INFO//system/bin/dosfsck(550): 32 reserved sectors 06-13 00:07:57.013: INFO//system/bin/dosfsck(550): First FAT starts at byte 16384 (sector 32) 06-13 00:07:57.013: INFO//system/bin/dosfsck(550): 2 FATs, 32 bit entries 06-13 00:07:57.023: INFO//system/bin/dosfsck(550): 1040384 bytes per FAT (= 2032 sectors) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): Root directory start at cluster 2 (arbitrary size) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): Data area starts at byte 2097152 (sector 4096) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): 258048 data clusters (132120576 bytes) 06-13 00:07:57.103: INFO//system/bin/dosfsck(550): 9 sectors/track, 2 heads 06-13 00:07:57.103: INFO//system/bin/dosfsck(550): 0 hidden sectors 06-13 00:07:57.123: INFO//system/bin/dosfsck(550): 262144 sectors total 06-13 00:07:57.313: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:07:57.313: DEBUG/qemud(558): created client 0xe078 listening on fd 8 06-13 00:07:57.313: DEBUG/qemud(558): fdhandler_event: disconnect on fd 8 06-13
00:07:57.623: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:07:57.623: DEBUG/qemud(558): created client 0xf028 listening on fd 8 06-13 00:07:57.643: DEBUG/qemud(558): client_fd_receive: attempting registration for service 'gsm' 06-13 00:07:57.763: DEBUG/qemud(558): client_fd_receive: - received channel id 1 06-13 00:08:12.553: INFO//system/bin/dosfsck(550): Checking for unused clusters. 06-13 00:08:13.483: INFO//system/bin/dosfsck(550): Checking free cluster summary. 06-13 00:08:13.643: DEBUG/AndroidRuntime(553): AndroidRuntime START <<<<<<<<<<<<<< 06-13 00:08:13.705: DEBUG/AndroidRuntime(553): CheckJNI is ON 06-13 00:08:13.793: INFO//system/bin/dosfsck(550): /dev/block//vold/179:0: 0 files, 1/258048 clusters 06-13 00:08:14.063: INFO/logwrapper(550): /system/bin/dosfsck terminated by exit(0) 06-13 00:08:14.143: DEBUG/vold(550): Filesystem check completed OK 06-13 00:08:14.683: INFO/vold(550): Sucessfully mounted vfat filesystem 179:0 on /sdcard (safe-mode on) 06-13 00:08:17.023: INFO/(554): ServiceManager: 0xac38 06-13 00:08:17.883: INFO/AudioFlinger(554): AudioFlinger's thread ready to run for output 0 06-13 00:08:18.163: INFO/CameraService(554): CameraService started: pid=554 06-13 00:08:21.824: DEBUG/AndroidRuntime(553): --- registering native functions --- 06-13 00:08:27.813: INFO/Zygote(553): Preloading classes... 06-13 00:08:27.994: DEBUG/dalvikvm(553): GC freed 764 objects / 42216 bytes in 88ms 06-13 00:08:30.234: DEBUG/dalvikvm(553): GC freed 278 objects / 17160 bytes in 48ms 06-13 00:08:33.094: DEBUG/dalvikvm(553): GC freed 208 objects / 12696 bytes in 44ms 06-13 00:08:34.343: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.803: DEBUG/dalvikvm(553): Added shared lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.903: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.903: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.003: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:36.003: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.215: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:36.244: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.455: DEBUG/dalvikvm(553): GC freed 462 objects / 29144 bytes in 70ms 06-13 00:08:44.123: DEBUG/dalvikvm(553): GC freed 3584 objects / 171648 bytes in 125ms 06-13 00:09:10.473: DEBUG/dalvikvm(553): GC freed 11329 objects / 400856 bytes in 196ms 06-13 00:09:17.373: DEBUG/dalvikvm(553): GC freed 10472 objects / 438272 bytes in 199ms 06-13 00:09:24.563: DEBUG/dalvikvm(553): GC freed 10975 objects / 459800 bytes in 202ms 06-13 00:09:46.403: DEBUG/dalvikvm(553): GC freed 14372 objects / 506896 bytes in 252ms 06-13 00:09:53.793: DEBUG/dalvikvm(553): GC freed 11314 objects / 481360 bytes in 215ms 06-13 00:09:57.743: DEBUG/dalvikvm(553): GC freed 5928 objects / 248640 bytes in 195ms 06-13 00:10:01.324: DEBUG/dalvikvm(553): GC freed 349 objects / 37032 bytes in 190ms 06-13 00:10:05.253: DEBUG/dalvikvm(553): GC freed 778 objects / 48376 bytes in 217ms 06-13 00:10:06.564: DEBUG/dalvikvm(553): GC freed 321 objects / 37288 bytes in 219ms 06-13 00:10:08.194: DEBUG/dalvikvm(553): GC freed 477 objects / 29584 bytes in 212ms 06-13 00:10:08.663: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libwebcore.so 0x0 06-13 
00:10:09.743: DEBUG/dalvikvm(553): Added shared lib /system/lib/libwebcore.so 0x0 06-13 00:10:11.634: DEBUG/dalvikvm(553): GC freed 441 objects / 26224 bytes in 236ms 06-13 00:10:12.893: DEBUG/dalvikvm(553): GC freed 506 objects / 41464 bytes in 235ms 06-13 00:10:14.153: DEBUG/dalvikvm(553): GC freed 537 objects / 38832 bytes in 239ms 06-13 00:10:15.883: DEBUG/dalvikvm(553): GC freed 342 objects / 22552 bytes in 248ms 06-13 00:10:17.124: DEBUG/dalvikvm(553): GC freed 338 objects / 18736 bytes in 264ms 06-13 00:10:18.523: DEBUG/dalvikvm(553): GC freed 629 objects / 32136 bytes in 260ms 06-13 00:10:38.933: DEBUG/dalvikvm(553): GC freed 14257 objects / 497280 bytes in 368ms 06-13 00:10:46.453: DEBUG/dalvikvm(553): GC freed 11164 objects / 469576 bytes in 360ms 06-13 00:10:52.973: DEBUG/dalvikvm(553): GC freed 7134 objects / 311432 bytes in 339ms 06-13 00:10:55.595: DEBUG/dalvikvm(553): GC freed 752 objects / 43224 bytes in 520ms 06-13 00:10:56.863: DEBUG/dalvikvm(553): GC freed 598 objects / 31496 bytes in 307ms 06-13 00:10:58.543: DEBUG/dalvikvm(553): GC freed 413 objects / 26336 bytes in 355ms 06-13 00:10:59.263: INFO/Zygote(553): ...preloaded 1166 classes in 151403ms. 06-13 00:10:59.683: DEBUG/dalvikvm(553): GC freed 313 objects / 19952 bytes in 343ms 06-13 00:10:59.793: INFO/Zygote(553): Preloading resources... 06-13 00:11:00.683: DEBUG/dalvikvm(553): GC freed 54 objects / 11248 bytes in 340ms 06-13 00:11:05.723: DEBUG/dalvikvm(553): GC freed 337 objects / 15008 bytes in 317ms 06-13 00:11:08.703: DEBUG/dalvikvm(553): GC freed 280 objects / 11768 bytes in 312ms 06-13 00:11:09.303: INFO/Zygote(553): ...preloaded 48 resources in 9513ms. 06-13 00:11:09.795: INFO/Zygote(553): ...preloaded 15 resources in 454ms. 06-13 00:11:10.303: DEBUG/dalvikvm(553): GC freed 118 objects / 8616 bytes in 420ms 06-13 00:11:10.913: DEBUG/dalvikvm(553): GC freed 205 objects / 8104 bytes in 308ms 06-13 00:11:11.344: DEBUG/dalvikvm(553): GC freed 36 objects / 1400 bytes in 320ms 06-13 00:11:11.543: INFO/dalvikvm(553): Splitting out new zygote heap 06-13 00:11:12.973: INFO/dalvikvm(553): System server process 585 has been created 06-13 00:11:13.336: INFO/Zygote(553): Accepting command socket connections 06-13 00:11:14.963: INFO/jdwp(585): received file descriptor 10 from ADB 06-13 00:11:16.843: WARN/System.err(585): Can't dispatch DDM chunk 46454154: no handler defined 06-13 00:11:16.953: WARN/System.err(585): Can't dispatch DDM chunk 4d505251: no handler defined 06-13 00:11:17.763: DEBUG/dalvikvm(585): Trying to load lib /system/lib/libandroid_servers.so 0x0 06-13 00:11:19.714: DEBUG/dalvikvm(585): Added shared lib /system/lib/libandroid_servers.so 0x0 06-13 00:11:20.123: INFO/sysproc(585): Entered system_init() 06-13 00:11:20.223: INFO/sysproc(585): ServiceManager: 0x1017b8 06-13 00:11:20.359: INFO/SurfaceFlinger(585): SurfaceFlinger is starting 06-13 00:11:20.493: INFO/SurfaceFlinger(585): SurfaceFlinger's main thread ready to run. Initializing graphics H/W... 
06-13 00:11:20.634: ERROR/MemoryHeapBase(585): error opening /dev/pmem: No such file or directory 06-13 00:11:20.704: ERROR/SurfaceFlinger(585): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 06-13 00:11:22.013: ERROR/GLLogger(585): couldn't load library (Cannot find library) 06-13 00:11:22.103: INFO/SurfaceFlinger(585): EGL informations: 06-13 00:11:22.113: INFO/SurfaceFlinger(585): # of configs : 6 06-13 00:11:22.123: INFO/SurfaceFlinger(585): vendor : Android 06-13 00:11:22.123: INFO/SurfaceFlinger(585): version : 1.31 Android META-EGL 06-13 00:11:22.134: INFO/SurfaceFlinger(585): extensions: 06-13 00:11:22.134: INFO/SurfaceFlinger(585): Client API: OpenGL ES 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): using (fd=22) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): id = 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): xres = 320 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): yres = 480 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): xres_virtual = 320 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): yres_virtual = 960 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): bpp = 16 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): r = 11:5 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): g = 5:6 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): b = 0:5 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): width = 49 mm (165.877548 dpi) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): height = 74 mm (164.756760 dpi) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): refresh rate = 60.00 Hz 06-13 00:11:22.533: WARN/HAL(585): load: module=/system/lib/hw/copybit.goldfish.so error=Cannot find library 06-13 00:11:22.543: WARN/HAL(585): load: module=/system/lib/hw/copybit.default.so error=Cannot find library 06-13 00:11:22.553: WARN/SurfaceFlinger(585): ro.sf.lcd_density not defined, using 160 dpi by default. 06-13 00:11:22.644: INFO/SurfaceFlinger(585): OpenGL informations: 06-13 00:11:22.654: INFO/SurfaceFlinger(585): vendor : Android 06-13 00:11:22.654: INFO/SurfaceFlinger(585): renderer : Android PixelFlinger 1.0 06-13 00:11:22.654: INFO/SurfaceFlinger(585): version : OpenGL ES-CM 1.0 06-13 00:11:22.654: INFO/SurfaceFlinger(585): extensions: GL_OES_byte_coordinates GL_OES_fixed_point GL_OES_single_precision GL_OES_read_format GL_OES_compressed_paletted_texture GL_OES_draw_texture GL_OES_matrix_get GL_OES_query_matrix GL_ARB_texture_compression GL_ARB_texture_non_power_of_two GL_ANDROID_direct_texture GL_ANDROID_user_clip_plane GL_ANDROID_vertex_buffer_object GL_ANDROID_generate_mipmap 06-13 00:11:22.673: WARN/HAL(585): load: module=/system/lib/hw/copybit.goldfish.so error=Cannot find library 06-13 00:11:22.683: WARN/HAL(585): load: module=/system/lib/hw/copybit.default.so error=Cannot find library 06-13 00:11:22.703: WARN/HAL(585): load: module=/system/lib/hw/overlay.goldfish.so error=Cannot find library 06-13 00:11:22.713: WARN/HAL(585): load: module=/system/lib/hw/overlay.default.so error=Cannot find library 06-13 00:11:23.663: INFO/sysproc(585): System server: starting Android runtime. 06-13 00:11:23.733: INFO/sysproc(585): System server: starting Android services. 06-13 00:11:23.953: INFO/SystemServer(585): Entered the Android system server! 06-13 00:11:24.303: INFO/sysproc(585): System server: entering thread pool. 
06-13 00:11:24.763: ERROR/GLLogger(585): couldn't load library (Cannot find library) 06-13 00:11:25.893: INFO/ARMAssembler(585): generated scanline__00000077:03545404_00000A01_00000000 [ 30 ipp] (51 ins) at [0x18f708:0x18f7d4] in 72796961 ns 06-13 00:11:26.193: INFO/SystemServer(585): Starting Power Manager. 06-13 00:11:26.953: INFO/SystemServer(585): Starting Activity Manager. 06-13 00:11:31.733: INFO/SystemServer(585): Starting telephony registry 06-13 00:11:32.054: INFO/SystemServer(585): Starting Package Manager. 06-13 00:11:32.553: INFO/Installer(585): connecting... 06-13 00:11:32.914: INFO/installd(555): new connection 06-13 00:11:35.193: INFO/PackageManager(585): Got library android.awt in /system/framework/android.awt.jar 06-13 00:11:35.313: INFO/PackageManager(585): Got library android.test.runner in /system/framework/android.test.runner.jar 06-13 00:11:35.324: INFO/PackageManager(585): Got library com.android.im.plugin in /system/framework/com.android.im.plugin.jar 06-13 00:11:44.643: DEBUG/PackageManager(585): Scanning app dir /system/framework 06-13 00:11:49.513: DEBUG/PackageManager(585): Scanning app dir /system/app 06-13 00:11:51.493: DEBUG/dalvikvm(585): GC freed 6088 objects / 251280 bytes in 1237ms 06-13 00:12:27.497: DEBUG/dalvikvm(585): GC freed 3435 objects / 216088 bytes in 792ms 06-13 00:12:29.213: DEBUG/PackageManager(585): Scanning app dir /data/app 06-13 00:12:30.223: DEBUG/PackageManager(585): Scanning app dir /data/app-private 06-13 00:12:30.425: INFO/PackageManager(585): Time to scan packages: 47.319 seconds 06-13 00:12:30.703: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.providers.contacts 06-13 00:12:30.803: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.cp in package com.android.providers.contacts 06-13 00:12:30.853: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.development 06-13 00:12:30.913: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.ALL_SERVICES in package com.android.development 06-13 00:12:31.133: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.YouTubeUser in package com.android.development 06-13 00:12:31.143: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD in package com.android.development 06-13 00:12:31.234: WARN/PackageManager(585): Unknown permission com.google.android.providers.gmail.permission.WRITE_GMAIL in package com.android.settings 06-13 00:12:31.254: WARN/PackageManager(585): Unknown permission com.google.android.providers.gmail.permission.READ_GMAIL in package com.android.settings 06-13 00:12:31.303: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.settings 06-13 00:12:31.683: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.browser 06-13 00:12:31.803: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.mail in package com.android.contacts 06-13 00:12:34.603: DEBUG/dalvikvm(585): GC freed 2851 objects / 161304 bytes in 845ms 06-13 00:12:35.403: INFO/SystemServer(585): Starting Content Manager. 
06-13 00:12:39.954: WARN/ActivityManager(585): Unable to start service Intent { action=android.accounts.IAccountsService comp={com.google.android.googleapps/com.google.android.googleapps.GoogleLoginService} }: not found 06-13 00:12:40.063: WARN/AccountMonitor(585): Couldn't connect to Intent { action=android.accounts.IAccountsService comp={com.google.android.googleapps/com.google.android.googleapps.GoogleLoginService} } (Missing service?) 06-13 00:12:40.253: INFO/SystemServer(585): Starting System Content Providers. 06-13 00:12:40.553: INFO/ActivityThread(585): Publishing provider settings: com.android.providers.settings.SettingsProvider 06-13 00:12:41.433: INFO/ActivityThread(585): Publishing provider sync: android.content.SyncProvider 06-13 00:12:41.683: INFO/SystemServer(585): Starting Battery Service. 06-13 00:12:42.293: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/usb/online' 06-13 00:12:42.433: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_vol' 06-13 00:12:42.543: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_temp' 06-13 00:12:42.933: INFO/SystemServer(585): Starting Hardware Service. 06-13 00:12:43.398: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:12:43.623: DEBUG/qemud(558): created client 0x10fd8 listening on fd 11 06-13 00:12:43.743: DEBUG/qemud(558): client_fd_receive: attempting registration for service 'hw-control' 06-13 00:12:43.873: DEBUG/qemud(558): client_fd_receive: - received channel id 2 06-13 00:15:20.695: WARN/SurfaceFlinger(585): executeScheduledBroadcasts() skipped, contention on the client. We'll try again later...

    Read the article

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA)

    The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; it is also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts. The Control Center Agent provides fundamental infrastructure for the heterogeneous, Code Template-based mapping support and Web services-related features of OWB in this release.

    In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default, runs in the built-in OC4J that is bundled in the Oracle home. Beyond that, you also have the ability to install the Control Center Agent in an Oracle Application Server installation. In this article, you will find step-by-step instructions for installing the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks:

    Task 1: Install and Configure the Application Server
    Task 2: Deploy the Control Center Agent to the Application Server
    Task 3: Optional Configuration Tasks

    Task 1: Install and Configure the Application Server

    Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and run in the Application Server successfully. The essential configuration tasks are outlined below:

    · Modify the OC4J Startup Script
    · Set up Control Center Agent Server Side Logging
    · Set up Audit Table Data Source
    · Copy ct_permissions.properties File
    · Set up Security Roles for Control Center Agent
    · Create JMS Queues
    · Install JDBC Drivers to OC4J

    Modify the OC4J Startup Script

    The OC4J startup script, opmn.xml, is located in the Application Server configuration directory, $AS_HOME/opmn/conf. $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor and alter its contents so that:

    · MaxPermSize is set to 128M. This ensures that you allocate enough PermGen space to OC4J to run the Control Center Agent, and prevents java.lang.OutOfMemoryError when running the agent.
    · The python.path property sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed.
    · km_security_needed determines whether restrictions are applied to the kinds of operating system commands that OWB Code Template scripts executed by the Control Center Agent may run. Setting km_security_needed to "true" enforces this restriction; setting it to "false" removes it.

    Set up Control Center Agent Server Side Logging

    Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add the jrt-internal-log-handler entry to the log-handlers section; this is the handler used by the Control Center Agent runtime logger to create log files. Then add a matching logger entry to the loggers section to create the logger for Control Center Agent runtime auditing.
Set up Audit Table Data Source

To enable Audit Table logging, a managed data source and connection pool need to be set up before the Control Center Agent is deployed. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor and define the audit data source shown below:

<managed-data-source name="AuditDS" connection-pool-name="OWBSYS Audit Connection Pool" jndi-name="jdbc/AuditDS"/>
<connection-pool name="OWBSYS Audit Connection Pool">
  <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
    user="owbsys_audit" password="owbsys_audit"
    url="jdbc:oracle:thin:@//localhost:1521/ORCL"/>
</connection-pool>

Copy ct_permissions.properties File

The ct_permissions.properties file can be obtained from the $OWB_HOME/owb/jrt/config/ directory. You need to copy it to the $AS_HOME/j2ee/home/config directory. This properties file takes effect when the km-security setting is set to true in the Control Center Agent. By default, ALLOWED_CMD is commented out in the ct_permissions.properties file, which prevents all system commands from being invoked from scripts executed in the Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, uncomment ALLOWED_CMD and add the permitted system commands to it.

Set up Security Roles for Control Center Agent

You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security and click the task icon next to Security Providers.
2. On the Security Providers page, click the “Instance Level Security” button. On the Instance Level Security page, go to the “Realms” tab. You will see a row for the default realm “jazn.com” in the results table; it has a “Roles” column and a “Users” column. Click the number in the “Roles” column. The “Roles” page displays all the roles available for the realm. Click the “Create” button to create a new role, “OWB_J2EE_EXECUTOR”.
3. On the Add Role screen, enter the name OWB_J2EE_EXECUTOR and click OK.
4. Following the same steps as before, create a new role “OWB_J2EE_OPERATOR”.
5. Assign the roles “oc4j-administrators” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_OPERATOR” by moving them from “Available Roles”, and click “OK” to save.
6. Go back to the Instance Level Security page and create a new role “OWB_J2EE_ADMINISTRATOR”.
7. Assign the roles “OWB_J2EE_OPERATOR” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_ADMINISTRATOR” by moving them from “Available Roles”, and click “OK” to save.
8. Go back to the Instance Level Security page. This time, click the number in the “Users” column for the realm “jazn.com”. The “Users” page shows all the users defined for this realm. Locate the user “oc4jadmin” in the results table and click on it.
9. Assign the roles “OWB_J2EE_ADMINISTRATOR” and “oc4j-app-administrators” to this user by moving them from the “Available Roles” selection box to the “Selected Roles” box, and click “Apply” to save.
10. Go back to the Instance Level Security page and create a new role “OWB_INTERNAL_USERS”, assigning no user or role to it.
Simply click “OK” to create this role. You have now finished creating the security roles required for the Control Center Agent.

Create JMS Queues

You need to create two JMS queues for the Control Center Agent: owbQueue and abort_owbQueue.

1. Go to the OC4J home page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations.
2. On the JMS Destinations page, click the “Create New” button to create a new JMS queue. On the Add Destination page, choose “Queue” as the Destination Type, enter “owbQueue” as the Destination Name, select “In Memory Persistence Only” as the Persistence Type, enter “jms/owbQueue” as the JNDI Location, and click “OK” to finish.
3. Follow the same instructions to create the abort_owbQueue. You have now finished creating the JMS queues required for the Control Center Agent.

Install JDBC Drivers to OC4J

In order to execute Code Templates against commercial databases other than Oracle (e.g. DB2, SQL Server, etc.), the corresponding JDBC driver files need to be added to the $AS_HOME/j2ee/home/applib directory.

1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC driver .jar files can be found in the directory $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For the Sunopsis JDBC drivers, the file needed is snpsxmlo.jar.
2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib.

You have now finished the Application Server configuration. For the configuration to take effect, restart the Application Server.

Task 2: Deploy the Control Center Agent to the Application Server

Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab, then click Deploy to begin deploying the Control Center Agent.
2. On the Deploy: Select Archive screen, under Archive, select “Archive is present on local host. Upload the archive to the server where Application Server Control is running.” Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select “Automatically create a new deployment plan.” Click Next.
3. Wait for the ear file to be uploaded to the Application Server. On the Deploy: Application Attributes screen, enter the Application Name jrt and the Context Root jrt. Leave the other attributes at their default values and click Next.
4. On the Deploy: Deployment Settings screen, leave all attributes at their default values and click Deploy. This takes about a minute; when the application is deployed successfully, a confirmation message is displayed. The Control Center Agent is then started automatically. Go back to the OC4J home page and click the Applications tab to make sure the deployed application jrt shows in the applications list.

Task 3: Optional Configuration Tasks

The optional configuration tasks are:

· Secure Control Center Agent Web Service
· Setting the PATH Environment Variable

Secure Control Center Agent Web Service

If you want to use the JRTWebService with a secure website, follow these steps.

1. Create a file “secure-web-site.xml” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. Modify the “protocol” to “https”, set “secure” to “true”, and choose a port as the secure HTTP port. Also add an “ssl-config” entry to the file, remembering to use the absolute path for the key store file. A hedged sketch of such a file is shown below.
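The original sample file is not reproduced on this page, so the sketch below is a hedged reconstruction based on the description above and on typical OC4J web-site descriptors. The port, the web-app entries, and the keystore password are placeholders rather than values from the original article; compare against the copy of secure-web-site.xml shipped under $OWB_HOME/owb/jrt/config.

<web-site port="4443" protocol="https" secure="true" display-name="OC4J Secure Web Site">
  <!-- placeholder web-app entries; keep whatever the shipped file defines -->
  <default-web-app application="default" name="defaultWebApp"/>
  <web-app application="jrt" name="jrt" root="/jrt" load-on-startup="true"/>
  <access-log path="../log/secure-web-site.log"/>
  <!-- use an absolute path for the key store file -->
  <ssl-config keystore="/absolute/path/to/serverkeystore.jks" keystore-password="changeit"/>
</web-site>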
2. Modify the file “server.xml”, located in the $AS_HOME/j2ee/home/config directory, and add a <web-site> element in it that points to the secure-web-site.xml file.
3. Create a key store file “serverkeystore.jks” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory.

After the three files are altered, restart the application server. You can then access the JRTWebService over SSL through https://hostname:4443/jrt/webservice.

Setting the PATH Environment Variable

Sometimes system commands such as the Linux ls and sh cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set the PATH environment variable. Navigate to the Enterprise Manager homepage.

1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties.
2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH as the Name, fill in the Value with the directories that contain the system commands, and click Apply.

After working through this article, I believe you have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge to other installation plans, such as installing the Control Center Agent on a standalone OC4J.

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for. In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

Example Table

Here’s a T-SQL script to create the example table and populate it with 1,000 rows:

CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540); WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000;

Test 1: A Simple Update

Let’s run a query to subtract one from every value in the some_value column:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

Test 2: Simple Update with an Index

Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column:

CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON );

This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let’s run our update query again:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: the Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
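As a quick aside, if you want to reproduce the measurements quoted in these tests yourself, the numbers come from the standard STATISTICS IO and STATISTICS TIME session options — a minimal sketch (any of the test updates from this post can go in the middle):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the statement under test, e.g.:
UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;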
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table. Let’s see what happens if we make the nonclustered index unique.

Test 3: Simple Update with a Unique Index

Here’s the script to create a new unique index, and drop the old one:

CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest;

Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let’s see:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

Whoa! Now look at the elapsed time and logical reads:

Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
CPU time = 172 ms, elapsed time = 16172 ms.

Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

The Query Plan

The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads?

Updates that Sneakily Read Data

We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

Why the LOB Data Is Needed

This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
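As a quick illustration of the net-change behaviour described above, here is a tiny, self-contained sketch (the temp table and constraint names are made up for this example; the multi-row VALUES syntax needs SQL Server 2008 or later):

CREATE TABLE #NetChangeDemo
(
    v integer NOT NULL CONSTRAINT UQ_NetChangeDemo_v UNIQUE
);

INSERT #NetChangeDemo (v) VALUES (1), (2), (3);

-- Succeeds: the net effect (delete 1, insert 4) is applied, so no
-- transient duplicate of 2 or 3 is ever materialized in the index.
UPDATE #NetChangeDemo SET v = v + 1;

SELECT v FROM #NetChangeDemo ORDER BY v; -- returns 2, 3, 4

DROP TABLE #NetChangeDemo;

Applying the same change strictly one row at a time in ascending key order would fail on the very first collision — which is exactly why the single-row guarantee just mentioned matters for what follows.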
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
CoherencePushReplicationDB.zip

In the example below I describe a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can easily be changed to publish data to other receivers (JMS, ...) by changing step 2 and making small changes to step 3, both presented below. I've used Eclipse as the development tool. To develop a custom push replication publisher we need to go through 6 steps:

Step 1: Create a custom publisher scheme class
Step 2: Create a custom publisher class that defines what the publisher does.
Step 3: Create a class that performs the actions (publishing to JMS, a database, etc.) for the custom publisher.
Step 4: Register the new publisher against a ContentHandler.
Step 5: Add the new custom publisher to the cache configuration file.
Step 6: Add the custom publisher scheme class to the POF configuration file.

All these steps are detailed below. The Coherence project is attached and conclusions are presented at the end.

Step 1: In the Coherence Eclipse project create a class called CustomPublisherScheme that extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class define the elements of the custom-publisher-scheme element. For instance, for a CustomPublisherScheme that looks like this:

<sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher>

the code is:

package com.oracle.coherence; import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.configuration.Configurable; import com.oracle.coherence.configuration.Mandatory; import com.oracle.coherence.configuration.Property; import com.oracle.coherence.configuration.parameters.ParameterScope; import com.oracle.coherence.environment.Environment; import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.util.ExternalizableHelper; @Configurable public class CustomPublisherScheme extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme { private static final long serialVersionUID = 1L; private String jdbcString; private String username; private String password; public String getJdbcString() { return this.jdbcString; } @Property("jdbc-string") @Mandatory public void setJdbcString(String jdbcString) { this.jdbcString = jdbcString; } public String getUsername() { return username; } @Property("username") @Mandatory public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } @Property("password") @Mandatory public void setPassword(String password) { this.password = password; } public Publisher realize(Environment environment, ClassLoader classLoader, ParameterScope parameterScope) { return new CustomPublisher(getJdbcString(), getUsername(), getPassword()); } public void readExternal(DataInput in) throws IOException { super.readExternal(in); this.jdbcString = ExternalizableHelper.readSafeUTF(in); this.username = ExternalizableHelper.readSafeUTF(in); this.password = ExternalizableHelper.readSafeUTF(in); }
public void writeExternal(DataOutput out) throws IOException { super.writeExternal(out); ExternalizableHelper.writeSafeUTF(out, this.jdbcString); ExternalizableHelper.writeSafeUTF(out, this.username); ExternalizableHelper.writeSafeUTF(out, this.password); } public void readExternal(PofReader reader) throws IOException { super.readExternal(reader); this.jdbcString = reader.readString(100); this.username = reader.readString(101); this.password = reader.readString(102); } public void writeExternal(PofWriter writer) throws IOException { super.writeExternal(writer); writer.writeString(100, this.jdbcString); writer.writeString(101, this.username); writer.writeString(102, this.password); } }

Step 2: Define what the CustomPublisher does by creating a new Java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher:

package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException; import java.io.BufferedWriter; import java.util.Iterator; public class CustomPublisher implements Publisher { private String jdbcString; private String username; private String password; private transient BufferedWriter bufferedWriter; public CustomPublisher() { } public CustomPublisher(String jdbcString, String username, String password) { this.jdbcString = jdbcString; this.username = username; this.password = password; this.bufferedWriter = null; } public String getJdbcString() { return this.jdbcString; } public String getUsername() { return username; } public String getPassword() { return password; } public void publishBatch(String cacheName, String publisherName, Iterator<EntryOperation> entryOperations) { DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations.next(); databasePersistence.databasePersist(entryOperation); } } public void start(String cacheName, String publisherName) throws PublisherNotReadyException { System.err.printf("Started: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } public void stop(String cacheName, String publisherName) { System.err.printf("Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } }

In the publishBatch method above we make the publisher persist each entry operation to the database:

DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations.next(); databasePersistence.databasePersist(entryOperation); }

Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database.
package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import java.sql.*; import java.text.SimpleDateFormat; import com.oracle.coherence.Order; public class DatabasePersistence { public static String INSERT_OPERATION = "INSERT"; public static String UPDATE_OPERATION = "UPDATE"; public Connection dbConnection; public DatabasePersistence(String jdbcString, String username, String password) { this.dbConnection = createConnection(jdbcString, username, password); } public Connection createConnection(String jdbcString, String username, String password) { Connection connection = null; System.err.println("Connecting to: " + jdbcString + " Username: " + username + " Password: " + password); try { // Load the JDBC driver String driverName = "oracle.jdbc.driver.OracleDriver"; Class.forName(driverName); // Create a connection to the database connection = DriverManager.getConnection(jdbcString, username, password); System.err.println("Connected to:" + jdbcString + " Username: " + username + " Password: " + password); } catch (ClassNotFoundException e) { e.printStackTrace(); } catch (SQLException e) { e.printStackTrace(); } return connection; } public void databasePersist(EntryOperation entryOperation) { if (entryOperation.getOperation().toString().equalsIgnoreCase(INSERT_OPERATION)) { insert(((Order) entryOperation.getPublishableEntry().getValue())); } else if (entryOperation.getOperation().toString().equalsIgnoreCase(UPDATE_OPERATION)) { update(((Order) entryOperation.getPublishableEntry().getValue())); } } public void update(Order order) { String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity() + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate()) + "' WHERE SYMBOL='" + order.getSymbol() + "'"; System.err.println("UPDATE = " + update); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(update); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public void insert(Order order) { String insert = "insert into Orders values('" + order.getSymbol() + "'," + order.getQuantity() + "," + order.getAmount() + ",'" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate()) + "')"; System.err.println("INSERT = " + insert); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(insert); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public Connection getDbConnection() { return dbConnection; } public void setDbConnection(Connection dbConnection) { this.dbConnection = dbConnection; } }

Step 4: Now we need to register our publisher against a ContentHandler. To achieve that, create in the Eclipse project a new class called CustomPushReplicationNamespaceContentHandler that extends com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher.
package com.oracle.coherence; import com.oracle.coherence.configuration.Configurator; import com.oracle.coherence.environment.extensible.ConfigurationContext; import com.oracle.coherence.environment.extensible.ConfigurationException; import com.oracle.coherence.environment.extensible.ElementContentHandler; import com.oracle.coherence.patterns.pushreplication.PublisherScheme; import com.oracle.coherence.environment.extensible.QualifiedName; import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler; import com.tangosol.run.xml.XmlElement; public class CustomPushReplicationNamespaceContentHandler extends PushReplicationNamespaceContentHandler { public CustomPushReplicationNamespaceContentHandler() { super(); registerContentHandler("custom-publisher-scheme", new ElementContentHandler() { public Object onElement(ConfigurationContext context, QualifiedName qualifiedName, XmlElement xmlElement) throws ConfigurationException { PublisherScheme publisherScheme = new CustomPublisherScheme(); Configurator.configure(publisherScheme, context, qualifiedName, xmlElement); return publisherScheme; } }); } } Step 5: Now we should define our CustomPublisher in the cache configuration file according to the following documentation. <cache-config xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler" xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler"> <caching-schemes> <sync:provider pof-enabled="false"> <sync:coherence-provider /> </sync:provider> <caching-scheme-mapping> <cache-mapping> <cache-name>publishing-cache</cache-name> <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name> <autostart>true</autostart> <sync:publisher> <sync:publisher-name>Active2 Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:remote-cluster-publisher-scheme> <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name> <sync:remote-publisher-scheme> <sync:local-cache-publisher-scheme> <sync:target-cache-name>publishing-cache</sync:target-cache-name> </sync:local-cache-publisher-scheme> </sync:remote-publisher-scheme> <sync:autostart>true</sync:autostart> </sync:remote-cluster-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:stderr-publisher-scheme> <sync:autostart>true</sync:autostart> <sync:publish-original-value>true</sync:publish-original-value> </sync:stderr-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> </cache-mapping> </caching-scheme-mapping> <!-- The following scheme is required for each remote-site when using a RemoteInvocationPublisher --> <remote-invocation-scheme> <service-name>remote-site1</service-name> <initiator-config> <tcp-initiator> <remote-addresses> <socket-address> <address>localhost</address> <port>20001</port> </socket-address> </remote-addresses> <connect-timeout>2s</connect-timeout> </tcp-initiator> <outgoing-message-handler> <request-timeout>5s</request-timeout> </outgoing-message-handler> </initiator-config> 
</remote-invocation-scheme> <!-- END: com.oracle.coherence.patterns.pushreplication --> <proxy-scheme> <service-name>ExtendTcpProxyService</service-name> <acceptor-config> <tcp-acceptor> <local-address> <address>localhost</address> <port>20002</port> </local-address> </tcp-acceptor> </acceptor-config> <autostart>true</autostart> </proxy-scheme> </caching-schemes> </cache-config>

As you can see in the configuration above (highlighted in red in the original article), I have: set the new namespace content handler, and defined the new custom publisher, which works together with the other publishers — stderr and remote publishers in our case.

Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom-pof-config file:

<pof-config> <user-type-list> <!-- Built in types --> <include>coherence-pof-config.xml</include> <include>coherence-common-pof-config.xml</include> <include>coherence-messagingpattern-pof-config.xml</include> <include>coherence-pushreplicationpattern-pof-config.xml</include> <!-- Application types --> <user-type> <type-id>1901</type-id> <class-name>com.oracle.coherence.Order</class-name> <serializer> <class-name>com.oracle.coherence.OrderSerializer</class-name> </serializer> </user-type> <user-type> <type-id>1902</type-id> <class-name>com.oracle.coherence.CustomPublisherScheme</class-name> </user-type> </user-type-list> </pof-config>

CONCLUSIONS

This approach allows publishers to publish data to almost any other receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class, which should be adapted to the chosen receiver; only minor changes are needed for the rest of the code (the publishBatch method of the CustomPublisher class). A hedged sketch of what a JMS-oriented variant might look like follows below.
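To make that concrete, here is a hedged sketch of a JMS-oriented counterpart to DatabasePersistence. It reuses the EntryOperation accessors already used above; the JNDI names are hypothetical, it assumes the cache value type is Serializable, and a real implementation would manage the JMS connection lifecycle rather than opening one per operation.

package com.oracle.coherence;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.oracle.coherence.patterns.pushreplication.EntryOperation;

public class JmsPersistence {
    private final ConnectionFactory connectionFactory;
    private final Queue queue;

    // JNDI names such as "jms/publisherConnectionFactory" and "jms/publishQueue" are hypothetical
    public JmsPersistence(String connectionFactoryJndi, String queueJndi) throws NamingException {
        InitialContext ctx = new InitialContext();
        this.connectionFactory = (ConnectionFactory) ctx.lookup(connectionFactoryJndi);
        this.queue = (Queue) ctx.lookup(queueJndi);
    }

    public void jmsPersist(EntryOperation entryOperation) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Carry the operation type as a property and the entry value as the payload;
            // this assumes the cache value implements java.io.Serializable.
            ObjectMessage message = session.createObjectMessage(
                (java.io.Serializable) entryOperation.getPublishableEntry().getValue());
            message.setStringProperty("operation", entryOperation.getOperation().toString());
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}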

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing which tool is appropriate for a particular situation and how the tools are related. This blog post introduces the various tools and attempts to clarify what each is for and how they relate. Let's first list the tools we'll be addressing:

RDA: Remote Diagnostic Agent
DFW: Diagnostic Framework
Selective Tracing
DMS: Dynamic Monitoring Service
ODL: Oracle Diagnostic Logging
ADR: Automatic Diagnostics Repository
ADRCI: Automatic Diagnostics Repository Command Interpreter
WLDF: WebLogic Diagnostic Framework

This overview is not meant to be a comprehensive guide to using all of these tools; however, extensive reference materials are included that provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.

RDA (Remote Diagnostic Agent)

RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.

When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, when capturing environment snapshots to baseline your configuration, or when diagnosing an issue on your own.

How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.
Examples of what it currently collects (for details please see the links in the Resources section):

Diagnostic Logs (ODL)
Diagnostic Framework Incidents (DFW)
SOA MDS Deployment Descriptors
SOA Repository Summary Statistics
Thread Dumps
Complete Domain Configuration

RDA Resources:

Webcast Recording: Using RDA with Oracle SOA Suite 11g
Blog Post: Diagnose SOA Suite 11g Issues Using RDA
Download RDA
How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

top

DFW (Diagnostic Framework)

DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW.

Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
Incident: A collection of Diagnostic Dumps associated with a particular problem.
Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

Now let's walk through a simple flow:

1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1.
2. The DFW Log Condition for OWS-04086 evaluates to TRUE.
3. DFW creates a new Incident in the ADR for managed server 1.
4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident.

In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request.

How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system-level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated; the resources below go into more detail.

A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to do the following:

Create an Incident
Execute a single Diagnostic Dump
Describe a Diagnostic Dump
List the available Diagnostic Dumps

The WLST option offers greater control in what is generated and when, and it can be a great help when collecting information for Support.
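As a rough illustration, a connected WLST session using those commands might look like the sketch below. The dump names come from the lists in this post; the argument names (name, outputFile, appName, description) reflect my reading of the 11g WLST command reference, so treat them as assumptions and confirm with help('executeDump') and friends on your own installation.

# after connecting to the server, e.g. connect('weblogic', password, 't3://host:7001')

listDumps()                                 # list the Diagnostic Dumps available on this server
describeDump(name='soa.config')             # show what a particular dump collects
executeDump(name='soa.config', outputFile='/tmp/soa_config_dump.txt')

# manually create an Incident that bundles the configured dumps
createIncident(appName='soa-infra', description='Manual incident for my Service Request')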
There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while pre-existing Incidents are collected by RDA.

There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions persist. The Incidents generated by these Watches will only contain system-level Diagnostic Dumps. These same system-level Diagnostic Dumps will be included in any application-scoped Incident as well.

Starting in 11.1.1.6, SOA Suite includes its own set of application-scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, such as in the earlier example using the error code OWS-04086.

soa.config: MDS configuration files and deployed-composites.xml
soa.composite: All artifacts related to the deployed composite
soa.wsdl: Summary of endpoints configured for the composite
soa.edn: EDN configuration summary if applicable
soa.db: Summary DB information for the SOA repository
soa.env: Coherence cluster configuration summary
soa.composite.trail: Partial audit trail information for the running composite

The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.

DFW Resources:

Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
Diagnostic Framework Documentation
DFW WLST Command Reference
Documentation for SOA Diagnostic Dumps in 11.1.1.6

top

Selective Tracing

Selective Tracing is a facility, available starting in version 11.1.1.4, that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have a greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that increases the log level for only one composite, only one logger, limited to one server in the cluster, and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.

When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

How is it related to the other tools? Selective Tracing output is written to the server diagnostic log.
This log can be collected by a system-level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

Available Context Criteria:

Application Name
Client Address
Client Host
Composite Name
User Name
Web Service Name
Web Service Port

Selective Tracing Resources:

Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
How to Use Selective Tracing for SOA [ID 1367174.1]
Selective Tracing WLST Reference

top

DMS (Dynamic Monitoring Service)

DMS exposes runtime information for monitoring. This information can be monitored in two ways:

Through the DMS servlet
As exposed MBeans

The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS).

Every Noun instance in the runtime is exposed as an MBean instance. As such, they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out-of-the-box DFW notification type to notify DFW to create an Incident.

When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

DMS Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How to Reset a SOA 11g DMS Metric
DMS Documentation

top

ODL (Oracle Diagnostic Logging)

ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server, which uses its own log format/file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

[2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA.
Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW Log Conditions are triggered by ODL log events.

ODL Resources:

ODL Documentation

top

ADR (Automatic Diagnostics Repository)

ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations.

Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/. In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system-level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite-related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.

How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents.

top

ADRCI (Automatic Diagnostics Repository Command Interpreter)

ADRCI is a command-line tool for packaging and managing Incidents.

When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

ADRCI Resources:

How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
ADRCI Documentation

top

WLDF (WebLogic Diagnostic Framework)

WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g, a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:

1. There is a need to monitor the performance of your SOA Suite message processing.
2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
3. The out-of-the-box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
4. The Watch is triggered when the threshold is exceeded and fires the Notification.
5. DFW has a listener that picks up the Notification and evaluates it according to its rules.

When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time.

When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

WLDF Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
WLDF Documentation

top

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >