Search Results

Search found 4561 results on 183 pages for 'production'.

  • Disabling shutdown command for all users, even root - consequences?

    - by Rich
    I would like to disable the shutdown command for all users, even root, on an Ubuntu Server installation. The reason I want to do this is to ensure that I don't get into the habit of shutting down the machine in this way, as I SSH into a lot of production machines at the same time as this one, and I don't want to accidentally shut down one of the other machines by typing the command into the wrong window. The server I want to disable shutdown on only runs inside VirtualBox on my Windows desktop, and I only use it for local testing, so it is not a problem if I can't shut it down from the command line. I have already mitigated the problem a bit by ensuring I have a different password on the VirtualBox image, but obviously if I am within the sudo 'window' on one of the production machines, I could still accidentally shut it down. My questions are: How do I disable the shutdown command? If I do disable the shutdown command, are there any consequences that I should be aware of? Most specifically, will it disable support for ACPI shutdown, which is the equivalent of pressing the power button on a physical machine? Could it affect other generic applications? For information, I just use this VirtualBox image for trying out shell scripts, running Tomcat and Java, and that kind of thing.
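    One possible approach, sketched below assuming a stock Ubuntu layout (this is not from the original question): divert /sbin/shutdown to a no-op wrapper with dpkg-divert, so package upgrades don't silently restore the real binary. Note this does not block halt, poweroff, init 0 or SysRq, and on many setups the ACPI power-button handling ultimately invokes /sbin/shutdown, so test the VirtualBox "ACPI Shutdown" action afterwards:

        # divert the real binary aside and install a refusing stub
        sudo dpkg-divert --local --rename --add /sbin/shutdown
        printf '#!/bin/sh\necho "shutdown disabled on this VM" >&2\nexit 1\n' | sudo tee /sbin/shutdown
        sudo chmod 755 /sbin/shutdown

        # to undo later:
        #   sudo rm /sbin/shutdown
        #   sudo dpkg-divert --local --rename --remove /sbin/shutdown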

    Read the article

  • JDK bug migration: bugs.sun.com now backed by JIRA

    - by darcy
    The JDK bug migration from a Sun legacy system to JIRA has reached another planned milestone: the data displayed on bugs.sun.com is now backed by JIRA rather than by the legacy system. Besides maintaining the URLs to old bugs, bugs filed since the migration to JIRA are now visible too. The basic information presented about a bug is the same as before, but reformatted and using JIRA terminology: Instead of a "category," a bug now has a "component / subcomponent" classification. As outlined previously, part of the migration effort was reclassifying bugs according to a new classification scheme; I'll write more about the new scheme in a subsequent blog post. Instead of a list of JDK versions a bug is "reported against," there is a list of "affected versions." The names of the JDK versions have largely been regularized; code names like "tiger" and "mantis" have been replaced by release numbers like "5.0" and "1.4.2". Instead of "release fixed," there are now "Fixed Versions." The legacy system had many fields that could hold a sequence of text entries, including "Description," "Workaround," and "Evaluation." JIRA instead has only two analogous constructs: a "Description" field and a unified stream of "Comments." Nearly coincident with switching to JIRA, we also enabled an agent that automatically updates a JIRA issue in response to pushes into JDK-related Hg repositories. These comments include the changeset URL, the user making the push, and a time stamp; they are added first when a fix is pushed to a team integration repository and again when the fix is pushed into the master repository for a release. We're still in the early days of production usage of JIRA for JDK bug tracking, but the transition to production went smoothly and over 1,000 new issues have already been filed. Many other facets of the migration are still in the works, including hosting new incidents filed at bugs.sun.com in a tailored incidents project in JIRA.

    Read the article

  • SQL SERVER – Caption the Cartoon Contest – Last 2 Days

    - by pinaldave
    A developer’s life is very interesting: we often want to start the day early at the job so we can go home early. However, that day never comes, as the life of the developer is always about working late hours. If the developer goes to the office early, there is a good chance that his co-workers will come late. Additionally, I am confident that there will always be something urgent for developers or DBAs to solve right at the time they are ready to go home. This is the life of developers! Here is the interesting story of a DBA who was about to go home. He had to take his girlfriend to a movie and dinner in 30 minutes. However, his manager asked him to fix performance-related issues with their production server. Normally, he would have had only two choices: a) the job or b) the girlfriend. Well, our superhero DBA decided to use efficient tools and improved the performance of the production server in merely 30 minutes. When he was done, his manager was absolutely surprised by the efficiency and accuracy of his work, and asked him the following question. Here is the contest – you need to guess the answer of our superhero DBA. If you guess the answer correctly you may win a Star Wars R2-D2 Inflatable Remote Controlled device. Additionally, if you download DB Optimizer before Dec 8, 2012, you will be eligible for a USD 25 Amazon Gift Card (there are a total of 10 such awards). Please do not leave comments in this thread – to participate in the contest, please leave a comment on the original contest page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports, and I also have Apache httpd listening on port 8090 (cos I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhosts config: <VirtualHost *:8090> JkMount /preprod* preprod JkMount /demo* demo </VirtualHost> But I also want the "root" address to map to another Tomcat instance, which will become live/production, i.e. I want mydomain.com:8090/ to map to a 3rd Tomcat instance. At the moment nothing happens or changes if I just add to the above config the line JkMount /* rootwar – if I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to redirect the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it would effectively mean "if it's not one of the other environments, direct to root/production". Just to be clear, I'm trying to set up the following: mydomain.com:8090/preprod --> myApp running in tomcat1 mydomain.com:8090/demo --> myApp running in tomcat2 mydomain.com:8090 --> myApp running in tomcat3
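    For what it's worth, a sketch of one commonly suggested layout (the worker names are the asker's; the behaviour notes are assumptions to verify, not from the post): mod_jk is documented to resolve overlapping mounts by the most specific match rather than file order, so /preprod* and /demo* should still win for their URLs even with a catch-all present, and the bare root may need its own mount so httpd's DirectoryIndex page stops short-circuiting it:

        <VirtualHost *:8090>
            JkMount /preprod* preprod
            JkMount /demo*    demo
            # catch-all: everything not matched above goes to production
            JkMount /* rootwar
            # the bare root URL, so htdocs/index.html no longer answers
            JkMount /  rootwar
        </VirtualHost>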

    Read the article

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They tested very well, and we optimized the upgrade process, the recompilation timings, etc. But once the live upgrade was done it failed in the JAVA component piece with this error: begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then * ORA-29553: class in use: SYS.javax/mail/folder ORA-06512: at "SYS.INITJVMAUX", line 23 ORA-06512: at line 5 Support's diagnosis was pretty quick, and referred to: Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade. But how could this happen? Actually I don't know, as we used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug names as workarounds either decreasing the SGA to less than 1GB or decreasing the number of heap subpools to 1. Finally this query helped to diagnose the number of heap subpools: select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1; The result was 2 - so we ran the upgrade with this parameter set: _kghdsidx_count=1 And finally it worked well. One sad thing: after the upgrade failed, Support recommended restoring the whole database - which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully at the stage where the error happened, it would have been fine to go on with the manual upgrade and start the catupgrd.sql script. It would have detected that the ORACLE SERVER component was already upgraded and just picked up the non-upgraded components. The good news: finally I had one extra slide to add to our workshop presentation.
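    For reference, here is the diagnosis and workaround from above in one place (the query is verbatim from the post; note that underscore parameters such as _kghdsidx_count should normally only be set on the advice of Oracle Support):

        -- how many shared pool heap subpools does this instance use?
        select count(distinct kghluidx) num_subpools
          from x$kghlu
         where kghlushrpool = 1;

        -- if the answer is > 1, pin it to a single subpool for the upgrade,
        -- then restart the instance:
        alter system set "_kghdsidx_count" = 1 scope=spfile;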

    Read the article

  • Now Released: "Oracle WebLogic Server 12c: Administration I" Certification Exam (1Z0-133)

    - by Brandye Barrington
    Oracle Certification is pleased to announce the production release of the Oracle WebLogic Server 12c: Administration I Certification Exam (1Z0-133). Passing this exam results in the "Oracle Certified Associate (OCA) - Oracle WebLogic Server 12c Administrator" certification. Oracle WebLogic Server sets the industry standard for Java application servers, and the "Oracle Certified Associate (OCA) - Oracle WebLogic Server 12c Administrator" certification sets the standard for WLS administrators. Obtaining this certification proves that you have the skills to set up server environments, tune performance and troubleshoot with confidence, and raises the bar for your peers. While training is not required for certification, the Oracle WebLogic Server 12c: Administration I course from Oracle University can expedite your path to certification - helping you gain the skills and knowledge to increase the performance and scalability of your organization’s applications and services with the #1 application server. Becoming certified gives you a competitive edge through proven expertise. The Oracle WebLogic Server 12c: Administration I exam (1Z0-133) is now available in production. Get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification website. Register now for exam 1Z0-133 at www.pearsonvue.com/oracle. QUICK LINKS: Certification Track: Oracle Certified Associate (OCA) - Oracle WebLogic Server 12c Administrator Certification Exam: Oracle WebLogic Server 12c: Administration I Certification Exam (1Z0-133) Recommended Training: Oracle WebLogic Server 12c: Administration I Register Now: Pearson VUE

    Read the article

  • JDeveloper News – Did You Know

    - by shay.shmeltzer
    There have been a few issues lately with access to blogs.oracle.com to write new messages – and as a result I’m a little behind on reporting the latest and greatest in JDeveloper. (I’m also unable to approve comments :-( ) But just in case you missed it, here are a few noteworthy things you should know: ADF Mobile Client went production – this is a solution that lets you use ADF to develop on-device applications for mobile devices. You develop once and run on various devices (right now BlackBerry and Windows Mobile, with other platforms coming soon). For more information check out the ADF Mobile page on OTN. ADF Developer Certification went production – you can now take the official test and get an official certification that you can include in your resume. Should be a must for any consultant looking to get ADF-related gigs. Learn more about the ADF Certification Exam. ADF Visio Stencils Released – if you are looking for a quick way to draw prototypes of ADF screens, you probably can’t get faster or more accurate results than this. This is a set of Visio components that you can drag over and use to design a page visually. You can also set properties for components and do other advanced things. You do need a license for Visio to use this, but the ADF stencils are free. We’ve been using these internally at Oracle for some time now and we thought the ADF community would enjoy these too. Download the ADF Visio Stencils here (and watch a YouTube demo of how they work).

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed High Availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and resulting data loss need to be considered and managed by software developers. In addition, a system should remain both highly reliable (backwardly compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is down every time you upgrade. For our recent BizTalk 2009 upgrade we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers :) ) to ensure our clients could continue to operate while we upgraded the production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one or the other set was not available to take over – however, our Staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn’t have to rush things to get back online, and planning meant we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following: It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted into the database later. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be the features currently available to customers at different tiers of pricing. Instead of hardcoding these, we read them from the database. The described solution is flexible, but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state compared to how it runs in production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but that seems like the wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach for handling this content that I'm overlooking?
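    For concreteness, a minimal Django sketch of the pattern being described (the app, model, key names and timeout are hypothetical, not from the post): editable strings live in a small key-value table, with the cache absorbing most reads:

        # sitecontent/models.py (hypothetical app name)
        from django.core.cache import cache
        from django.db import models

        class SiteContent(models.Model):
            key = models.CharField(max_length=100, unique=True)
            value = models.TextField()

        def get_content(key, default=""):
            """Return editable content by key, with a short-lived cache in front."""
            cache_key = "sitecontent:%s" % key
            value = cache.get(cache_key)
            if value is None:
                try:
                    value = SiteContent.objects.get(key=key).value
                except SiteContent.DoesNotExist:
                    value = default
                cache.set(cache_key, value, 300)  # re-read from the DB every 5 minutes
            return value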

    Read the article

  • At the Java DEMOgrounds - ZeroTurnaround and its LiveRebel 2.5

    - by Janice J. Heiss
    At the ZeroTurnaround demo, I spoke with Krishnan Badrinarayanan, their Product Marketing Manager. ZeroTurnaround, the creator of JRebel and LiveRebel, describes itself on its site as a company “dedicated to changing the way the world develops, tests and runs Java applications."“We just launched LiveRebel 2.5 today,” stated Badrinarayanan, “which enables companies to embrace the concept and practice of continuous delivery, which means having a pipeline that takes products right from the developers to an end user, faster, more frequently -- all the while ensuring that it’s a quality product that does not break in production. So customers don’t feel the discontinuity that something has changed under them and that they can’t deal with the change. And all this happens while there is zero downtime.”He pointed out that Salesforce.com is not usable from 3 a.m. to 5 a.m. on Saturday because they are engaged in maintenance. “With LiveRebel 2.5, you can unify the whole delivery chain without having any downtime at all,” he said. “There are many products that tell customers to take their tools and change how they work as an organization, so that they have to conform to the way the tool prescribes them to work as an application team. We take a more pragmatic approach. A lot of companies might use Jenkins or Bamboo to do continuous integration. We extend that. We say, take our product, take LiveRebel, and integrate it with Jenkins – you can do that quickly, so that, in half a day, you will be up and running. And let LiveRebel automate your deployment processes and all the automated tasks that go with it. Right from tests to the staging environment to production -- all with zero downtime and with no impact on users currently using the system.” “So if you were to make the update right now and you had 100 users on your system, they would not even know this was happening. It would maintain their sessions and transfer them over to the new version, all in the background.”

    Read the article

  • Browser Item Caching and URLs

    - by Damon
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page.  But during development it's very useful to NOT have things cached so you are always looking at the most up-to-date file.  You can always turn off caching in your browser, but if you use your browser for daily browsing then it's not the greatest option.  To avoid caching we would always just slap a randomly generated GUID onto the back of the URL of any items we didn't want to cache (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d).  It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn't need to be removed from production code - just slap the last modified date of the file on the end of the URL (or something generated from the modification date).  This approach was kind of genius because it gives you the best of both worlds.  If you modify the file, the browser goes out and gets the newest version.  If you don't modify the file, it has the cached copy.  Very helpful!  The only downside is that you do have to read the modification date from the file, which does technically take some time.
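    The idea is language-agnostic; here is a small sketch of it in Python (the helper name is mine, and the original post was in an ASP.NET context): append the file's last-modified time as a query string, so the URL only changes when the file does:

        import os

        def versioned_url(url_path, fs_path):
            """Append the file's mtime so the URL changes only when the file does."""
            mtime = int(os.path.getmtime(fs_path))  # seconds since the epoch
            return "%s?v=%d" % (url_path, mtime)

        # versioned_url("/images/image.png", "/var/www/images/image.png")
        # -> "/images/image.png?v=1339072400"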

    Read the article

  • Should developers do their own software releases (if there is a prod support team in place)?

    - by leora
    I know there are always going to be differences depending on the particular size, staff, etc., but I wanted to get feedback in general around: in an environment where you have a production support team doing first-line support and release management, is it better to simply have developers manage their own releases instead? In this case it's internal software at an insurance company, but the question should be valid at any company, size, etc., I think. Currently, we have our production team do releases, but there is an argument that it's inefficient and that if you allowed developers to do it, they would focus more on making it simple and efficient and avoid basically passing on scripts, etc. for another team to run. The counter-argument is that if you don't have a check and balance, you could get a software team (or an individual) that does a very hacky job of getting their software out there (making on-the-fly changes, not documenting the process, etc.) and that by forcing the prod support team to do the actual release, it enforces consistency and proper checks and balances. I know this is not a black-or-white issue, but I wanted to see what folks thought on this, so that the discipline and consistency are there but without the feeling that an inefficient process is in place.

    Read the article

  • How would you manage development between many Staging branches?

    - by Trip
    We have a Staging branch. Then we came out with a Beta branch so users could move, whenever they wanted, from the old Production branch to the new features. Our plan seemed simple: we test on Staging, and when items get QA'd, they get cherry-picked and deployed to Beta. Here's the problem! A bug will discreetly make its way onto Beta, and since Beta is a production environment, it needs fixes fast and accurate. But not all the QA got done. Enter Git hell. So I find a problem on Beta. No sweat, it's already been fixed on Staging, but when I go to cherry-pick the item over, Beta barely has any of the other prerequisite code to implement this small change. Now Beta has a little here and a little there, and I can't imagine it as a code base being as stable as Staging. What's more, I'm dealing with some insane Git conflicts, and having to monkey-patch a bunch of things to make up for what Beta hasn't caught up with from Staging. Can someone, in polite or non-polite terms, tell me what we're doing wrong here as far as assembling this project? Any awesome recommendations, workarounds or alternatives to the system we came up with?
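    One commonly recommended alternative, sketched below with illustrative branch names (my sketch, not part of the question): cut a release branch from Staging once a set of work is QA'd, and merge whole branches into Beta instead of cherry-picking isolated commits, so Beta always receives each change together with its prerequisites:

        # cut a release candidate from Staging once its items are QA'd
        git checkout staging
        git checkout -b release/2012-06

        # ... QA signs off on release/2012-06; any fixes are committed there ...

        # Beta receives the whole tested set, prerequisites included
        git checkout beta
        git merge --no-ff release/2012-06

        # merge the release branch back so Staging keeps the QA fixes too
        git checkout staging
        git merge release/2012-06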

    Read the article

  • Recovering 11.2 RAC when the ASM Diskgroup containing OCR and Votedisk is damaged

    - by Liu Maclean
    A question came up recently in the Oracle Allstarts discussion group: what happens to an 11gR2 RAC cluster when the ASM diskgroup that stores the OCR and votedisk is destroyed, and how do you recover? (This follows up my earlier post on changing ASM disk paths in 11gR2 RAC.) Since 11.2 the OCR and votedisk normally live inside an ASM diskgroup, and with that diskgroup gone the clusterware cannot start normally; the key is to bring the stack up in exclusive mode with "crsctl start crs -excl -nocrs" so the diskgroup, OCR and votedisk can be rebuilt. Note: this test uses ordinary ASM LUN disks and relies on the automatic OCR backups (taken every four hours) kept under $GI_HOME; the votedisk needs no backup of its own, since it can simply be replaced.
    First, simulate the failure: use dd to wipe the headers of the diskgroup containing the OCR and votedisk, corrupting the diskgroup.
    1. Check the current votedisk and OCR configuration:
    [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE a853d6204bbc4feabfd8c73d4c3b3001 (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE a5b37704c3574f0fbf21d1d9f58c4a6b (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 36e5c51ff0294fc3bf2a042266650331 (/dev/asm-diski) [SYSTEMDG] 4. ONLINE af337d1512824fe4bf6ad45283517aaa (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3c4a349e2e304ff6bf64b2b1c9d9cf5d (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s).
    su - grid [grid@vrh1 ~]$ ocrconfig -showbackup PROT-26: Oracle Cluster Registry backup locations were retrieved from a local copy vrh1 2012/08/09 01:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup00.ocr vrh1 2012/08/08 21:59:56 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup01.ocr vrh1 2012/08/08 17:59:55 /g01/11.2.0/maclean/grid/cdata/vrh-cluster/backup02.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/day.ocr vrh1 2012/08/08 05:59:54 /g01/11.2.0/grid/cdata/vrh-cluster/week.ocr PROT-25: Manual backups for the Oracle Cluster Registry are not available
    2. Stop the clusterware (OHASD) on all nodes: crsctl stop has -f
    3. Back up the ASM disk headers before the test with GetAsmDH.sh, a script that dd-dumps each header:
    [grid@vrh1 ~]$ ./GetAsmDH.sh ############################################ 1) Collecting Information About the Disks: ############################################ SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 03:28:13 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. SQL> Connected. SQL> 1 0 /dev/asm-diske 1 1 /dev/asm-diskd 2 0 /dev/asm-diskb 2 1 /dev/asm-diskc 2 2 /dev/asm-diskf 3 0 /dev/asm-diskh 3 1 /dev/asm-diskg 3 2 /dev/asm-diski 3 3 /dev/asm-diskj 3 4 /dev/asm-diskk SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options -rw-r--r-- 1 grid oinstall 1048 Aug 9 03:28 /tmp/HC/asmdisks.lst ############################################ 2) Generating asm_diskh.sh script. ############################################ -rwx------ 1 grid oinstall 666 Aug 9 03:28 /tmp/HC/asm_diskh.sh ############################################ 3) Executing asm_diskh.sh script to generate dd dumps. ############################################ -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_1_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_2_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_0.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_1.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_2.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_3.dd -rw-r--r-- 1 grid oinstall 1048576 Aug 9 03:28 /tmp/HC/dsk_3_4.dd ############################################ 4) Compressing dd dumps in the next format: (asm_dd_header_all_.tar) ############################################ /tmp/HC/dsk_1_0.dd /tmp/HC/dsk_1_1.dd /tmp/HC/dsk_2_0.dd /tmp/HC/dsk_2_1.dd /tmp/HC/dsk_2_2.dd /tmp/HC/dsk_3_0.dd /tmp/HC/dsk_3_1.dd /tmp/HC/dsk_3_2.dd /tmp/HC/dsk_3_3.dd /tmp/HC/dsk_3_4.dd ./GetAsmDH.sh: line 81: compress: command not found ls: /tmp/HC/*.Z: No such file or directory [grid@vrh1 ~]$
    4. Wipe the headers of the disks in the OCR/votedisk diskgroup with dd:
    [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskh bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00423853 seconds, 247 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskg bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0045179 seconds, 232 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diski bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00469976 seconds, 223 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskj bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00344262 seconds, 305 MB/s [root@vrh1 ~]# dd if=/dev/zero of=/dev/asm-diskk bs=1024k count=1 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0053518 seconds, 196 MB/s
    5. Restart HAS:
    [root@vrh1 ~]# crsctl start has CRS-4123: Oracle High Availability Services has been started.
    With the diskgroup holding the OCR and votedisk destroyed, CSS can no longer start; the alert log and ocssd.log show:
    alertvrh1.log [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:41.207 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:35:56.240 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:11.284 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:26.305 [cssd(5162)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /g01/11.2.0/grid/log/vrh1/cssd/ocssd.log 2012-08-09 03:36:41.328
    ocssd.log 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmReadDiscoveryProfile: voting file discovery string(/dev/asm*) 2012-08-09 03:40:26.662: [ CSSD][1078700352]clssnmvDDiscThread: using discovery string /dev/asm* for initial discovery 2012-08-09 03:40:26.662: [ SKGFD][1078700352]Discovery with str:/dev/asm*: 2012-08-09 03:40:26.662: [ SKGFD][1078700352]UFS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskb: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskj: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskh: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskc: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskd: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diske: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskg: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diski: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Fetching UFS disk :/dev/asm-diskk: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]OSS discovery with :/dev/asm*: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xdf22a0 from lib :UFS:: for disk :/dev/asm-diskf: 2012-08-09 03:40:26.665: [ SKGFD][1078700352]Handle 0xf412a0 from lib :UFS:: for disk :/dev/asm-diskb: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf3a680 from lib :UFS:: for disk :/dev/asm-diskj: 2012-08-09 03:40:26.666: [ SKGFD][1078700352]Handle 0xf93da0 from lib :UFS:: for disk :/dev/asm-diskh: 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvDiskVerify: Successful discovery of 0 disks 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery 2012-08-09 03:40:26.667: [ CSSD][1078700352]clssnmvFindInitialConfigs: No voting files found 2012-08-09 03:40:26.667: [ CSSD][1078700352](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds
    To recover the lost OCR/votedisk diskgroup:
    1. Start the cluster with the -excl -nocrs options, which brings up ASM in exclusive mode without starting CRS:
    [root@vrh1 vrh1]# crsctl start crs -excl -nocrs CRS-4123: Oracle High Availability Services has been started. CRS-2672: Attempting to start 'ora.mdnsd' on 'vrh1' CRS-2676: Start of 'ora.mdnsd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'vrh1' CRS-2676: Start of 'ora.gpnpd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vrh1' CRS-2672: Attempting to start 'ora.gipcd' on 'vrh1' CRS-2676: Start of 'ora.cssdmonitor' on 'vrh1' succeeded CRS-2676: Start of 'ora.gipcd' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'vrh1' CRS-2672: Attempting to start 'ora.diskmon' on 'vrh1' CRS-2676: Start of 'ora.diskmon' on 'vrh1' succeeded CRS-2676: Start of 'ora.cssd' on 'vrh1' succeeded CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2672: Attempting to start 'ora.ctssd' on 'vrh1' CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'vrh1' CRS-2676: Start of 'ora.ctssd' on 'vrh1' succeeded CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'vrh1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'vrh1' CRS-2676: Start of 'ora.asm' on 'vrh1' succeeded
    2. Recreate the diskgroup for the OCR and votedisk, making sure compatible.asm is at least 11.2:
    [root@vrh1 vrh1]# su - grid [grid@vrh1 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 9 04:16:58 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options SQL> create diskgroup systemdg high redundancy disk '/dev/asm-diskh','/dev/asm-diskg','/dev/asm-diski','/dev/asm-diskj','/dev/asm-diskk' ATTRIBUTE 'compatible.rdbms' = '11.2', 'compatible.asm' = '11.2';
    3. Restore the OCR from the backup and verify it with ocrcheck:
    [root@vrh1 ~]# ocrconfig -restore /g01/11.2.0/grid/cdata/vrh-cluster/backup00.ocr [root@vrh1 ~]# ocrcheck Status of Oracle Cluster Registry is as follows : Version : 3 Total space (kbytes) : 262120 Used space (kbytes) : 3180 Available space (kbytes) : 258940 ID : 1238458014 Device/File Name : +systemdg Device/File integrity check succeeded Device/File not configured Device/File not configured Device/File not configured Device/File not configured Cluster registry integrity check succeeded Logical corruption check succeeded
    4. Replace the votedisk. The first attempt fails:
    [grid@vrh1 ~]$ crsctl replace votedisk +SYSTEMDG CRS-4602: Failed 27 to add voting file 2e4e0fe285924f86bf5473d00dcc0388. CRS-4602: Failed 27 to add voting file 4fa54bb0cc5c4fafbf1a9be5479bf389. CRS-4602: Failed 27 to add voting file a109ead9ea4e4f28bfe233188623616a. CRS-4602: Failed 27 to add voting file 042c9fbd71b54f5abfcd3ab3408f3cf3. CRS-4602: Failed 27 to add voting file 7b5a8cd24f954fafbf835ad78615763f. Failed to replace voting disk group with +SYSTEMDG. CRS-4000: Command Replace failed, or completed with errors.
    The failure is because the ASM instance came up without a usable disk string and spfile; fix the ASM instance and retry:
    SQL> alter system set asm_diskstring='/dev/asm*'; System altered. SQL> create spfile from memory; File created.
    SQL> startup force mount; ORA-32004: obsolete or deprecated parameter(s) specified for ASM instance ASM instance started Total System Global Area 283930624 bytes Fixed Size 2227664 bytes Variable Size 256537136 bytes ASM Cache 25165824 bytes ASM diskgroups mounted SQL> show parameter spfile NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string /g01/11.2.0/grid/dbs/spfile+AS M1.ora
    [grid@vrh1 trace]$ crsctl replace votedisk +SYSTEMDG CRS-4256: Updating the profile Successful addition of voting disk 85edc0e82d274f78bfc58cdc73b8c68a. Successful addition of voting disk 201ffffc8ba44faabfe2efec2aa75840. Successful addition of voting disk 6f2a25c589964faabf6980f7c5f621ce. Successful addition of voting disk 93eb315648454f25bf3717df1a2c73d5. Successful addition of voting disk 3737240678964f88bfbfbd31d8b3829f. Successfully replaced voting disk group with +SYSTEMDG. CRS-4256: Updating the profile CRS-4266: Voting file(s) successfully replaced
    5. Restart HAS and check that the cluster comes back healthy:
    [root@vrh1 ~]# crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
    [root@vrh1 ~]# crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE 85edc0e82d274f78bfc58cdc73b8c68a (/dev/asm-diskh) [SYSTEMDG] 2. ONLINE 201ffffc8ba44faabfe2efec2aa75840 (/dev/asm-diskg) [SYSTEMDG] 3. ONLINE 6f2a25c589964faabf6980f7c5f621ce (/dev/asm-diski) [SYSTEMDG] 4. ONLINE 93eb315648454f25bf3717df1a2c73d5 (/dev/asm-diskj) [SYSTEMDG] 5. ONLINE 3737240678964f88bfbfbd31d8b3829f (/dev/asm-diskk) [SYSTEMDG] Located 5 voting disk(s).
    [root@vrh1 ~]# crsctl stat res -t -------------------------------------------------------------------------------- NAME TARGET STATE SERVER STATE_DETAILS -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.BACKUPDG.dg ONLINE ONLINE vrh1 ora.DATA.dg ONLINE ONLINE vrh1 ora.LISTENER.lsnr ONLINE ONLINE vrh1 ora.LSN_MACLEAN.lsnr ONLINE ONLINE vrh1 ora.SYSTEMDG.dg ONLINE ONLINE vrh1 ora.asm ONLINE ONLINE vrh1 Started ora.gsd OFFLINE OFFLINE vrh1 ora.net1.network ONLINE ONLINE vrh1 ora.ons ONLINE ONLINE vrh1 -------------------------------------------------------------------------------- Cluster Resources -------------------------------------------------------------------------------- ora.LISTENER_SCAN1.lsnr http://www.askmaclean.com 1 ONLINE ONLINE vrh1 ora.cvu 1 OFFLINE OFFLINE ora.oc4j 1 OFFLINE OFFLINE ora.scan1.vip 1 ONLINE ONLINE vrh1 ora.vprod.db 1 ONLINE OFFLINE 2 ONLINE OFFLINE ora.vrh1.vip 1 ONLINE ONLINE vrh1 ora.vrh2.vip 1 ONLINE INTERMEDIATE vrh1 FAILED OVER

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple days ago from Red Hat. This article claims to help you in the evaluation of key points that you should consider when choosing for an Java EE application server. In the following sections, I will present to you some important aspects that most customers ask us when they are seriously evaluating for an middleware infrastructure, specially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading the points: "Why should I choose JBoss instead of WebLogic?" 1) Multi Datacenter Deployment and Clustering - D/R ("Disaster & Recovery") architecture support is embedded on the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included, Red Hat relies on third-part tools with higher prices. When you consider a middleware solution to host your business critical application, you should worry with every architectural aspect that are related with the solution. Fail-over support is one little aspect of a truly reliable solution. If you do not worry about D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6, you have this extra cost that will increase considerably the total cost of ownership of the solution. As we commonly hear from analysts, open-source are not so cheaper when you start seeing the big picture. - WebLogic Server 12c supports advanced LAN clustering, detection of death servers and have a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. They do not generate any alerts when servers goes down (only if you buy JBoss ON which is a separated technology, but until now does not support JBoss EAP 6) and manual intervention are required when servers goes down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies. - WebLogic Server 12c supports the concept of Node Manager, which is a separated process that runs on the physical | virtual servers that allows extend the administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology. Whole server instances must be managed individually. - WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology. There is no way to coordinate JBoss and infiniband instances provided by JBoss using high throughput and low latency protocols like InfiniBand. The Node Manager feature also allows another very important feature that JBoss EAP lacks: secure the administration. When using WebLogic Node Manager, all the administration tasks are sent to the managed servers in a secure tunel protected by a certificate, which means that the transport layer that separates the WebLogic administration console from the managed servers are secured by SSL. - WebLogic Server 12c are now integrated with OTD ("Oracle Traffic Director") which is a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server"). 
Using OTD, WebLogic instances are load-balanced by a high powerful software that knows how to handle SDP ("Socket Direct Protocol") over InfiniBand, which boost performance when used with engineered systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support to Apache Web Server with custom modules created to deal with JBoss clusters, but only across standard TCP/IP networks.  2) Application and Runtime Diagnostics - WebLogic Server 12c have diagnostics capabilities embedded on the server called WLDF ("WebLogic Diagnostic Framework") so there is no need to rely on third-part tools. JBoss EAP 6 on the other hand has no diagnostics capabilities. Their only diagnostics tool is the log generated by the application server. Admin people are encouraged to analyse thousands of log lines to find out what is going on. - WebLogic Server 12c complement WLDF with JRockit MC ("Mission Control"), which provides to administrators and developers a complete insight about the JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also have an classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-part tools to do something similar. Again, only log searching are offered to find out whats going on. - WebLogic Server 12c offers end-to-end traceability and monitoring available through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flows through web servers, ESBs, application servers and database servers, all of this with high deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separated technology, does not support those features. Red Hat relies on third-part tools to provide direct Oracle database traceability across JVMs. One of those tools are Oracle EM for non-Oracle middleware that manage JBoss, Tomcat, Websphere and IIS transparently. - WebLogic Server 12c with their JRockit support offers a tool called JRockit Flight Recorder, which can give developers a complete visibility of a certain period of application production monitoring with zero extra overhead. This automatic recording allows you to deep analyse threads latency, memory leaks, thread contention, resource utilization, stack overflow damages and GC ("Garbage Collection") cycles, to observe in real time stop-the-world phenomenons, generational, reference count and parallel collects and mutator threads analysis. JBoss EAP 6 don't even dream to support something similar, even because they don't have their own JVM. 3) Application Server Administration - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can managed up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides a XML centric administration. JBoss, after ten years, started the development of a rudimentary centralized administration that still leave a lot of administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This lead applications to error prone and risky deployments. 
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; administrators must be highly skilled in JBoss internal architecture and its management capabilities. - Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even its minimal JBoss support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators. - WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand distinguishes between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, different administration skills are required. - WebLogic Server 12c has no single-point-of-failure processes and does not need to define any specialized server. The domain model in WebLogic has been available for years (at least ten) and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model does. - WebLogic Server 12c supports a parallel deployment model that enables several artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (say, 120 MB in size) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss. 4) Support and Upgrades - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management; each JBoss EAP instance must be patched manually. To get such a feature, you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of task. But as of this writing, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature. - WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in the JBoss stack in the last five years, revealing their inability to create a well-architected and proven middleware technology. - WebLogic Server 12c has patch prescriptions based on customer configuration. JBoss EAP 6 on the other hand has no such capability: people need to create support tickets and have their installations reviewed by Red Hat support staff to get a patch prescription from them. - Oracle WebLogic Server, independent of the version, has 8 years of support for new patches and lifetime availability of existing patches beyond that. 
JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will struggle to answer is: "what happens when you find issues after year 5?"  5) RAC ("Real Application Clusters") Support - WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.). The Oracle JDBC thin driver is also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation is not reestablished automatically after downtime. - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand shows no performance gains at all, even when administrators implement some kind of connection-pool tuning. - WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution distributed not only across a LAN cluster but also across different data centers. JBoss EAP 6 on the other hand has no such support. 6) Standards and Technology Support - WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6 on the other hand became fully Java EE 6 compatible only in the community version three months later, and production ready only a few days before this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle, and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence. - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle involvement, Red Hat's support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer. - WebLogic Server 12c natively supports JDK 7, which empowers developers to exploit the full productivity of the Java platform when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces such remarkable productivity features as the "try-with-resources" enhancement, catching multiple exceptions with one try block, Strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here. JBoss EAP 6 on the other hand does not support JDK 7 officially; the community documentation states that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7. 
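To make the productivity claim concrete, here is a minimal, hypothetical Java 7 sketch (not taken from either product) that combines three of the features just mentioned - try-with-resources, multi-catch and Strings in switch; the profile file name is illustrative only:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Jdk7Features {
    public static void main(String[] args) {
        // try-with-resources: the reader is closed automatically on exit.
        // "profile.txt" is a hypothetical file used only for this example.
        try (BufferedReader in = new BufferedReader(new FileReader("profile.txt"))) {
            String profile = in.readLine();
            // Strings in switch statements (new in Java 7).
            switch (profile == null ? "" : profile.trim()) {
                case "production":
                    System.out.println("Using the production profile");
                    break;
                case "development":
                    System.out.println("Using the development profile");
                    break;
                default:
                    System.out.println("Unknown profile, using defaults");
            }
        } catch (IOException | RuntimeException e) {
            // Multi-catch: two unrelated exception types handled in one block.
            System.err.println("Could not read profile: " + e.getMessage());
        }
    }
}
```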
- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use the WebLogic transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration capabilities. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offer any special integration; it is just a facility for Red Hat customers to get support for both JBoss and Spring technology under the same customer support contract. 7) Lightweight Development - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand cannot natively reuse an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the libraries and dependencies of the application; patching the DTD or XSD deployment descriptors; refactoring application layers due to classloading issues and anomalies; and rebuilding persistence, business and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If your enterprise IT culture or directives call for developing Java EE applications on community middleware and, at some future point, transitioning to vendor-supported enterprise middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution. - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly interesting if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson. - WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6 on the other hand offers only limited integration with those tools.  - WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 Web Profile enabled. JBoss EAP 6 on the other hand has no such feature; you need to manually disable the containers you do not want to use. - WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application. This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support; even JBoss EAP 5 does not support this. 
8) JMS and Messaging - WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand has a still-immature technology called HornetQ, which was introduced in JBoss EAP 5, replacing everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way. - WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues. - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims the performance issue is due to database problems, they will not support you on the performance issue; they will direct you to call Oracle instead. - WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics and foreign JMS provider support, which leverage JMS implementations without compromising developer code, keeping things completely transparent, as the sketch below illustrates. JBoss EAP 6 on the other hand does not even dream of supporting such features. 
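To illustrate the transparency claim, here is a minimal, hypothetical JMS sketch (the JNDI names are invented for this example). Because the code is written against the standard JMS API and resolves the connection factory and destination through JNDI, an administrator could retarget it to a plain queue, a distributed destination or a store-and-forward destination purely through configuration, without touching the code:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class OrderSender {
    public static void main(String[] args) throws Exception {
        // Assumes the JNDI environment is configured (e.g., via jndi.properties).
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names; the administrator decides what they point to.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OrderConnectionFactory");
        Destination dest = (Destination) ctx.lookup("jms/OrderQueue");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(dest);
            producer.send(session.createTextMessage("order-12345"));
        } finally {
            con.close();
            ctx.close();
        }
    }
}
```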
9) Caching and Grid - Coherence, the leading and most mature data grid technology from Oracle, has been available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, their caching implementation, just as they did with the messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability. - WebLogic Server 12c has a feature called ActiveCache which uses Coherence to replicate, without any code changes, HTTP sessions from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand has no such support, and even if they add it in the future, they will probably support only their own application server. - Coherence can be used to manage both L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand support only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 cache-level support using Hibernate, which leads us to question its viability. 10) Performance - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run applications unchanged on this engineered system. Customers benefit from Exalogic optimizations in both the kernel and JVM layers, boosting performance up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment in engineered systems: customers do not have the option to deploy on an ultra-fast Java system if their project becomes relevant and performance issues are detected. - WebLogic Server 12c maintains a performance gain across each new release: since WebLogic 5.1, the overall performance gain has been close to 4X, roughly a 20% gain release over release. JBoss on the other hand does not provide SPECjAppServer or SPECjEnterprise performance benchmarks; their so-called "performance gains" remain hidden in their customers' environments, which makes us wonder whether they are real, since we will never get access to those environments. - WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss... again, does not provide any SPECjAppServer performance benchmarks. - WebLogic Server 12c has a feature called Work Manager, which allows your application to reach new performance levels based on CPU utilization. Work Managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no comparable feature, and probably never will. Without a feature like Work Managers, JBoss EAP 6 forces administrators and especially developers to chase performance gains intrusively, rewriting code and doing performance refactoring. 11) Professional Services Support - WebLogic Server 12c, like any other technology sold by Oracle, gives customers the possibility of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist in the deployment of new applications, and provide highly skilled consulting on architecture and best practices, with staff allocated to work alongside customer teams. All OCS services are available without any restriction, whether the customer has bought software from Oracle or is just starting the implementation before any acquisition. JBoss EAP 6, or more precisely Red Hat, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow their professional services consultants to manage environments that use community-based software; they will probably force you to first buy a subscription, download their "enterprise" version and then, optionally, hire their consultants. - Oracle provides its university to educate your team in our technologies, including of course specialized training for the WebLogic application server. At any time and location, you can hire Oracle to train your team, so you get trustworthy knowledge according to your specific needs. 
Certifications for the products are also available if your technical people want to differentiate themselves as professionals. Red Hat on the other hand has a limited pool of resources to train your team in their technologies. Basically they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you demand more specialized training in JBoss middleware, they will probably connect you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce the success of their RHEL education in their middleware division, since they need to sell the subscriptions first before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means the courses will be quite outdated. There are reports of developers who took official training from Red Hat this year (2012); in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training was created for JBoss EAP 4.3. 12) Encouraging Transparency without Ulterior Motives - WebLogic Server 12c, like any other software from Oracle, can be downloaded any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the usage of "specific versions" of its software. The binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6 on the other hand is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you must download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on unsecured, unreliable and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "cool" version of the software. - WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here to access our price list; in the case of WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment in our technology possible, but you are not required to do this - only if you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available on Red Hat's website or in any other media; at least, such information is not easy to find. The only link you will find on their website is "Contact a Sales Representative". This is not a good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open. 
In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the existing features of the software, whether the cost is fair or not. Conclusion Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, and it has a proven record of success around the globe to prove its maturity. Don't lose the chance to discover today how WebLogic can fit your needs and sustain your global IT middleware strategy, whether your strategy is completely based on the cloud or not.

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera's SQL virtual database and tested it. There are a few things about this tool which caught my attention. My Scenario It is quite common in real life that observing or retrieving older data is necessary; however, that data has changed as time passed by. The full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load the server is under. This time range varies from one server to another depending on the configuration of the machine. Some other issues we used to have are the following: when we tried to restore the large 40-GB database, we needed at least that much space on our production server; and once in a while we even had to make changes in the restored database and use the changed, restored database for our purposes, making it even more time-consuming. My Solution I have heard a lot about Idera's SQL virtual database tool. Well, right after we started to test this tool, we found out that it really delivers what it promises. Using this software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16–22 minutes; the whole task was finished in a total of 10 minutes. Another interesting observation is that there is no need for additional space to restore the database: for a complete database restoration, not a single additional MB on the drive is required. We can use the database in the same way as our regular database, and there is no need for any additional configuration or setup. Let us look at the most relevant points of this product based on my initial experience: quick restoration of the database backup; no additional space required for database restoration; the virtual database has no physical .MDF or .LDF; the restored database is, in fact, the backup file converted into a virtual database; DDL and DML queries can be executed against this virtually restored database; regular backup operations can be implemented against the virtual database, creating a physical .bak file that can be used in the future; no degradation in performance was observed on either the original database or the restored virtual database; additional T-SQL queries can be run against the virtual database. Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features that I have noticed in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the mentioned activities. I am surprised by its performance, so I want to know how exactly this feature works - specifically, why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it. The 'Virtual' Part of virtual database When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization. In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually being restored or even created: the tool creates a database in SSMS from the backup of that same database, and the backup works virtually the same way as the original database. 
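To make the "use it like a regular database" point concrete, here is a minimal, hypothetical JDBC sketch; the server, database and table names are invented, and it assumes the Microsoft SQL Server JDBC driver is on the classpath. Once attached, the virtual database is addressed exactly like any other SQL Server database:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VirtualDbQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; "SalesBackupVDB" stands in for
        // the attached virtual database, named like any regular database.
        String url = "jdbc:sqlserver://localhost:1433;"
                + "databaseName=SalesBackupVDB;user=sa;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM dbo.Orders")) {
            if (rs.next()) {
                System.out.println("Orders in the restored backup: " + rs.getInt(1));
            }
        }
    }
}
```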
Potential Usage of virtual database: As soon as I described this tool to my teammate, his very first reaction was, "hey, if we have this then there is no need for log shipping." I find his comment very interesting, as log shipping is something where logs are moved to another server; with a virtual database there are no updates on the database from the log, so I would rather compare it with snapshot replication. In fact, whatever we use, a snapshot-replicated database can be similarly used and configured with a virtual database. I totally believe that we can use it for reporting purposes. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you. [Screenshots, click to enlarge: virtual database console; hard drive space before virtual database setup; attach full backup screen; backup on hard drive; attach full backup screen with settings; virtual database setup in less than 60 seconds; virtual database setup online; hard drive space after virtual database setup; point-in-time recovery option with timeline view; virtual database summary; no performance difference between a regular DB and a virtual DB.] Please note that all SQL Server MVPs get a free license of this software. Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database) Filed under: Database, Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLAuthority News, T SQL, Technology Tagged: Idera

    Read the article

  • SQL SERVER – An Efficiency Tool to Compare and Synchronize SQL Server Databases

    - by Pinal Dave
    There is no need to reinvent the wheel if it is already invented, and if the wheel is readily available, there is no need to wait to grab it. Here is a similar situation. I came across a very interesting scenario where I had to look for an efficient tool which could make my life easier and solve my business problem. Here is the scenario. One of the developers had deleted a few rows from a very important mapping table on our development server (thankfully, it was not the production server). Though it was a development server, the entire development team had to stop working, as the application started to crash on every page. Think about the loss of manpower and efficiency we started to incur. Pretty much every department had to stop working, as our internal development application stopped working. Thankfully, we take backups of even our development server, and we had access to a full backup of the entire database from 6 AM that morning. We do not back up the development server as frequently as the production server (naturally!). Even though we had a full backup, the solution was not to restore the database. Think about it: there had been plenty of other operations since the last good full backup, and if we restored it, we would pretty much overwrite the work done by developers since morning. Now, as restoring the full backup was not an option, we decided to restore the same database on another server. Once we had restored our database to another server, the challenge was to compare the table from which the data had been deleted. The mapping table from which the data was deleted contained over 5000 rows, and it was humanly impossible to compare both tables manually. Finally, we decided to use the efficient tool dbForge Data Compare for SQL Server from Devart. dbForge Data Compare for SQL Server is a powerful, fast and easy-to-use SQL compare tool, capable of using native SQL Server backups as a metadata source. (FYI, we downloaded dbForge Data Compare.) Once we discovered the product, we immediately downloaded it and installed it on our development server. After we installed the product, we were greeted with the following screen. We clicked on New Data Comparison to start our new comparison project, which brought up the following screen. Here is the best part of the product: we just had to enter our database connection username and password along with source and destination details, and we were done. The entire process is very simple and intuitive. The best part was that for the source we could select either a database or a backup. This is indeed a fantastic feature: if you have a very big database, it will take a long time to restore on the server before you can work with it, but dbForge Data Compare will accept a database backup directly as your source or destination. Once I clicked Execute, it brought up the following screen with an excellent summary of the data comparison. It has dedicated tabs for what is changing in which table, as well as details of the changed data. The best part is that once we had reviewed the changes, we clicked the Synchronize button in the menu bar, and it brought up the following screen. You can see that the screen has very simple, straightforward, but very powerful features. You can generate a script to synchronize from target to source or even from source to target. 
Additionally, the database world is very complicated, and there are extensive options to configure various database settings on the next screen. We also have the option to either generate the script or directly execute it against the target server. I like to play on the safe side, so I generated the script for my synchronization and, after review, deployed it on the server. Well, my team and I were able to recover from our disaster in less than 10 minutes. A few people on our team were indeed disappointed, as they were thinking of going home early that day, but in less than 10 minutes they had to get back to work. There are so many other features in dbForge Data Compare for SQL Server that I am already planning to make it our company-wide recommended data compare tool. Hats off to the team who built this product. Here are a few of the salient features of dbForge Data Compare for SQL Server: perform SQL Server database comparisons to detect changes; compare SQL Server backups with live databases; analyze data differences between two databases; synchronize two databases that went out of sync; restore data of a particular table from a backup; generate data comparison reports in Excel and HTML formats; copy look-up data from a development database to production; automate routine data synchronization tasks with the command-line interface. Go ahead and download dbForge Data Compare for SQL Server right away. It is always a good idea to get familiar with important tools beforehand instead of learning them under the pressure of a disaster. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of the business initiative that drives the transformational journey from legacy IT to an enterprise private cloud - the journey that leads to an agile, self-service and efficient infrastructure with reduced complexity, and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank's Lending Business Segment, which comprises roughly 50% of the Bank's top line. The Bank's Lending Business is always under pressure to launch "new schemes" to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every change in each related application, however minor, must be thoroughly UAT tested and certified before production rollout. This puts constant pressure on IT for rapid provisioning of UAT databases on an ongoing basis, to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent database cloud deployment. "Agility" in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case: an average of 3 days to provision a UAT database for the Loan Management application; a siloed UAT environment with average 30% utilization; compliance requirements consuming UAT testing resources; DBA activities leading to $$ paid to the SI for provisioning databases manually; overhead in managing configuration drift between production and test environments; and rollout impact/delays on new business initiatives. The private database cloud implementation progressed through 4 fundamental phases: standardization, consolidation, automation and optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early, right from the planning phase and through all phases of implementation. The standardization and consolidation phases involved multiple iterations of planning to first standardize on infrastructure, DB versions, patch levels, configuration, IT processes, etc., together with a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT DB landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well. The automation and optimization phases provided the necessary agility, self-service and efficiency, made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service/SSA Portal was set up with the required zones, quotas, service templates and charge plans defined. Two zones were implemented: an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and a second zone for the AIX setup, covering the other databases running on AIX. Metering and chargeback/showback capabilities provided business and IT with the framework for cloud optimization, as well as visibility into cloud usage. 
More details on the UAT cloud implementation, the related building blocks and the EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key benefits achieved from the UAT cloud initiative are: new business initiatives can be easily launched due to rapid provisioning of UAT databases [~3 hours]; $$ paid to the SI for DBA activities drastically cut down thanks to self-service; effective usage of infrastructure, leading to better ROI; consumers empowered to provision databases using self-service; control over the project schedule, with the DB end date aligned to the project plan submitted during provisioning; databases provisioned through self-service monitored in EM and auto-configured for alerts and KPIs; and regulatory database requirements no longer impacting existing projects in the queue. The table below shows the typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing includes provisioning databases with production-scale data (ranging from 250 GB to 2 TB). And it's not just about time to provision: this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources, and IT's role has shifted to enabler of strategic services instead of administrator of all user requests. The strong collaboration between IT and the business community, right from planning through implementation to go-live, has played the key role in achieving this common goal of an enterprise private cloud. Finally, the real cloud is here, and this cloud is accompanied by rain (business benefits) as well! For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and keeping it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up Areas to run multiple projects under one solution, see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams. Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history. Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects they need to create to that solution, in the format "[Client].[Product].[ProductArea].[Assembly]", and they will automatically be picked up and built when you set up automated builds using Team Foundation Build. All test projects need to use MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they are testing, with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests". Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it. Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor". Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level. Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release, and feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integrate) back down into Main and being decommissioned. Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product. Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version. Figure: All bugs found in a release are fixed on the release branch, and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integrate) into the Main branch. We do this so that we separate our development from our production-ready code.  Figure: SAFE or RTM is a read-only record of what you actually released. 
Labels are not immutable, so they are useless in this circumstance.  When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release. Figure: This allows us to reference any particular version of our application that was ever shipped.   In addition, I am an advocate of having a single solution, with all the project folders directly under the "Trunk"/"Main" folder, and using the full name for the project folders. Figure: The ideal solution. If you must have multiple solutions because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes things easier for automated builds and improves the discoverability of your code and its dependencies. Send me your feedback!   Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission-critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad, it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise - whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt - the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single, closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage-mirroring perspective or from an Oracle software perspective.  Bottom line: storage remote-mirroring lacks both the smarts and the isolation level necessary to provide true data protection. 
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to apply the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksums, logical intra-block checking, lost-write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected on the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring. So don't be fooled by appearances.  Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper Active Data Guard vs Storage Remote-Mirroring Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts).  If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance.  There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong.  Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan.  Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours.

    SQL Server 2005

    SELECT th.ProductID, th.TransactionID, th.TransactionDate
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 1
    AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified.  The cardinality estimate of 45 rows at the Index Seek is exactly correct.  The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right.  Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row.  The plan shows that the Key Lookup is expected to be executed 45 times.  The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time, gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates.  Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here.  All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup.  This is a good optimization – it makes sense to filter rows out as early as possible.  Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date.  Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right!  This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total). 
Workarounds

One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem of course):

    CREATE INDEX nc1
    ON Production.TransactionHistory (ProductID)
    INCLUDE (TransactionDate);

With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected: We could also force the use of the ultimate covering index (the clustered one):

    SELECT th.ProductID, th.TransactionID, th.TransactionDate
    FROM Production.TransactionHistory AS th WITH (INDEX(1))
    WHERE th.ProductID = 1
    AND th.TransactionDate BETWEEN '20030901' AND '20031231';

Summary

Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal.  Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong.  It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it.  Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments. Different Objectives Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work. When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? We are running a hands-on workshop next month, and we’ll need 15 instances of application X environment so each student can have a separate server for the exercises. We cannot connect to our data center from the client site, the client’s security policy won’t allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, airport, and the airplane, so we really want an environment that can run on a laptop. The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now. 
    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, the requests look like this:

    - We need a dedicated environment for a demonstration that is two weeks away – how quickly can you make it available so we still have time to build the client-specific data?
    - We are running a hands-on workshop next month and need 15 instances of the application X environment, so each student can have a separate server for the exercises.
    - We cannot connect to our data center from the client site – the client's security policy won't let our VPN through – so we need a portable environment we can bring with us.
    - Our consultants need to be able to work at the hotel, at the airport and on the airplane, so we really want an environment that can run on a laptop.
    - The client needs two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now.

    We have seen all of these scenarios and more. Here you are much better served by a generic installation that is easy to clone.

    Welcome to the Wonder Machine

    The reason I started this blog is to share a particular design of a demo environment – a special way to install software – that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we use it for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development!

    I am hoping this blog will be of interest to two groups of readers: users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and reconfigure them to their needs, and people who need to build environments for demonstration, testing, training or development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud – it was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first since we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.

    I met a number of platform administrators at Oracle – DBAs, middleware admins, SOA admins. The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with about as much certainty as Heisenberg would allow. What happened to the age-old IT disciplines around administration, compliance and configuration management?

    VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision a full-fledged application as a VM Assembly. At this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel came down from weeks to hours was gratifying indeed).

    However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:

    - Project owners requiring an EC2-like VM instance for their projects
    - Admins needing the same for SPARC/Solaris
    - DBAs requiring dedicated databases for new projects
    - APEX developers needing just a ready-to-consume schema as a service
    - Java developers looking for a runtime platform
    - QA engineers needing a fast clone of their production environment

    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production: you shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it hardly matters. DBAs seem to be the worst affected of the lot: it seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between an OLTP application and a virtual desktop! Sad indeed.

    When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly-based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one could think that we are trying to be the "renaissance man"! Well, I might have digested that had it not been for the various personas I described above.

    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again, and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota and security.
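    To give those tenancy aspects some shape – as a generic illustration only, not a description of how Enterprise Manager 12c implements them – here is a toy Python sketch of a placement decision that enforces a tenant's quota and then picks the least-loaded host in the tenant's allowed zone; all names and numbers are made up.

    ```python
    # A toy illustration of two tenancy concerns named above: quota
    # enforcement and service placement. A generic sketch only – not
    # Enterprise Manager's actual placement logic.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        zone: str          # placement zone, e.g. "gold"
        capacity_gb: int   # total memory on the host
        used_gb: int = 0   # memory already allocated

        def free_gb(self) -> int:
            return self.capacity_gb - self.used_gb

    @dataclass
    class Tenant:
        name: str
        zone: str          # zone this tenant may place into
        quota_gb: int      # total memory the tenant may consume
        allocated_gb: int = 0

    def place(tenant: Tenant, request_gb: int, hosts: list[Host]) -> Host:
        """Enforce the tenant's quota, then pick the emptiest eligible host."""
        if tenant.allocated_gb + request_gb > tenant.quota_gb:
            raise RuntimeError(f"{tenant.name}: quota of {tenant.quota_gb} GB exceeded")
        candidates = [h for h in hosts
                      if h.zone == tenant.zone and h.free_gb() >= request_gb]
        if not candidates:
            raise RuntimeError(f"{tenant.name}: no capacity left in zone {tenant.zone}")
        best = max(candidates, key=Host.free_gb)
        best.used_gb += request_gb
        tenant.allocated_gb += request_gb
        return best

    hosts = [Host("vmhost1", "gold", 64), Host("vmhost2", "gold", 64, used_gb=32)]
    qa = Tenant("qa-team", zone="gold", quota_gb=48)
    print(place(qa, 16, hosts).name)  # -> vmhost1, the least-loaded gold host
    ```

    Real resource isolation goes much deeper than this – down to stopping that expensive SQL statement from starving everyone else – but the shape of the decision is the same: check the quota, respect the placement constraints, then allocate.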
    We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base! In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos to get you started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html

    Sudip Datta
    The views expressed here are my own and do not necessarily reflect the views of Oracle.

    Read the article
