Search Results

Search found 148 results on 6 pages for 'maria mateescu'.

Page 2 of 6

  • Oracle OpenWorld 2012

    - by Maria Colgan
    I can't believe it's time for OpenWorld again! Oracle OpenWorld is the largest gathering of Oracle customers, partners, developers, and technology enthusiasts. This year it will take place between September 30th and October 4th in San Francisco. Of course, the Optimizer development group will be there and you will have multiple opportunities to meet the team, in one of our technical sessions, or at the Oracle Database demogrounds. This year the Optimizer team has 2 technical sessions, as well as a booth in the Oracle Database demogrounds. Tuesday, October 2nd at 1:15pm Oracle Optimizer: Harnessing the Power of Optimizer Hints Session CON8455 at Moscone South - room 103 In this session we will discuss in detail how optimizer hints are interpreted, when they should be used, and why they sometimes appear to be ignored. Thursday, October 4th at 12:45pm Oracle Optimizer: An Insider’s View of How the Optimizer Works Session CON8457 at Moscone South - room 104. This session explains how the latest version of the optimizer works and the best ways you can influence its decisions to ensure you get optimal execution every time. It will also include a full history of the Cost Based Optimizer, so make sure you stick around for this one! If you have burning Optimizer or statistics related questions, or if you just want to pick up an Optimizer bumper sticker, you can stop by the Optimizer demo booth. This year we are located in booth 3157, in the Database area of the demogrounds, in Moscone South. Members of the Optimizer development team will be there Monday through Wednesday from 9:45 am until 6 pm. The full Oracle OpenWorld catalog is online, or you can browse speakers by name. So start planning your trip today! +Maria Colgan

    Read the article

  • How to remove a package entirely?

    - by maria
    Hi I'm quite new to Linux, but before using it I was hearing that Windows programs, after uninstallation, leaves a lot of remains on the hard disc, and Linux removes all. I'm using Ubuntu 10.04. To uninstall packages I'm using sudo apt-get autoremove application_name or sudo aptitude purge application_name. Recently I have installed texlive-full and for some reasons I had quickly to uninstall it. After I've entered to terminal updatedb, then locate *texlive* and the output was very long: maria@marysia-ubuntu:~$ locate *texlive* /etc/texmf/fmt.d/10texlive-base.cnf /etc/texmf/fmt.d/10texlive-formats-extra.cnf /etc/texmf/fmt.d/10texlive-lang-cyrillic.cnf /etc/texmf/fmt.d/10texlive-lang-czechslovak.cnf /etc/texmf/fmt.d/10texlive-lang-polish.cnf /etc/texmf/fmt.d/10texlive-latex-base.cnf /etc/texmf/fmt.d/10texlive-math-extra.cnf /etc/texmf/fmt.d/10texlive-metapost.cnf /etc/texmf/fmt.d/10texlive-omega.cnf /etc/texmf/fmt.d/10texlive-xetex.cnf /etc/texmf/hyphen.d/09texlive-base.cnf /etc/texmf/hyphen.d/10texlive-lang-arabic.cnf /etc/texmf/hyphen.d/10texlive-lang-croatian.cnf /etc/texmf/hyphen.d/10texlive-lang-cyrillic.cnf /etc/texmf/hyphen.d/10texlive-lang-czechslovak.cnf /etc/texmf/hyphen.d/10texlive-lang-danish.cnf /etc/texmf/hyphen.d/10texlive-lang-dutch.cnf /etc/texmf/hyphen.d/10texlive-lang-finnish.cnf /etc/texmf/hyphen.d/10texlive-lang-french.cnf /etc/texmf/hyphen.d/10texlive-lang-german.cnf /etc/texmf/hyphen.d/10texlive-lang-greek.cnf /etc/texmf/hyphen.d/10texlive-lang-hungarian.cnf /etc/texmf/hyphen.d/10texlive-lang-indic.cnf /etc/texmf/hyphen.d/10texlive-lang-italian.cnf /etc/texmf/hyphen.d/10texlive-lang-latin.cnf /etc/texmf/hyphen.d/10texlive-lang-latvian.cnf /etc/texmf/hyphen.d/10texlive-lang-lithuanian.cnf /etc/texmf/hyphen.d/10texlive-lang-mongolian.cnf /etc/texmf/hyphen.d/10texlive-lang-norwegian.cnf /etc/texmf/hyphen.d/10texlive-lang-other.cnf /etc/texmf/hyphen.d/10texlive-lang-polish.cnf /etc/texmf/hyphen.d/10texlive-lang-portuguese.cnf /etc/texmf/hyphen.d/10texlive-lang-spanish.cnf /etc/texmf/hyphen.d/10texlive-lang-swedish.cnf /etc/texmf/hyphen.d/10texlive-lang-ukenglish.cnf /etc/texmf/updmap.d/10texlive-base.cfg /etc/texmf/updmap.d/10texlive-fonts-extra.cfg /etc/texmf/updmap.d/10texlive-fonts-recommended.cfg /etc/texmf/updmap.d/10texlive-games.cfg /etc/texmf/updmap.d/10texlive-lang-african.cfg /etc/texmf/updmap.d/10texlive-lang-arabic.cfg /etc/texmf/updmap.d/10texlive-lang-cyrillic.cfg /etc/texmf/updmap.d/10texlive-lang-czechslovak.cfg /etc/texmf/updmap.d/10texlive-lang-french.cfg /etc/texmf/updmap.d/10texlive-lang-greek.cfg /etc/texmf/updmap.d/10texlive-lang-hebrew.cfg /etc/texmf/updmap.d/10texlive-lang-indic.cfg /etc/texmf/updmap.d/10texlive-lang-lithuanian.cfg /etc/texmf/updmap.d/10texlive-lang-mongolian.cfg /etc/texmf/updmap.d/10texlive-lang-polish.cfg /etc/texmf/updmap.d/10texlive-lang-vietnamese.cfg /etc/texmf/updmap.d/10texlive-latex-base.cfg /etc/texmf/updmap.d/10texlive-latex-extra.cfg /etc/texmf/updmap.d/10texlive-math-extra.cfg /etc/texmf/updmap.d/10texlive-omega.cfg /etc/texmf/updmap.d/10texlive-pictures.cfg /etc/texmf/updmap.d/10texlive-science.cfg /var/cache/apt/archives/texlive-base_2009-7_all.deb /var/cache/apt/archives/texlive-bibtex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-binaries_2009-5ubuntu0.2_i386.deb /var/cache/apt/archives/texlive-common_2009-7_all.deb /var/cache/apt/archives/texlive-doc-base_2009-2_all.deb /var/cache/apt/archives/texlive-doc-bg_2009-2_all.deb /var/cache/apt/archives/texlive-doc-cs+sk_2009-2_all.deb 
/var/cache/apt/archives/texlive-doc-de_2009-2_all.deb /var/cache/apt/archives/texlive-doc-en_2009-2_all.deb /var/cache/apt/archives/texlive-doc-es_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-it_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ja_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ko_2009-2_all.deb /var/cache/apt/archives/texlive-doc-mn_2009-2_all.deb /var/cache/apt/archives/texlive-doc-nl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pt_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ru_2009-2_all.deb /var/cache/apt/archives/texlive-doc-si_2009-2_all.deb /var/cache/apt/archives/texlive-doc-th_2009-2_all.deb /var/cache/apt/archives/texlive-doc-tr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-uk_2009-2_all.deb /var/cache/apt/archives/texlive-doc-vi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-zh_2009-2_all.deb /var/cache/apt/archives/texlive-extra-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-font-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-fonts-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-formats-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-full_2009-7_all.deb /var/cache/apt/archives/texlive-games_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-humanities-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-humanities_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-lang-african_2009-3_all.deb /var/cache/apt/archives/texlive-lang-arabic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-armenian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-croatian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-cyrillic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-czechslovak_2009-3_all.deb /var/cache/apt/archives/texlive-lang-danish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-dutch_2009-3_all.deb /var/cache/apt/archives/texlive-lang-finnish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-french_2009-3_all.deb /var/cache/apt/archives/texlive-lang-german_2009-3_all.deb /var/cache/apt/archives/texlive-lang-greek_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hebrew_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hungarian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-indic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-italian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latin_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latvian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-lithuanian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-mongolian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-norwegian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-other_2009-3_all.deb /var/cache/apt/archives/texlive-lang-polish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-portuguese_2009-3_all.deb /var/cache/apt/archives/texlive-lang-spanish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-swedish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-tibetan_2009-3_all.deb 
/var/cache/apt/archives/texlive-lang-ukenglish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-vietnamese_2009-3_all.deb /var/cache/apt/archives/texlive-latex-base-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-base_2009-7_all.deb /var/cache/apt/archives/texlive-latex-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-latex3_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-luatex_2009-7_all.deb /var/cache/apt/archives/texlive-math-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-metapost-doc_2009-7_all.deb /var/cache/apt/archives/texlive-metapost_2009-7_all.deb /var/cache/apt/archives/texlive-music_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-omega_2009-7_all.deb /var/cache/apt/archives/texlive-pictures-doc_2009-7_all.deb /var/cache/apt/archives/texlive-pictures_2009-7_all.deb /var/cache/apt/archives/texlive-plain-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-xetex_2009-7_all.deb /var/cache/apt/archives/texlive_2009-7_all.deb /var/lib/dpkg/info/texlive-base.list /var/lib/dpkg/info/texlive-base.postrm /var/lib/dpkg/info/texlive-bibtex-extra.list /var/lib/dpkg/info/texlive-bibtex-extra.postrm /var/lib/dpkg/info/texlive-doc-base.list /var/lib/dpkg/info/texlive-doc-base.postrm /var/lib/dpkg/info/texlive-doc-bg.list /var/lib/dpkg/info/texlive-doc-bg.postrm /var/lib/dpkg/info/texlive-doc-cs+sk.list /var/lib/dpkg/info/texlive-doc-cs+sk.postrm /var/lib/dpkg/info/texlive-doc-de.list /var/lib/dpkg/info/texlive-doc-de.postrm /var/lib/dpkg/info/texlive-doc-en.list /var/lib/dpkg/info/texlive-doc-en.postrm /var/lib/dpkg/info/texlive-doc-es.list /var/lib/dpkg/info/texlive-doc-es.postrm /var/lib/dpkg/info/texlive-doc-fi.list /var/lib/dpkg/info/texlive-doc-fi.postrm /var/lib/dpkg/info/texlive-doc-fr.list /var/lib/dpkg/info/texlive-doc-fr.postrm /var/lib/dpkg/info/texlive-doc-it.list /var/lib/dpkg/info/texlive-doc-it.postrm /var/lib/dpkg/info/texlive-doc-ja.list /var/lib/dpkg/info/texlive-doc-ja.postrm /var/lib/dpkg/info/texlive-doc-ko.list /var/lib/dpkg/info/texlive-doc-ko.postrm /var/lib/dpkg/info/texlive-doc-mn.list /var/lib/dpkg/info/texlive-doc-mn.postrm /var/lib/dpkg/info/texlive-doc-nl.list /var/lib/dpkg/info/texlive-doc-nl.postrm /var/lib/dpkg/info/texlive-doc-pl.list /var/lib/dpkg/info/texlive-doc-pl.postrm /var/lib/dpkg/info/texlive-doc-pt.list /var/lib/dpkg/info/texlive-doc-pt.postrm /var/lib/dpkg/info/texlive-doc-ru.list /var/lib/dpkg/info/texlive-doc-ru.postrm /var/lib/dpkg/info/texlive-doc-si.list /var/lib/dpkg/info/texlive-doc-si.postrm /var/lib/dpkg/info/texlive-doc-th.list /var/lib/dpkg/info/texlive-doc-th.postrm /var/lib/dpkg/info/texlive-doc-tr.list /var/lib/dpkg/info/texlive-doc-tr.postrm /var/lib/dpkg/info/texlive-doc-uk.list /var/lib/dpkg/info/texlive-doc-uk.postrm /var/lib/dpkg/info/texlive-doc-vi.list /var/lib/dpkg/info/texlive-doc-vi.postrm /var/lib/dpkg/info/texlive-doc-zh.list /var/lib/dpkg/info/texlive-doc-zh.postrm 
/var/lib/dpkg/info/texlive-extra-utils.list /var/lib/dpkg/info/texlive-extra-utils.postrm /var/lib/dpkg/info/texlive-font-utils.list /var/lib/dpkg/info/texlive-font-utils.postrm /var/lib/dpkg/info/texlive-fonts-extra-doc.list /var/lib/dpkg/info/texlive-fonts-extra-doc.postrm /var/lib/dpkg/info/texlive-fonts-extra.list /var/lib/dpkg/info/texlive-fonts-extra.postrm /var/lib/dpkg/info/texlive-fonts-recommended-doc.list /var/lib/dpkg/info/texlive-fonts-recommended-doc.postrm /var/lib/dpkg/info/texlive-fonts-recommended.list /var/lib/dpkg/info/texlive-fonts-recommended.postrm /var/lib/dpkg/info/texlive-formats-extra.list /var/lib/dpkg/info/texlive-formats-extra.postrm /var/lib/dpkg/info/texlive-games.list /var/lib/dpkg/info/texlive-games.postrm /var/lib/dpkg/info/texlive-generic-extra.list /var/lib/dpkg/info/texlive-generic-extra.postrm /var/lib/dpkg/info/texlive-generic-recommended.list /var/lib/dpkg/info/texlive-generic-recommended.postrm /var/lib/dpkg/info/texlive-humanities-doc.list /var/lib/dpkg/info/texlive-humanities-doc.postrm /var/lib/dpkg/info/texlive-humanities.list /var/lib/dpkg/info/texlive-humanities.postrm /var/lib/dpkg/info/texlive-lang-african.list /var/lib/dpkg/info/texlive-lang-african.postrm /var/lib/dpkg/info/texlive-lang-arabic.list /var/lib/dpkg/info/texlive-lang-arabic.postrm /var/lib/dpkg/info/texlive-lang-armenian.list /var/lib/dpkg/info/texlive-lang-armenian.postrm /var/lib/dpkg/info/texlive-lang-croatian.list /var/lib/dpkg/info/texlive-lang-croatian.postrm /var/lib/dpkg/info/texlive-lang-cyrillic.list /var/lib/dpkg/info/texlive-lang-cyrillic.postrm /var/lib/dpkg/info/texlive-lang-czechslovak.list /var/lib/dpkg/info/texlive-lang-czechslovak.postrm /var/lib/dpkg/info/texlive-lang-danish.list /var/lib/dpkg/info/texlive-lang-danish.postrm /var/lib/dpkg/info/texlive-lang-dutch.list /var/lib/dpkg/info/texlive-lang-dutch.postrm /var/lib/dpkg/info/texlive-lang-finnish.list /var/lib/dpkg/info/texlive-lang-finnish.postrm /var/lib/dpkg/info/texlive-lang-french.list /var/lib/dpkg/info/texlive-lang-french.postrm /var/lib/dpkg/info/texlive-lang-german.list /var/lib/dpkg/info/texlive-lang-german.postrm /var/lib/dpkg/info/texlive-lang-greek.list /var/lib/dpkg/info/texlive-lang-greek.postrm /var/lib/dpkg/info/texlive-lang-hebrew.list /var/lib/dpkg/info/texlive-lang-hebrew.postrm /var/lib/dpkg/info/texlive-lang-hungarian.list /var/lib/dpkg/info/texlive-lang-hungarian.postrm /var/lib/dpkg/info/texlive-lang-indic.list /var/lib/dpkg/info/texlive-lang-indic.postrm /var/lib/dpkg/info/texlive-lang-italian.list /var/lib/dpkg/info/texlive-lang-italian.postrm /var/lib/dpkg/info/texlive-lang-latin.list /var/lib/dpkg/info/texlive-lang-latin.postrm /var/lib/dpkg/info/texlive-lang-latvian.list /var/lib/dpkg/info/texlive-lang-latvian.postrm /var/lib/dpkg/info/texlive-lang-lithuanian.list /var/lib/dpkg/info/texlive-lang-lithuanian.postrm /var/lib/dpkg/info/texlive-lang-mongolian.list /var/lib/dpkg/info/texlive-lang-mongolian.postrm /var/lib/dpkg/info/texlive-lang-norwegian.list /var/lib/dpkg/info/texlive-lang-norwegian.postrm /var/lib/dpkg/info/texlive-lang-other.list /var/lib/dpkg/info/texlive-lang-other.postrm /var/lib/dpkg/info/texlive-lang-polish.list /var/lib/dpkg/info/texlive-lang-polish.postrm /var/lib/dpkg/info/texlive-lang-portuguese.list /var/lib/dpkg/info/texlive-lang-portuguese.postrm /var/lib/dpkg/info/texlive-lang-spanish.list /var/lib/dpkg/info/texlive-lang-spanish.postrm /var/lib/dpkg/info/texlive-lang-swedish.list /var/lib/dpkg/info/texlive-lang-swedish.postrm 
/var/lib/dpkg/info/texlive-lang-tibetan.list /var/lib/dpkg/info/texlive-lang-tibetan.postrm /var/lib/dpkg/info/texlive-lang-ukenglish.list /var/lib/dpkg/info/texlive-lang-ukenglish.postrm /var/lib/dpkg/info/texlive-lang-vietnamese.list /var/lib/dpkg/info/texlive-lang-vietnamese.postrm /var/lib/dpkg/info/texlive-latex-base-doc.list /var/lib/dpkg/info/texlive-latex-base-doc.postrm /var/lib/dpkg/info/texlive-latex-base.list /var/lib/dpkg/info/texlive-latex-base.postrm /var/lib/dpkg/info/texlive-latex-extra-doc.list /var/lib/dpkg/info/texlive-latex-extra-doc.postrm /var/lib/dpkg/info/texlive-latex-extra.list /var/lib/dpkg/info/texlive-latex-extra.postrm /var/lib/dpkg/info/texlive-latex-recommended-doc.list /var/lib/dpkg/info/texlive-latex-recommended-doc.postrm /var/lib/dpkg/info/texlive-latex-recommended.list /var/lib/dpkg/info/texlive-latex-recommended.postrm /var/lib/dpkg/info/texlive-latex3.list /var/lib/dpkg/info/texlive-latex3.postrm /var/lib/dpkg/info/texlive-luatex.list /var/lib/dpkg/info/texlive-luatex.postrm /var/lib/dpkg/info/texlive-math-extra.list /var/lib/dpkg/info/texlive-math-extra.postrm /var/lib/dpkg/info/texlive-metapost-doc.list /var/lib/dpkg/info/texlive-metapost-doc.postrm /var/lib/dpkg/info/texlive-metapost.list /var/lib/dpkg/info/texlive-metapost.postrm /var/lib/dpkg/info/texlive-music.list /var/lib/dpkg/info/texlive-music.postrm /var/lib/dpkg/info/texlive-omega.list /var/lib/dpkg/info/texlive-omega.postrm /var/lib/dpkg/info/texlive-pictures-doc.list /var/lib/dpkg/info/texlive-pictures-doc.postrm /var/lib/dpkg/info/texlive-pictures.list /var/lib/dpkg/info/texlive-pictures.postrm /var/lib/dpkg/info/texlive-plain-extra.list /var/lib/dpkg/info/texlive-plain-extra.postrm /var/lib/dpkg/info/texlive-pstricks-doc.list /var/lib/dpkg/info/texlive-pstricks-doc.postrm /var/lib/dpkg/info/texlive-pstricks.list /var/lib/dpkg/info/texlive-pstricks.postrm /var/lib/dpkg/info/texlive-publishers-doc.list /var/lib/dpkg/info/texlive-publishers-doc.postrm /var/lib/dpkg/info/texlive-publishers.list /var/lib/dpkg/info/texlive-publishers.postrm /var/lib/dpkg/info/texlive-science-doc.list /var/lib/dpkg/info/texlive-science-doc.postrm /var/lib/dpkg/info/texlive-science.list /var/lib/dpkg/info/texlive-science.postrm /var/lib/dpkg/info/texlive-xetex.list /var/lib/dpkg/info/texlive-xetex.postrm maria@marysia-ubuntu:~$ I've used sudo apt-get autoclean without any change. I've installed deborphan and it showed nothing (maybe I've used it in wrong way: just entered command deborphan). Am I doing something wrong or I was told something which is not true? I would like to know two things: how to remove packages (if I'm doing it in wrong way) and how to clean hard disc from remains of all packages I've uninstalled till now (even if I don't remember what it was exactly). I have Ubuntu Tweak installed but I don't know how to use it and I think I prefere terminal commnands. Thanks

    Read the article

  • Recording Topics manually and automatically

    - by maria.cozzolino(at)oracle.com
    When you are recording UPK topics, the default mode for recording is manual recording, where you tell the system when to record each screen shot. This mode allows you to take the exact screen shot you need. However, it does get a bit tedious when you are recording long topics, especially if you forget to take a few screen shots. In UPK 3.5, a new version of recording was introduced - Automatic Recording. It was designed to simplify the recording process by automatically capturing screen shots as you perform your transaction. If you haven't experimented with Automatic Recording, I'd recommend you give it a try - it might make your recording life easier. If you are recording with sound, you can also narrate your topic while recording it. To turn on Automatic Recording: 1. In Tools/Options, there are two recorder tabs. The first tab, under content defaults, includes settings that you may want to share between developers, like whether keyboard shortcuts are automatically captured. 2. The second tab is the one that contains the personal preferences, like screen shot capture key and whether to record automatically or manually. On this tab, choose the option for Automatic Recording. 3. Save the settings. Note that this setting will NOT impact content defaults; this is for your user only. When you launch the recorder, you will notice a slightly different message with guidance on how to start and stop automatic recording. Once you start recording, the recorder window is hidden until the end of the recording session to allow you to capture your transaction. In the task tray, there is a series of icons that let you know that you are capturing content. You can pause the recording, as well as set and view your sound levels if you are using sound. A camera appears during each screen capture to help you know when the system is capturing a screen shot, and a context indicator appears to show the recognition. With automatic recording, you can let the system capture the necessary screen shots. It may provide a more natural recording experience, and is probably easier for the untrained developer. On the other hand, you have a bit more control with manual recording on which screen shot appears, but it also means you have to remember to capture the screen shot. :) We'd be interested in hearing which type of recording you do, and any rationale on why you made that choice. Please comment and let us know. --Maria Cozzolino, Manager of UPK Software Requirements and UI Design

    Read the article

  • "Untrusted packages could compromise your system's security." appears while trying to install anything

    - by maria
    Hi, I've freshly installed Ubuntu 10.04 on a new computer. I'm trying to install the applications I need on it (my old computer is broken and I have to send it in for service). I've managed to install texlive, but since then I can't install anything else. All the software I want is software I successfully installed on my old computer (with the same version of Ubuntu), so I don't understand why the terminal says (sorry, the terminal talks half English, half Polish, but I hope it's enough): maria@marysia-ubuntu:~$ sudo aptitude install emacs Czytanie list pakietów... Gotowe Budowanie drzewa zaleznosci Odczyt informacji o stanie... Gotowe Reading extended state information Initializing package states... Gotowe The following NEW packages will be installed: emacs emacs23{a} emacs23-bin-common{a} emacs23-common{a} emacsen-common{a} 0 packages upgraded, 5 newly installed, 0 to remove and 0 not upgraded. Need to get 23,9MB of archives. After unpacking 73,8MB will be used. Do you want to continue? [Y/n/?] Y WARNING: untrusted versions of the following packages will be installed! Untrusted packages could compromise your system's security. You should only proceed with the installation if you are certain that this is what you want to do. emacs emacs23-bin-common emacsen-common emacs23-common emacs23 Do you want to ignore this warning and proceed anyway? To continue, enter "Yes"; to abort, enter "No" I was trying to install other editors as well, with the same result. Since I decided I could be certain that the packages I wanted to install were secure, I finally entered "Yes". The installation ended successfully, but the editor doesn't understand any .tex file (the .tex files are definitely fine): this is pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./Szarfi.tex ! Undefined control sequence. l.2 \documentclass {book} ? What's more, I've realised that in the Synaptic Package Manager there is no package which would be marked as supported by Canonical... Any tips? Thanks in advance

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task. Basic data: • You can print a listing of all the items in the outline. • With your outline open, choose File/Print... • Choose the "Save document as" command on the right, and choose Excel (or xlsx). • HINT: If you have not expanded your entire outline, it's faster to use the commands in Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview. • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view. Advanced data: If you desire a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style: 1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded. 2. Open the copy of the columnspec.xml file in your favorite XML editor - I use notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.) 3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns. 4. You will see a series of sections like this: 5. Change the value for "col export" to "yes". This will include the column in the Excel file. 6. If desired, change the value for "Play_ModesColHeader" to be whatever name you wish to appear in the Excel column heading. 7. Save the columnspec file. 8. Save the publishing style package. Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI

    Read the article

  • Lies, damned lies, and statistics Part 2

    - by Maria Colgan
    There was huge interest in our OOW session last year on Managing Optimizer Statistics. It seems statistics and the maintenance of them continues to baffle people. In order to help dispel the mysteries surrounding statistics management we have created a two-part white paper series on Optimizer statistics. Part one of this series was released in November last year and describes in detail, with worked examples, the different concepts of Optimizer statistics. Today we have published part two of the series, which focuses on the best practices for gathering statistics, and examines specific use cases, including the fears that surround histograms and statistics management of volatile tables like Global Temporary Tables. Here is a quick look at the Introduction and the start of the paper. You can find the full paper here. Happy Reading! Introduction The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the Optimizer to accurately determine the cost for an execution plan it must have information about all of the objects (tables and indexes) accessed in the SQL statement as well as information about the system on which the SQL statement will be run. This necessary information is commonly referred to as Optimizer statistics. Understanding and managing Optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance. This whitepaper is the second of a two-part series on Optimizer statistics. The first part of this series, Understanding Optimizer Statistics, focuses on the concepts of statistics and will be referenced several times in this paper as a source of additional information. This paper will discuss in detail when and how to gather statistics for the most common scenarios seen in an Oracle Database. The topics are · How to gather statistics · When to gather statistics · Improving the efficiency of gathering statistics · When not to gather statistics · Gathering other types of statistics How to gather statistics The preferred method for gathering statistics in Oracle is to use the supplied automatic statistics-gathering job. Automatic statistics gathering job The job collects statistics for all database objects that are missing statistics or have stale statistics by running an Oracle AutoTask task during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that the objects that most need updated statistics are processed first. The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREF procedures. 
Parameter values should be changed at the smallest scope possible, ideally on a per-object basis. You can find the full paper here. Happy Reading! +Maria Colgan
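
    The paper's advice above about changing DBMS_STATS defaults at the smallest possible scope can be illustrated with a short sketch. This is a minimal example under assumed names (the SH.SALES table and the METHOD_OPT preference are used purely for illustration), not an excerpt from the paper itself:

      -- Change a statistics-gathering preference at table scope only,
      -- then gather statistics using the (now customized) defaults.
      BEGIN
        DBMS_STATS.SET_TABLE_PREFS(
          ownname => 'SH',
          tabname => 'SALES',
          pname   => 'METHOD_OPT',
          pvalue  => 'FOR ALL COLUMNS SIZE AUTO');

        DBMS_STATS.GATHER_TABLE_STATS(
          ownname => 'SH',
          tabname => 'SALES');
      END;
      /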

    Read the article

  • Customizing UPK outputs (Part 1)

    - by [email protected]
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization: Logo - Within the developer, you can change the logo for all outputs at one time. Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding. Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards. I'll discuss the first one today, and we'll cover the others in subsequent blogs. Before you begin: If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder. Make a copy of the current styles. This recommendation is for backup purposes. If something goes wrong, you will have a way to recover. Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact. To update the logos in all outputs: From the Tools Menu, choose Customize Logo. Select the category if necessary. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles. Choose OK, and the update process begins. It may take a few minutes. Helpful hints: The logo you select is used "as is" - no resizing or cropping occurs during this process. The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer. The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material. If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes. I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments! --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views. Let’s take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and the CUST_INCOME_LEVEL (55,459) columns show less than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table 17,428 rows have a NULL as the value for the CUST_MARITAL_STATUS column (17428+38072 = 55500), while 41 rows have a NULL value for the CUST_INCOME_LEVEL column (41+55459 = 55500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm will use all non-NULL values when gathering basic table and column level statistics. Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, but if we look at the column statistics we see that it is the same as in the previous case except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics. AUTO_SAMPLE_SIZE still uses all of the rows in the table, minus the NULL rows, to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows) as these columns contained only non-null values. However, if one or more of the columns that require a histogram have null values then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the difference between the number of nulls in the columns varies greatly, we may create multiple samples, one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column. +Maria Colgan
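
    A minimal sketch of the checks described in this post, assuming you are connected as the SH sample schema owner; the queries only show where the table-level and column-level sample sizes discussed above can be observed:

      -- Gather statistics with the default AUTO_SAMPLE_SIZE.
      BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
          ownname          => 'SH',
          tabname          => 'CUSTOMERS',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
      END;
      /

      -- Table-level sample size (expected to equal the full row count).
      SELECT table_name, num_rows, sample_size
      FROM   user_tables
      WHERE  table_name = 'CUSTOMERS';

      -- Column-level sample sizes and NULL counts, which explain the smaller
      -- values reported for columns that contain NULLs.
      SELECT column_name, sample_size, num_nulls, histogram
      FROM   user_tab_col_statistics
      WHERE  table_name = 'CUSTOMERS'
      ORDER BY column_name;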

    Read the article

  • Ubuntu upgrade process failed

    - by Spin0us
    I tried to dist-upgrade my ubuntu server on my percona cluster but it failed with this message The following packages have unmet dependencies: libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33a+maria-1~precise) but it is not installable And here is the package listing # dpkg --list | grep -E 'percona|mysql' ii libdbd-mysql-perl 4.020-1build2 Perl5 database interface to the MySQL database iU libmysqlclient18 5.5.33a+maria-1~precise Virtual package to satisfy external depends ii mariadb-common 5.5.33a+maria-1~precise MariaDB database common files (e.g. /etc/mysql/conf.d/mariadb.cnf) ii percona-xtrabackup 2.1.5-680-1.precise Open source backup tool for InnoDB and XtraDB ii percona-xtradb-cluster-client-5.5 5.5.31-23.7.5-438.precise Percona Server database client binaries ii percona-xtradb-cluster-common-5.5 5.5.33-23.7.6-496.precise Percona Server database common files (e.g. /etc/mysql/my.cnf) ii percona-xtradb-cluster-galera-2.x 157.precise Galera components of Percona XtraDB Cluster ii percona-xtradb-cluster-server-5.5 5.5.31-23.7.5-438.precise Percona Server database server binaries ii php5-mysql 5.3.10-1ubuntu3.8 MySQL module for php5 During the install of the server, mariadb and galera cluster have first been installed. Then removed to be replaced by percona XtraDBCluster. So i think this is the source of the problem. But how can i resolve this without reinstalling all ? UPDATE 1 # apt-cache policy libmariadbclient18 libmariadbclient18: Installed: (none) Candidate: (none) Version table: 5.5.32+maria-1~precise 0 100 /var/lib/dpkg/status

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?" SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords "IN MEMORY" in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice column. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than the same query in the buffer cache where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min, max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. 
Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to only scan the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
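
    For readers who want to reproduce the session-statistics check described above, here is a minimal sketch; the LINEORDER table and the statistic names come from the post, while the exact values reported will of course differ from system to system:

      -- Mark the table for the IM column store (population happens in the background).
      ALTER TABLE lineorder INMEMORY;

      -- Run the analytic query.
      SELECT MAX(lo_ordtotalprice) AS most_expensive_order
      FROM   lineorder
      WHERE  lo_shipmode = 5;

      -- Check the session-level In-Memory statistics mentioned in the post.
      SELECT n.display_name, s.value
      FROM   v$mystat s
             JOIN v$statname n ON n.statistic# = s.statistic#
      WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');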

    Read the article

  • Customizing UPK outputs (Part 2 - Player)

    - by [email protected]
    There are a few things that can be done to give the Player output a personalized look to match your corporate branding. In my previous post, I talked about changing the logo. In addition to the logo, you can change the graphic in the heading, button colors, border colors and many other items. Prior to making any customizations, I strongly recommend making a copy of the existing Player style. This will give you a backup in case things go wrong. I'd also recommend that you create your own brand. This way, when you install the newest updates from us, your brand will remain intact. Creating your own brand is pretty easy. Make sure you have modify permissions on the publishing styles directory, if you are using a multi-user installation. Under the Publishing/Styles folder, create a new folder with your company name. Copy all the publishing styles from the UPK folder to your newly created folder. Now, when you go through the Publishing wizard, you will have two categories to choose from: the UPK category or your custom category. Now, for updating the Player output. First, the graphic that appears on the right hand side of the Player. If you're using a multi-user installation, check out the player style from your custom brand. Open the player style. Open the img folder. The file named "banner_image.png" represents the graphic that appears on the right hand side of the player. It is currently sized at 425 x 54. Try to keep your graphic about the same size. Rename your graphic file to be "banner_image.png", and drag it into the img folder. Save the package. Check in the package if you are in a multi-user installation. You've just updated the banner heading! Next, let's work on updating some of the other colors in the player. All the customizable areas are located in the skin.css file which is in the root of the Player style. Many of our customers update the colors to match their own theme. You don't have to be a programmer to make these changes, honest. :) To change the colors in the player: Make a copy of the original skin.css file. (This is to make sure you have a working version to revert to, in case something goes wrong.) Open the skin.css file from the Player package. You can edit it using Notepad. Make the desired changes. Save the file. Save the package. Publish to view your new changes. When you open the skin.css, you will see groupings like this: .headerDivbar { height: 21px; background-color: #CDE2FD; } Change the value of the background-color to the color of your choice. Note that you cannot use "red" as a color, but rather you should enter the hexadecimal color code. If you don't know the color code, search the web for "hexadecimal colors" and you'll find many sites to provide the information. Here are a few of the variables that you can update. Heading: .headerDivbar -this changes the color of the banner that appears under the graphic Button colors: .navCellOn - changes the color of the mode buttons when your mouse is hovering on them. .navCellOff - changes the color of the mode buttons when the mouse is not over them Lines: .thorizontal - this is the color of the horizontal lines surrounding the outline .tvertical - this is the color of the vertical lines on the left and right margin in the outline. .tsep - this is the color of the line that separates the outline from the content area Search frame: .tocSearchColor - this is the color of the search area .tocFrameText - this is the background color of the TOC tree. 
Hint: If you want to try out the changes prior to updating the style, you can update the skin.css in some content you've already published for the player (it's located in the css folder of the player package). This way, you can immediately see the changes without going through publishing. Once you're happy with the changes, update the skin.css in player style. Want to customize more? Refer to the "Customizing the Player" section of the Content Development manual for more details on all the options in the skin.css that can be changed, and pictures of what each variable controls. I'd love to see how you've customized the player for your corporate needs. Also, if there are other areas of the player you'd like to modify but have not been able to, let us know. Feel free to share your thoughts in the comments. --Maria Cozzolino, Manager of Requirements & UI Design for UPK

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with "obj_filter_list" still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn't just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTIONS parameter set to 'GATHER STALE'. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The "obj_filter_list" parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has five fields: the schema name or object owner, the object type (i.e., 'TABLE' or 'INDEX'), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcard fields (similar to a wildcard character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object is qualified for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It's probably easier to explain this with an example. I'm using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command. Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; You will get the same result, since you have already specified the schema in gather_schema_stats, so there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := 'sales';. However, if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"'; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here. 
+Maria Colgan
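
    The step-by-step script referenced in the post is not reproduced in this excerpt, so here is a hedged sketch of Step 3 based only on the description above; the five table names are the ones used in the post, and the real script may differ in detail:

      DECLARE
        filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        obj_lst    DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
      BEGIN
        -- One entry per table we want statistics gathered on.
        filter_lst.EXTEND(5);
        filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
        filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
        filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
        filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
        filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

        -- objlist must be supplied in 11.2 for obj_filter_list to work (bug 14539274).
        DBMS_STATS.GATHER_SCHEMA_STATS(
          ownname         => 'SH',
          obj_filter_list => filter_lst,
          objlist         => obj_lst);
      END;
      /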

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go." Is it too much to ask of a quite expensive 3rd party backup tool to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean getting a file at least half the size. After Googling the internet in an attempt to understand what other "sql people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area:  LiteSpeed by Quest SQL Backup by Red Gate SQL Safe by Idera The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered: is it possible that many have been accustomed to using the above tools since SQL 2000 and 2005? This can easily be understood due to the fact that a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture. Medium-sized database. Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times to all three other tools. The 3rd party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price that you pay is very high CPU load and a much longer time. Only SQL Safe by Idera was quite fast at its maximum compression level, but it used 95% CPU on the server for most of the run time. Note that I have used two types of destination storage, SATA (11 disks) and FC (53 disks), and, obviously, on the faster storage got my backup ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium-sized databases? I'm definitely not going to do so.  Very large database As a next phase of this benchmark, I have moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. 
SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU core - in the case above, 8 files for a 2 Quad CPU server. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and the high CPU usage are variables that should be taken into consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line: 3rd party backup tool developers, we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

Benchmark server: SQL Server 2008 R2 SP1, 2 Quad CPU. Database location: NetApp FC 15K Aggregate, 53 discs.

Backup statements: no matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution is also an option; I haven't noticed any impact on the backup performance).

SQL backup (native):
backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

LiteSpeed:
EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1

SQL Backup (Red Gate):
EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

SQL safe:
EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:
RedGate: use the THREADPRIORITY option (values 0 - 6)
LiteSpeed: use @throttle (a percentage, like 70%)
SQL safe: the only thing I have found was the @Threads option.

Yours, Maria

    Read the article

  • Touchpad not working after login in Ubuntu

    - by Maria Mateescu
    At some point my touchpad stopped working on a Lenovo x220 under Ubuntu 11.10, after login. I have found two possible solutions for that online, but neither of them works. First, gconftool-2 --set --type boolean /desktop/gnome/peripherals/touchpad/touchpad_enabled true and the second one, xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 8 0 After looking more carefully into xinput I have realized that xinput list-props "SynPS/2 Synaptics TouchPad" outputs: Device Enabled (132): 0 This field seems to be stuck at zero, because trying to set it back to 1 with: xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 8 1 doesn't seem to have any effect, i.e. I still have: Device Enabled (132): 0 Any ideas? Thank you!

    Read the article

  • Basics of Join Predicate Pushdown in Oracle

    - by Maria Colgan
    Happy New Year to all of our readers! We hope you all had a great holiday season. We start the new year by continuing our series on Optimizer transformations. This time it is the turn of Predicate Pushdown. I would like to thank Rafi Ahmed for the content of this blog. Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins.

Introduction
The join predicate pushdown (JPPD) transformation allows a view to be joined with the index-based nested-loop join method, which may provide a more optimal alternative. In the join predicate pushdown transformation, the view remains a separate query block, but it contains the join predicate, which is pushed down from its containing query block into the view. The view thus becomes correlated and must be evaluated for each row of the outer query block. These pushed-down join predicates, once inside the view, open up new index access paths on the base tables inside the view; this allows the view to be joined with the index-based nested-loop join method, thereby enabling the optimizer to select an efficient execution plan. The join predicate pushdown transformation is not always optimal. The join predicate pushed-down view becomes correlated and it must be evaluated for each outer row; if there is a large number of outer rows, the cost of evaluating the view multiple times may make the nested-loop join suboptimal, and therefore joining the view with the hash or sort-merge join method may be more efficient. The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework. The join predicate pushdown transformation applies to both non-mergeable and mergeable views, to pre-defined and inline views, as well as to views generated internally by the optimizer during various transformations. Join predicate pushdown is currently supported on the following types of views: UNION ALL/UNION views, outer-joined views, anti-joined views, semi-joined views, DISTINCT views, and GROUP-BY views.

Examples
Consider query A, which has an outer-joined view V. The view cannot be merged, as it contains two tables, and the join between these two tables must be performed before the join between the view and the outer table T4.

A: SELECT T4.unique1, V.unique3
   FROM T_4K T4,
        (SELECT T10.unique3, T10.hundred, T10.ten
         FROM T_5K T5, T_10K T10
         WHERE T5.unique3 = T10.unique3) V
   WHERE T4.unique3 = V.hundred(+) AND
         T4.ten = V.ten(+) AND
         T4.thousand = 5;

The following shows the non-default plan for query A generated by disabling join predicate pushdown. When query A undergoes join predicate pushdown, it yields query B. Note that query B is expressed in non-standard SQL and shows an internal representation of the query.

B: SELECT T4.unique1, V.unique3
   FROM T_4K T4,
        (SELECT T10.unique3, T10.hundred, T10.ten
         FROM T_5K T5, T_10K T10
         WHERE T5.unique3 = T10.unique3
         AND T4.unique3 = V.hundred(+)
         AND T4.ten = V.ten(+)) V
   WHERE T4.thousand = 5;

The execution plan for query B is shown below.
In the execution plan BX, note that the keyword 'VIEW PUSHED PREDICATE' indicates that the view has undergone the join predicate pushdown transformation. The join predicates (shown here in red) have been moved into the view V; these join predicates open up index access paths, thereby enabling an index-based nested-loop join of the view. With join predicate pushdown, the cost of query A has come down from 62 to 32. As mentioned earlier, the join predicate pushdown transformation is cost-based, and a join predicate pushed-down plan is selected only when it reduces the overall cost. Consider another example, query C, which contains a view with the UNION ALL set operator.

C: SELECT R.unique1, V.unique3
   FROM T_5K R,
        (SELECT T1.unique3, T2.unique1+T1.unique1
         FROM T_5K T1, T_10K T2
         WHERE T1.unique1 = T2.unique1
         UNION ALL
         SELECT T1.unique3, T2.unique2
         FROM G_4K T1, T_10K T2
         WHERE T1.unique1 = T2.unique1) V
   WHERE R.unique3 = V.unique3 and R.thousand < 1;

The execution plan of query C is shown below. In the above, 'VIEW UNION ALL PUSHED PREDICATE' indicates that the UNION ALL view has undergone the join predicate pushdown transformation. As can be seen, here the join predicate has been replicated and pushed inside every branch of the UNION ALL view. The join predicates (shown here in red) open up index access paths, thereby enabling an index-based nested loop join of the view. Consider query D as an example of join predicate pushdown into a DISTINCT view. We have the following cardinalities of the tables involved in query D: Sales (1,016,271), Customers (50,000), and Costs (787,766).

D: SELECT C.cust_last_name, C.cust_city
   FROM customers C,
        (SELECT DISTINCT S.cust_id
         FROM sales S, costs CT
         WHERE S.prod_id = CT.prod_id and CT.unit_price > 70) V
   WHERE C.cust_state_province = 'CA' and C.cust_id = V.cust_id;

The execution plan of query D is shown below. As shown in XD, when query D undergoes the join predicate pushdown transformation, the expensive DISTINCT operator is removed and the join is converted into a semi-join; this is possible since all the SELECT list items of the view participate in an equi-join with the outer tables. Under similar conditions, when a group-by view undergoes the join predicate pushdown transformation, the expensive group-by operator can also be removed. With the join predicate pushdown transformation, the elapsed time of query D came down from 63 seconds to 5 seconds. Since DISTINCT and group-by views are mergeable views, the cost-based transformation framework also compares the cost of merging the view with that of join predicate pushdown when selecting the most optimal execution plan.

Summary
We have tried to illustrate the basic ideas behind join predicate pushdown on different types of views by showing example queries that are quite simple. Oracle can handle far more complex queries and other types of views not shown here in the examples. Again, many thanks to Rafi Ahmed for the content of this blog post.
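    As a quick follow-on (not part of the original post), one way to see whether JPPD happened on your own system is to explain the query and look for the VIEW PUSHED PREDICATE operation; the PUSH_PRED and NO_PUSH_PRED hints let you compare the two plan shapes. The sketch below reuses the tables from query A and is only an illustration.

        -- Ask the optimizer to push the join predicates into view V and
        -- check the resulting plan shape
        EXPLAIN PLAN FOR
        SELECT /*+ PUSH_PRED(V) */ T4.unique1, V.unique3
        FROM   T_4K T4,
               (SELECT T10.unique3, T10.hundred, T10.ten
                FROM   T_5K T5, T_10K T10
                WHERE  T5.unique3 = T10.unique3) V
        WHERE  T4.unique3 = V.hundred(+)
        AND    T4.ten = V.ten(+)
        AND    T4.thousand = 5;

        -- Look for 'VIEW PUSHED PREDICATE' in the output
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);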

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews with candidates at different stages of the process, due to the fact that we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during the telephone interviews. To help you prepare even better for a telephone interview we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:
    • Divide a piece of paper in 2 by drawing a line. Write on one side of the paper a list of requirements as mentioned in the job description. On the other side list your qualities that fulfill the requirements of the employer. This will help you in answering questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company's business and can ask more in-depth questions.
    • Be prepared for the most used introduction question: "Tell me a bit about yourself". Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of 5 good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this type of question because past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as a reference. Also make sure you have your CV at hand and an overview of your grades.
    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Maria Forney
    Amway is a global leader in the direct sales industry with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional and local analytical research, business planning, performance measurement and assessment. Oracle BI EE is used by 1500 employees across Amway sales, marketing, finance, and supply chain business units as well as Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia. Last week, I spoke with Lead Data Analyst with Amway Global Sales, Dan Arganbright, and IT Manager with the Amway BI Competency Center, Mike Olson, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, 2007, Dan and Mike will discuss their experience building Amway's Distributor Consulting solution, powered by Oracle BI EE. You can find more information here. As background, Amway offers people an opportunity to own their own businesses, and offers consumers exclusive products in health and wellness, beauty and home care. The Amway internal Sales organization is charged with consulting leadership-level Distributors to help them with data insights and ultimately grow their business. Until recently, this was a resource-intensive process of gathering and formatting data. In some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session. Amway began its global BI journey in 2006 and since then the company has migrated from having multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI. Amway has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike's organization has led the charge to align disparate BI organizations into a BI Competency Center. The following diagram highlights the simplicity of the standardized architecture of Amway today. Amway has developed a BI solution, dubbed Distributor Consulting, using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution provides over 40 metrics for Sales staff to deliver data-driven insights on the Distributors and organizations they support. Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization, which demonstrates business empowerment of the technology. Amway is also leveraging the power of BI to drive business growth in all of its markets. A new set of Distributor Segmentation metrics is enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration and the success of CRM campaigns. In the words of Dan and Mike, the addition of Exadata 11 months ago has been "a game changer." Amway has been able to dramatically reduce complexity, improve performance and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to less than 20.
Pulling data for the highest-level distributors or the largest markets in the company can now be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value, business-focused activities.

"The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!" - Sales Development Manager in Latin America

Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday, October 1 at 12:15pm in Moscone West, 2007. I'll see you there!

    Read the article

  • What will Larry Ellison’s first tweet be about?

    - by Maria Sandu
    Oracle CEO Larry Ellison will send his first tweet Wednesday, June 6. He will announce Oracle's plans for new cloud-based software products and computing services. Follow @LarryEllison and find out http://twitter.com/larryellison

    Read the article

  • Not so long ago in a city not so far away by Carlos Martin

    - by Maria Sandu
    This is the story of how the EMEA Presales Center turned an Oracle intern into a trusted technology advisor for both Oracle's Sales and customers. It was the summer of 2011, and I was finishing my Computer Engineering studies as well as my internship at Oracle, when I was offered what could possibly be THE dream job for any young European Computer Engineer. Apart from that, it also seemed like the role was particularly tailored to me as I could leverage almost everything I learned at University and during the internship. And all of it in one of the best cities to live in, not only in my home country but arguably in Europe: Malaga!
    A day at EPC
    As part of the EPC Technology pillar, and later on completely focused on WebCenter, there was no way to describe a normal day on the job as each day had something unique. Some days I was researching documentation in order to elaborate accurate answers for a customer's question within a Request for Information or Proposal (RFI/RFP), other days I was doing heavy programming in order to bring a Proof of Concept (PoC) for a customer to life and, last but not least, some days I presented to the customer via webconference the demo I built for them the past weeks. So as you can see, the role has research, development and presentation - could you ask for more? Well, don't worry because there IS more!
    Internationality
    As the organization's name suggests, EMEA Presales Center, it is the Center of Presales within Europe, the Middle East and Africa, so I got the chance to work with great professionals from all these regions, expanding my network and learning things from one country to apply them to others. In addition to that, the teams based in the Malaga office are comprised of many young professionals hailing mainly from Western and Central European countries (although there are a couple of exceptions!) with very different backgrounds and personalities, which guaranteed many laughs and stories during lunch or coffee breaks (or even while working on projects!). Furthermore, having EPC offices in Bucharest and Bangalore and thanks to today's tele-presence technologies, I was working every day with people from India or Romania as if they were sitting right next to me, and the bonding with them got stronger day by day.
    Career development
    Apart from the research and self-study I've mentioned earlier, one of the EPC's Key Performance Indicators (KPIs) is that 15% of your time is spent on training, so you get lots and lots of training in order to develop both your technical product knowledge and your presentation, negotiation and other soft skills. Sometimes the training is via webcast, sometimes the trainer comes to the office and sometimes, the best times, you get to travel abroad in order to attend a training, which also helps you to further develop your network by meeting face to face with many people you only know from some email or instant messaging interaction. And as the months go by, your skills improving at a very fast pace, your relevance increasing with each new project you successfully deliver, it's only a matter of time (and a bit of self-promoting!) before you get the attention of the manager of a more senior team and are offered the opportunity to take a new step in your professional career. For me it took 2 years to move to my current position, Technology Sales Consultant at the Oracle Direct organization.
During those 2 years I had built a good relationship with the Oracle Direct Spanish sales reps and sales managers, who are also based in the Malaga office. I supported their former Sales Consultant in a couple of presentations and demos, and they were very happy with my overall performance and attitude, so even before the position eventually became vacant, I got a heads-up from them that their current Sales Consultant was going to move to a different position. To me it felt like a natural step; same as when I joined EPC, I had at least 50% of the "homework" already done but wanted to experience that extra 50% to add new product and soft skills to my arsenal. The rest is history: I've been in the role for more than half a year as I'm writing this, have already achieved some important wins, gained a lot of trust and confidence in front of customers and broadened my view of Oracle's Fusion Middleware portfolio. I look back at the 2 years I spent in EPC and think: "boy, I'd recommend that experience to absolutely anyone with the slightest interest in IT; there are so many different things you can do and so many different kinds of roles you can end up taking thanks to the experience gained at EPC".

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions around how they expect incremental statistics to behave in a given scenario. For example, last week we got a question around which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions that had been affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12. And we now have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, when that change will impact the global-level statistics.
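    For reference, a minimal sketch (not from the original post) of how incremental statistics are typically switched on and gathered. The SH schema name below is an assumption; only the ORDERS2 table name comes from the example above.

        -- Enable incremental statistics maintenance for the partitioned table
        BEGIN
          DBMS_STATS.SET_TABLE_PREFS('SH', 'ORDERS2', 'INCREMENTAL', 'TRUE');
        END;
        /

        -- Gather statistics with the default (AUTO) sample size; partitions whose
        -- data has changed are re-scanned and global statistics are aggregated
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS('SH', 'ORDERS2');
        END;
        /

        -- Check which partitions were re-analyzed after the DML
        SELECT partition_name, last_analyzed
        FROM   dba_tab_statistics
        WHERE  owner = 'SH' AND table_name = 'ORDERS2';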

    Read the article

  • Spotlight on a career path: Paul, Business Development Consultant

    - by Maria Sandu
    I came to work for Oracle in November 2012 as a Customer Intelligence Representative and have since been promoted to Business Development Consultant for Commercial Industries in the UK, based in Dublin. My background was primarily in Logistics, working for such companies as Indaver Ireland, Wincanton and P&O. I spent 10 years working in this industry and gained experience in negotiating with customers and suppliers in order to meet the needs of both, monitoring the quality and quantity of goods as well as the efficiency and organisation of the movement and storage of products. I decided to move from my logistics career in 2009 to study Information Technology in D.I.T. Changing my career path was a challenge; however, the lectures at the college helped me significantly in understanding how IT can affect how businesses operate. Following on from college I came to work for Oracle. This also presented challenges, but the training I received and the encouragement from management helped me understand that the same business rules apply no matter what background you come from. I have also learnt that my past experience of working with customers and suppliers in Logistics has helped me understand how to meet customers' needs. Oracle has offered me excellent training such as Sandler Sales Techniques and John Costigan. I continue to get all the training that I need to develop my career. If you're interested in joining the Business Development Group visit http://bit.ly/oracledirectcareers or follow our CareersatOracle Facebook Community!

    Read the article

  • How do I deal with a third party application that has embedded hints that result in a sub-optimal execution plan in my environment?

    - by Maria Colgan
    I have gotten many variations on this question recently as folks begin to upgrade to Oracle Database 11g, and there have been several posts on this blog and on others describing how to use SQL Plan Management (SPM) so that a non-hinted SQL statement can use a plan generated with hints. But what if the hint is supplied in the third party application and is causing performance regressions on your system? You can actually use a very similar technique to the ones shown before, but this time capture the un-hinted plan and have the hinted SQL statement use that plan instead. Below is an example that demonstrates the necessary steps.
    1. We will begin by running the hinted statement.
    2. After examining the execution plan we can see it is suboptimal because of a bad join order.
    3. In order to use SPM to correct the problem we must create a SQL plan baseline for the statement. In order to create a baseline we will need the SQL_ID for the hinted statement. An easy place to get it is V$SQL.
    4. A SQL plan baseline can be created using a SQL_ID and DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE. This will capture the existing plan for this SQL_ID from the shared pool and store it in the SQL plan baseline.
    5. We can check that the SQL plan baseline was created successfully by querying DBA_SQL_PLAN_BASELINES.
    6. When you manually create a SQL plan baseline, the first plan added is automatically accepted and enabled. We know that the hinted plan is a poorly performing plan, so we will disable it using DBMS_SPM.ALTER_SQL_PLAN_BASELINE. Disabling the plan tells the optimizer that this plan is not a good plan; however, since there is no alternative plan at this point, the optimizer will still continue to use this plan until we provide a better one.
    7. Now let's run the statement without the hint.
    8. Looking at the execution plan we can see that the join order is different. The plan without the hint also has a lower cost (3X lower), which indicates it should perform better.
    9. In order to map the un-hinted plan to the hinted SQL statement we need to add the plan to the SQL plan baseline for the hinted statement. We can do this using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE, but we will need the SQL_ID and PLAN_HASH_VALUE of the non-hinted statement, which we can find in V$SQL.
    10. Now we can add the non-hinted plan to the SQL plan baseline of the hinted SQL statement using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE. This time we need to pass a few more arguments. We will use the SQL_ID and PLAN_HASH_VALUE of the non-hinted statement but the SQL_HANDLE of the hinted statement.
    11. The SQL plan baseline for our statement now has two plans. But only the newly added plan (SQL_PLAN_gbpcg3f67pc788a6d8911) is enabled and accepted. This tells the optimizer that this is the plan it should use for this statement. We can confirm that the correct plan (non-hinted) will be selected for the statement from now on by re-executing the hinted statement and checking its execution plan.
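    For reference, here is a rough PL/SQL sketch (not from the original post) of the DBMS_SPM calls used in steps 4, 6 and 10. The sql_id, plan_hash_value, sql_handle and plan_name values are placeholders you would read from V$SQL and DBA_SQL_PLAN_BASELINES.

        DECLARE
          l_plans PLS_INTEGER;
        BEGIN
          -- Step 4: create the baseline from the hinted statement's cursor
          l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                       sql_id => 'abcd1234wxyz9');           -- hinted statement

          -- Step 6: disable the hinted (poorly performing) plan
          l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
                       sql_handle      => 'SQL_xxxxxxxxxxxxxxxx',
                       plan_name       => 'SQL_PLAN_placeholder_hinted',
                       attribute_name  => 'enabled',
                       attribute_value => 'NO');

          -- Step 10: attach the un-hinted plan to the hinted statement's baseline
          l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                       sql_id          => 'efgh5678stuv1',    -- un-hinted statement
                       plan_hash_value => 1234567890,
                       sql_handle      => 'SQL_xxxxxxxxxxxxxxxx');
        END;
        /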

    Read the article

  • Meet Thomas, the Most Innovational person in Oracle Direct EMEA of Q1

    - by Maria Sandu
    Thomas was voted, by his peers, the most Innovational person in Oracle Direct EMEA for Q1, the first quarter of this fiscal year! Thomas, a Business Development Consultant in Oracle Direct's Applications Team, taught himself how to use and leverage the power of social engagement consistent with Oracle's Social Media Policy. From these learnings he provided both his own and other applications teams in Dublin with huge amounts of training and has presented his findings to the teams on more than one occasion. It is important to recognise that this isn't just a great idea... it actually works! The results speak for themselves. Thomas is engaging with customers and prospects via their preferred channel of communication and creating a strong personal social brand. We congratulate Thomas for his efforts in raising Social Media to the next level within the Business Development Group. He put a lot of work into Social Selling, as one of the first within the BDG, and set the example for a new, innovative approach to selling anno 2013. He deserves to be recognized for this. His contribution to social media has been a great inspiration for all Business Development Consultants and Business Relationship Consultants. He knows what he talks about and gets great conversion rates out of his social media campaigns. And he doesn't mind sharing his knowledge with everybody. Great effort in searching for new ways of communication and social selling. Thomas has shown great initiative in leveraging social media and networks (Twitter, LinkedIn) to find new business opportunities in ways not tried before. He has shown great out-of-the-box thinking while addressing new companies and prospects, and has shared those experiences and ideas to help his colleagues use the same approach. This included a presentation, informational emails and a generally helpful attitude from him. He also shared the success stories from his innovational approach. Thomas shows initiative with an innovative and fresh character, truly helping people to try something new, with a focus on selling across channels, and works for the CRM team, which is focused on selling social. We think the way Thomas positions social, by using social, is innovative and inspirational. What better way to tell your clients to do social than by engaging with them on a social platform? We believe that Thomas Brits, always going the extra mile, has been an innovator from the day he walked into Oracle Direct. The way Thomas operates on the work floor, introducing new ideas to find the best opportunities possible, shows he goes the extra mile in coming up with new ideas around how to engage with customers more efficiently, for instance via Social Media. Thomas also organises power hours/days for the team. He is the best!

    Read the article

  • Meet our Interns: Adam and Hanadi

    - by Maria Sandu
    This week, we'd like to introduce you to two of our ECEMEA Interns, Adam and Hanadi. They're based in different countries and are part of different teams; however, they both have the same enthusiasm about being an Intern at Oracle. "Hi! I'm Adam (Bachelor of Accounting Science & CIMA Diploma in Management Accounting), a member of the Oracle Applications Pre-sales team in Johannesburg, South Africa. Joining Oracle has been a truly inspiring experience thus far. My first week at Oracle has been one of insight and learning. I have had the opportunity to meet and interact with industry-leading software solution professionals. Gaining insight into a mammoth multinational company has changed my perception of how things work and has truly opened my eyes to the world of business. Having the privilege of joining the Oracle Graduate Program has afforded me the chance to take advantage of countless training opportunities as well as the chance to learn about Information Technology in a practical manner, which is vital to most businesses in today's modern environment." "Hi! I'm Hanadi, an Oracle 2013 Sales Intern from Saudi Arabia. I received my BSc in Information Technology from King Saud University and immediately after graduating I applied for the internship at Oracle. I thought it was an incredible opportunity and a great way to shift from college life to career life through learning and practicing in an environment with such high standards. At the beginning, I was a bit nervous about joining the serious business world, but once I joined, I found the program very organized and everyone was extremely helpful, which made it easier for us, as interns, to learn faster. If you are a self-motivated, committed person, who has initiative, accepts challenges, has good soft skills and some technical experience, I would definitely advise you to take a chance and apply for the program once you graduate. Best of luck!" Get the latest updates from the ECEMEA Sales and Presales Internship Programme 2013 by following #Oracleinterns on Twitter or visiting the CampusatOracle Facebook Page!

    Read the article

  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
      - Click on me. Choose me. - asked one forgotten feature when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day. - Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't. - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window! - I have all my tools that I'm used to, I have Management Studio, Cmd, PowerShell. They can do anything for me. I don't need additional tools. - I promise you, you will like me. - the thing continued to whine. - All right, show me. - she gave up. It's always this way, she thought sadly - easier to agree than to explain why you don't want to. - Enable me and then think about anything that you never could do through Management Studio and had to use other tools for. - Ok. Google for me the list of greatest features of SQL Server 2012. - Well... I'm not sure... Think about something else. - Ok, here is something easy for you. I want to check if a folder exists or if a file is there. Though, I can easily do this using xp_cmdshell … - This is easy for me. - rejoiced the feature. By the way, having the items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment. - After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner) - after enabling me you can use any command line command in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line to denote that you want to run a cmd command. Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line, use a UNC path instead of a local path. - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers, create, rename and drop folders. You can see the contents of any file anywhere and even start different tools from the same query window. The not so aged and wise DBA was getting interested: - I also want to run different scripts on different servers without changing the connection of the query window. - Sure, sure! Another great feature that SQLCMD mode provides, giving more power to querying. Use ":" for additional commands, like :connect, which allows you to change the connection. - Now imagine you have one script with all your changes, like creating a staging table on the DWH staging server, adding a fact table to the DWH itself and updating stored procedures on the server where the reporting database is located. - Now, give me more challenges! - Script out a list of stored procedures into text files. - You can do it easily by using the :out command, which will write the query results into the specified text file. The output can be the code of a stored procedure or any data. Actually, this is the same as changing the query output to a file instead of the grid.
- Now, take all of the scripts and run them, one by one, on different servers.  - Easily. - Come on... I'm sure that you cannot... - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn't work, although I have no idea why. - Wow. - She was really impressed. - Ok, I'll go and try all of those… - Wait, wait! I know how to google the SQL Server features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC). "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten", thought the DBA while walking through the dark empty parking lot to her lonely car. As someone really wise once said: "It is what we think we know that keeps us from learning. Learn, unlearn and relearn".
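    Putting the pieces of the story together, here is a small hypothetical SQLCMD Mode script; the server names, paths and file names are made up, but every directive (:setvar, !!, :connect, :r, :out) is used just as described above. Note the blank line left between the :r sections, as the story warns.

        :setvar ScriptPath "\\fileserver\deploy"

        -- The !! command runs on YOUR machine, not on the connected server
        !!dir $(ScriptPath)

        :connect StagingServer
        :r $(ScriptPath)\create_staging_table.sql

        :connect DwhServer
        :r $(ScriptPath)\add_fact_table.sql

        :connect ReportServer
        :out C:\temp\proc_definitions.txt
        SELECT OBJECT_DEFINITION(object_id) FROM sys.procedures;
        GO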

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >