Search Results

Search found 24806 results on 993 pages for 'geographic information sy'.


  • How to send e-mail to customers using a quick campaign?

    - by ripperus
    I have a question/problem. I want to send email to my clients using a quick campaign, but each email should contain information meant only for that customer. For example: the first customer, A, should receive login: only_for_A and password: only_for_A; the second customer, B, should receive login: only_for_B and password: only_for_B; and so on. The logins and passwords are in an Excel file. Is there any way to automatically add each customer's login and password to their email?
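    One way to do this kind of per-customer merge outside the quick campaign itself is to read the spreadsheet row by row and send each mail individually. A minimal sketch in Python, assuming a workbook customers.xlsx whose rows hold email, login and password (the file name, column layout, server and addresses are all hypothetical, not taken from the question):

        from email.message import EmailMessage
        import smtplib

        from openpyxl import load_workbook  # third-party: pip install openpyxl

        # Assumed layout: a header row, then one customer per row: email, login, password.
        wb = load_workbook("customers.xlsx")
        ws = wb.active

        with smtplib.SMTP("smtp.example.com", 587) as smtp:   # hypothetical mail server
            smtp.starttls()
            smtp.login("campaign@example.com", "app-password")
            for email_addr, login, password in ws.iter_rows(min_row=2, values_only=True):
                msg = EmailMessage()
                msg["From"] = "campaign@example.com"
                msg["To"] = email_addr
                msg["Subject"] = "Your account details"
                msg.set_content(f"login: {login}\npassword: {password}")
                smtp.send_message(msg)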

    Read the article

  • How to get the level and position of the player from an external program? [on hold]

    - by user3727174
    I want to write a program that needs the current level and position of the player (primarily single player). This should work for potentially every game installed and running on the computer my program is running on. The data I need is basically one integer value for the level (if the game has levels) and three integer values for x, y and optionally z for the player's position. The scale or where the origin lies does not matter, because this information will be interpreted per game; I will use it to look up information in a database created for the game currently running. Currently I'm using C++, but if Java is a better option I'm willing to port my program. My thoughts so far: make a mod for every game that should be supported, get the position/level from there, write this information to disk and read it from my program; tracking mouse/keyboard events and reconstructing the movement won't work. Are there any general APIs for something like this? Any tool to find this data? Or maybe engines that provide APIs to get this data directly from the game?
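    To make the "mod writes to disk, external program reads it" idea concrete, here is a minimal polling sketch; the file name and JSON layout are assumptions for illustration, not part of any game's API, and although the asker works in C++ the same loop ports directly:

        import json
        import time
        from pathlib import Path

        STATE_FILE = Path("player_state.json")  # hypothetical file the game mod keeps updated

        def read_player_state():
            """Return (level, x, y, z) from the mod's file, or None if unavailable."""
            try:
                data = json.loads(STATE_FILE.read_text())
                return data["level"], data["x"], data["y"], data.get("z", 0)
            except (FileNotFoundError, ValueError, KeyError):
                return None  # mod not running yet, or the file is mid-write

        while True:
            state = read_player_state()
            if state is not None:
                level, x, y, z = state
                print(f"level={level} pos=({x}, {y}, {z})")
            time.sleep(0.5)  # simple polling; a file-watcher would avoid the fixed delay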

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late. Getting Started Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program.  If you wish to simply install the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished.  Follow the prompts and install as normal. Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows 3 categories: Disk problems, Acronis backup, and Critical Events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed.  It will keep monitoring your hard drives, and will alert you if there’s a problem. Find Detailed Information About Your Hard Drives Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives have overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that it’s health is good.  However, it is running at 113F, which is over the recommended max of 107F.   The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status.  If the value is within the accepted parameters, it will report OK; otherwise, it will show that has a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second hand computer.  Simply load this program, and you’ll get a better view of how long it’s been in use. Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had been acting flaky already, is routinely overheating even when our other hard drive was running in normal temperature ranges. Monitor Acronis Backups And Critical Errors In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.  
You can access these from the front page, or via the links on the left hand sidebar.  If you have any edition of any Acronis Backup product installed, it will show that it was detected.  Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software was installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software.   If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you’re monitoring yourself. Finally, you can view any detected Critical events from the Critical events tab on the left. Get Emailed When There’s a Problem One of Drive Monitor’s best features is the ability to send you an email whenever there’s a problem.  Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives’ health even when you’re not nearby.  To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to setup your email account. Enter the email address which you wish to receive alerts, and a name for the program.  Then, enter the outgoing mail server settings for your email.  If you have a Gmail account, enter the following information: Outgoing mail server (SMTP): smtp.gmail.com Port: 587 Username and Password: Your gmail address and password Check the Use encryption box, and then select TLS from the encryption options.   It will now send a test message to your email account, so check and make sure it sent ok. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.   Conclusion Whether you’ve got a brand new hard drive or one that’s seen better days, knowing the real health of your it is one of the best ways to be prepared before disaster strikes.  It’s no substitute for regular backups, but can help you avert problems.  Acronis Drive Monitor is a nice tool for this, and although we wish it wasn’t so centered around their backup offerings, we still found it a nice tool. Link Download Acronis Drive Monitor (registration required) Similar Articles Productive Geek Tips Quick Tip: Change Monitor Timeout From Command LineAnalyze and Manage Hard Drive Space with WinDirStatMonitor CPU, Memory, and Disk IO In Windows 7 with Taskbar MetersDefrag Multiple Hard Drives At Once In WindowsFind Your Missing USB Drive on Windows XP TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips HippoRemote Pro 2.2 Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Windows 7’s WordPad is Actually Good Greate Image Viewing and Management with Zoner Photo Studio Free Windows Media Player Plus! – Cool WMP Enhancer Get Your Team’s World Cup Schedule In Google Calendar Backup Drivers With Driver Magician TubeSort: YouTube Playlist Organizer
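    If you want to verify the Gmail settings given above (smtp.gmail.com, port 587, TLS) independently of Drive Monitor, the short Python sketch below sends a test message with the same server, port and encryption options; the address and app password are placeholders you would substitute with your own:

        import smtplib
        from email.message import EmailMessage

        # Same settings the article gives for Gmail: smtp.gmail.com, port 587, TLS.
        msg = EmailMessage()
        msg["From"] = "you@gmail.com"          # placeholder address
        msg["To"] = "you@gmail.com"
        msg["Subject"] = "Drive Monitor alert test"
        msg.set_content("If this arrives, the SMTP settings work.")

        with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
            smtp.starttls()                     # TLS encryption, as selected in Drive Monitor
            smtp.login("you@gmail.com", "your-app-password")
            smtp.send_message(msg)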

    Read the article

  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience  This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy on upcoming Oracle E-Business Suite user experience highlights. You’ll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today’s post is on the user experience in PeopleSoft and PeopleTools. To learn more about what’s ahead, attend PeopleSoft or PeopleTools OpenWorld presentations.This interview is with Jeff Robbins, Senior Director, PeopleSoft Development. Jeff Robbins Q: How would you describe the vision you have for the user experience of PeopleSoft?A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it’s not obvious what they need to do a task, then the UI isn’t working. So the application needs to make it simple for users to find information they need, complete a task, do all the things they are responsible for, and it really helps when the UI just makes sense. Productive – PeopleSoft is a tool used to support people to do their work, and a lot of users are measured by how much work they’re able to get done per hour, per day, etc. The UI needs to help them be as productive as possible, and can’t make them waste time or energy. The UI needs to reflect the type of work necessary for a task -- if it's data entry, the UI needs to assist the user to get information into the system. For analysts, the UI needs help users assess or analyze information in a particular way. Innovative – The concept of the UI being innovative is something we’ve been working on for years. It’s not just that we want to be seen as innovative, the fact is that companies are asking their employees to do more than they’ve ever asked before. More often companies want to roll out processes as employee or manager self-service, where an employee is responsible to review and maintain their own data. So we’ve had to reinvent, and ask,  “How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?”  Our focus on innovation has forced us to design new ways for users to interact with the entire application.Q: How are the UX features you have delivered so far resonating with customers?  A: Resonating very well. We’re hearing tremendous responses from users, managers, decision-makers -- who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home, others are better than they used to be but show us that there’s still room for improvement.A couple innovations really stand out; features that have a significant effect on how users interact with PeopleSoft.First, the deployment of PeopleSoft in a way that’s more like a consumer website with the PeopleSoft Home page and Dashboards.  This new approach is very web-centric, where users feel they’re coming to a website rather than logging into an enterprise application.  There’s lots of information from all around the organization collected in a way that feels very familiar to users. In order to do your job, you can come to this web site rather than having to learn how to log into an application and figure out a complicated menu. 
Companies can host these really rich web sites for employees that are home pages for accessing critical tasks and information. The UI elements of incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the web site and begin a task by simply searching for data: themselves, another employee, a customer record, whatever.  The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems. Q: Are any UX highlights more popular than you expected them to be?  A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel Pivot Table, along with a dynamic visual chart that automatically graphs the data. I wasn’t certain at first how extensively this would be used. It looked like an innovative tool, but it wasn’t clear how it would be incorporated in business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity.  It reflects the amount of analytical thinking customers are asking employees to do. Employees can’t just enter data any more. They must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working. Q: What can you tell us about PeopleSoft’s mobile offerings?A: A lot of customers are finding that mobile is the chief priority in their organization.  They tell us they want their employees to be able to access company information from their mobile devices.  Of course, not everyone has the same requirements, so we’re working to make sure we can help our customers accomplish what they’re trying to do.  We’ve already delivered a number of mobile features.  For instance, PeopleSoft home pages, dashboards and workcenters all work well on an iPad, straight out of the box.  We’ve delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example.  Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device.  While we don’t expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk.  That’s where our strategy is going now.  We plan to unveil a number of new mobile offerings at OpenWorld.  Some will be available then, some shortly after. Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?A: Our next release -- the big thing is PeopleSoft 9.2, and we’ll be talking about the huge amount of work that’s gone into the next versions. A new toolset, 8.53, will be coming, and there’s a lot to talk about there, and the next generation of PeopleSoft 9.2.  We have a ton of new stuff coming.Q: What do you want PeopleSoft customers to know? A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last 4 years, and it’s had interesting effects. One thing is that the application is better, more usable.  We’ve made visible improvements. Another aspect is that in customers’ minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren’t sure where PeopleSoft was going.  
This investment in the UI and overall user experience keeps PeopleSoft current, innovative and fresh.  Customers  are able to take advantage of a lot of new features, even on the older applications, simply by upgrading their PeopleTools. The interest in that ability has been tremendous. Knowing they have a lot of these features available -- right now, that’s pretty huge. There’s been a tremendous amount of positive response, just on the fact that we’re focusing on the user experience. Editor’s note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site.To find out more about these enhancements at Openworld, be sure to check out these sessions: GEN8928     General Session: PeopleSoft Update and Product RoadmapCON9183     PeopleSoft PeopleTools Technology Roadmap CON8932     New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business UserCON9196     PeopleSoft PeopleTools Roadmap: Mobile ApplicationsCON9186     Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the nvidia CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) through sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: A couple of * Starting [...] lines where displayed, but the process hanged. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility. I manually rebooted my laptop, and ever since booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1). I've tried purging and reinstalling both lightdm and gdm, as well as selecting both as the default display managers (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager) through both apt-get and aptitude (that shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I'm getting are the following: After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message: dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_NAME missing dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_PACKAGE missing After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num> but ps -e | grep lightdm gives no output. After trying sudo service gdm start or sudo starg gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else. Other candidate solutions that I'd found on the web included running startx but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following: ~$uname -r 3.0.0-12-generic ~$lsmod | grep -i nvidia nvidia 11713772 0 ~$dmesg | grep -i nvidia [ 8.980041] nvidia: module license 'NVIDIA' taints kernel. 
[ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007) [ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64 [ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011 Also, running aptitude search nvidia gives me the following: p nvidia-173 - NVIDIA binary Xorg driver, kernel module a p nvidia-173-dev - NVIDIA binary Xorg driver development file p nvidia-173-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-173-updates-dev - NVIDIA binary Xorg driver development file p nvidia-96 - NVIDIA binary Xorg driver, kernel module a p nvidia-96-dev - NVIDIA binary Xorg driver development file p nvidia-96-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-96-updates-dev - NVIDIA binary Xorg driver development file p nvidia-cg-toolkit - Cg Toolkit - GPU Shader Authoring Language p nvidia-common - Find obsolete NVIDIA drivers i nvidia-current - NVIDIA binary Xorg driver, kernel module a p nvidia-current-dev - NVIDIA binary Xorg driver development file c nvidia-current-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file i nvidia-settings - Tool of configuring the NVIDIA graphics dr p nvidia-settings-updates - Tool of configuring the NVIDIA graphics dr v nvidia-va-driver - v nvidia-va-driver - I've tried manually installing (sudo aptitude install <package>) packages nvidia-common and nvidia-settings-updates but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log: Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. Writing extended state information... Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic). 
The output of aptitude search linux-headers is as follows: v linux-headers - v linux-headers - v linux-headers-2.6 - i linux-headers-2.6.38-11 - Header files related to Linux kernel versi i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on i A linux-headers-2.6.38-8 - Header files related to Linux kernel versi i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on v linux-headers-3 - v linux-headers-3.0 - v linux-headers-3.0 - i A linux-headers-3.0.0-12 - Header files related to Linux kernel versi p linux-headers-3.0.0-12-generic - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-server - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-virtual - Linux kernel headers for version 3.0.0 on p linux-headers-generic - Generic Linux kernel headers p linux-headers-generic-pae - Generic Linux kernel headers v linux-headers-lbm - v linux-headers-lbm - v linux-headers-lbm-2.6 - v linux-headers-lbm-2.6 - p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo p linux-headers-server - Linux kernel headers on Server Equipment. p linux-headers-virtual - Linux kernel headers for virtual machines @heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference, My xorg.conf file contains the following: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • Complex type support in process flow – XMLTYPE

    - by shawn
        Before OWB 11.2 release, there are only 5 simple data types supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type – XMLTYPE is added in 11.2, in order to support complex data being passed between the process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.     Suppose there is a bookstore that uses XML format orders as shown below (we use the simplest form for the illustration purpose), then we can create a process flow to handle the order, take the order as the input, then extract necessary information, and generate a confirmation email to the customer automatically. <order id=’0001’>     <customer>         <name>Tom</name>         <email>[email protected]</email>     </customer>     <book id=’Java_001’>         <quantity>3</quantity>     </book> </order>     Considering a simple user case here: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we can use an Assign activity to retrieve the email info from the order; after that, we can create an email activity to send the email (Other activities might be added in practical case, but will not be described here). 1) Set XML content value     For testing purpose, we will create a variable to hold the sample order, and then this will be used among the process flow activities. When the variable is of XMLTYPE and the “Literal” value is set the true, the advance editor will be enabled.     Click the “Advance Editor” shown as above, a simple xml editor will popup. The editor has basic features like syntax highlight and check as shown below:     We can also do the basic validation or validation against schema with the editor by selecting the normalized schema. With this, it will be easier to provide the value for XMLTYPE variables. 2) Extract information from XML content     After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable’s literal value is set the false, the advance editor is enabled.     Click the button, the advance editor will popup, as shown below:     The editor is based on the expression builder (which is often used in mapping etc), an XPath lib panel is appended which provides some help information on how to write the XPath. The expression used here is: “XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()”, which uses ‘/order/customer/email/text()’ as the XPath to extract the email info from the XML document.     A variable called “EMAIL_ADDR” is created with String data type to hold the value extracted.     Then we bind the “VARIABLE” parameter of Assign activity to “EMAIL_ADDR” variable, which means the value of the “EMAIL_ADDR” activity will be set to the result of the “VALUE” parameter of Assign activity. 3) Use the extracted information in Email activity     We bind the “TO_ADDRESS” parameter of the email activity to the “EMAIL_ADDR” variable created in above step.     
We can also extract other information from the xml order directly through the expression, for example, we can set the “MESSAGE_BODY” with value “'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()”. This expression will extract the customer name, the quantity and the book id from the order to compose the message body.     To make the email activity work, we need provide some other necessary information, Such as “SMTP_SERVER” (which is the SMTP server used to send the emails, like “mail.bookstore.com”. The default PORT number is set to 25. You need to change the value accordingly), “FROM_ADDRESS” and “SUBJECT”. Then the process flow is ready to go.     After deploying the process flow package, we can simply run the process flow to check if the result is as expected (An email will be sent to the specified email address with proper subject and message body).     Note: In oracle 11g, there is an enhanced security feature - ACL (Access Control List), which restrict the network access within db, so we need to edit the list to allow UTL_SMTP work if you are using oracle 11g. Refer to chapter “Access Control Lists for UTL_TCP/HTTP/SMTP” and “Managing Fine-Grained Access to External Network Services” for more details.       In previous releases, XMLTYPE already exists in other OWB objects, like mapping/transformation etc. When the mapping/transformation is dragged into a process flow, the parameters with XMLTYPE are mapped to STRING. Now with the XMLTYPE support in process flow, the XMLTYPE will map to XMLTYPE in a more natural way, and we can leverage the new data type for the design.
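    For comparison only (outside OWB and the database), the same XPath extractions that the Assign and Email activities perform above can be sketched with Python's standard library; the e-mail address is a placeholder, since the article's sample redacts it, and the attribute quotes are normalized for parsing:

        import xml.etree.ElementTree as ET

        # The sample order from the article, with a placeholder e-mail address.
        order_xml = """
        <order id="0001">
          <customer>
            <name>Tom</name>
            <email>tom@example.com</email>
          </customer>
          <book id="Java_001">
            <quantity>3</quantity>
          </book>
        </order>
        """

        root = ET.fromstring(order_xml)
        email = root.findtext("customer/email")      # same path as /order/customer/email/text()
        name = root.findtext("customer/name")
        quantity = root.findtext("book/quantity")
        book_id = root.find("book").get("id")        # same as /order/book/@id

        body = f"Dear {name}\r\n   You have ordered {quantity} {book_id}"
        print(email)
        print(body)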

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to'; 'service at'; etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers must be right. This is the minimum quality needed to insure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help insure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution takes a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
Oracle, with its deep understanding of application data is the logical choice for managing all your master data within the enterprise whether or not your organization actually runs any Oracle Applications.
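    The separation the article stresses, keeping people and organizations apart from their relationships and accounts, can be reduced to a purely illustrative sketch; the few classes below are hypothetical and greatly simplified, not Oracle's actual party schema:

        from dataclasses import dataclass

        # Parties (people, organizations) are stored once; relationships and
        # accounts are separate records that only reference party ids.

        @dataclass
        class Party:                      # a person or an organization
            party_id: int
            party_type: str               # "PERSON" or "ORGANIZATION"
            name: str

        @dataclass
        class PartyRelationship:          # e.g. "CONTACT_OF", "SUBSIDIARY_OF"
            subject_id: int
            object_id: int
            relationship_type: str

        @dataclass
        class Account:
            account_id: int
            owner_party_id: int
            status: str = "ACTIVE"

        tom = Party(1, "PERSON", "Tom")
        acme = Party(2, "ORGANIZATION", "Acme Corp")
        relationships = [PartyRelationship(tom.party_id, acme.party_id, "CONTACT_OF")]
        accounts = [Account(100, acme.party_id)]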

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog that looks like this: As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40 ), and we wanted to provide this functionality too. This post provides some notes on the things I discovered mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution. So first of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx . It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces Namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx . I've also written a small application to help with switching between development and production MEF assemblies, which you can find on Codeproject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx. The Navigate To interfaces Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces: INavigateToItemProviderFactoryThis is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]  INavigateToItemProvider Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposeable(). INavigateToItemDisplayFactory Your INavigateToItemProvider hands back NavigateToItems to the NavigateTo framework. 
But to give you good control over what appears in the NavigateTo dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay  INavigateToItemDisplay Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked. Carrying out the search The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE – so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProvider gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. The sequence diagram below shows what happens next. Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request – you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, NavigateTo will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search, and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message – that's your cue to abandon any uncompleted searches, and dispose any resources you might be using as part of your search. While you conduct your search invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement to how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols. Displaying the Results The Navigate to framework invokes INavigateToItemDisplayProvider.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement an INavigateTo() method to display a symbol in an editor IDE when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties. 
Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious: Additional informationThe string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information DescriptionItemsYou can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the Categories on the left and the Descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to know disambiguate duplicate names (something there can be a lot of in large COBOL applications). Summary I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples on how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
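    The trickiest part of the contract is the threading: StartSearch() must return immediately, do its work on a background thread, and abandon that work the moment a newer search term arrives, reporting progress through the callback as it goes. The shape of that pattern (just the control flow, not the actual C#/MEF interfaces) can be sketched like this:

        import threading

        class SearchProvider:
            """Sketch of the StartSearch/StopSearch contract described above,
            in Python rather than the real Visual Studio interfaces."""

            def __init__(self, symbols):
                self.symbols = symbols          # stand-in for the provider's symbol dictionary
                self._cancel = threading.Event()
                self._worker = None

            def start_search(self, term, add_item, report_progress):
                self.stop_search()              # abandon any search still running
                self._cancel = threading.Event()
                cancel = self._cancel

                def run():
                    total = len(self.symbols)
                    for i, name in enumerate(self.symbols, start=1):
                        if cancel.is_set():
                            return              # a newer search has superseded this one
                        if term.lower() in name.lower():
                            add_item(name)
                        if i % 16 == 0:         # occasional progress, as in the article
                            report_progress(i, total)

                self._worker = threading.Thread(target=run, daemon=True)
                self._worker.start()            # return at once: the call is asynchronous

            def stop_search(self):
                self._cancel.set()
                if self._worker is not None:
                    self._worker.join()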

    Read the article

  • Adaptive Execution Plans in the 12c database

    - by Liu Maclean(???)
    12c R1 ????SQL??????- Adaptive Execution Plans ????????,???????optimizer ??????(runtime)???????????????, ????????????????????? SQL???????? ????????????, ?????????????????????????????????????????????????????????????adaptive plan ????????????????????????????????????,?????subplan???????????????????? ??????, ???????? ???????????????,?????????, ?????? ???????????????”???”????, ???????????????????buffer ???????  ????????????,?????,??????????????????? ???optimizer ?????????????????????????,?????????????????????????????????????????plan???? ??12C?????????????, ???????????????????,?????? ???????????? ????????????2???: Dynamic Plans????: ???????????????????????;??????,???optimizer??????????subplans??????????????, ???????????????????,?????????????? Reoptimization????: ?Dynamic Plans????,Reoptimization??????????????????????Reoptimization??,?????????????????????????,??reoptimization????? OPTIMIZER_ADAPTIVE_REPORTING_ONLY ???? report-only????????????????TRUE,?????????report-only????,???????????????,??????????????? Dynamic Plans ??????????????,????????????????????????, ?????????????,???????????,????????????????????????????????????????? ?????????????final plan??????????????default plan, ??final plan?default plan???????,????????????? subplan ???????????????,???????????????????????? ??????,???????statistics collector ?buffer???????????statistics collector?????????????????,???????????????????????????? ?????????????????????????????????????????,??????????,?????????????? ???????????,???????buffer???? ???????????????,?????????????????????????????,??????buffer,??????final plan? ????????,???????????????????????,????????????????? ?V$SQL??????IS_RESOLVED_DYNAMIC_PLAN??????????final plan???default plan? ??????dynamic plan ???????SQL PLAN directives?????? declare cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES; begin for z in PLAN_DIRECTIVE_IDS loop DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id); end loop; end; / explain plan for select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; select * from table(dbms_xplan.display()); Plan hash value: 1255158658 www.askmaclean.com ------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 4 | 128 | 7 (0)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS | | 4 | 128 | 7 (0)| 00:00:01 | |* 3 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 (0)| 00:00:01 | |* 4 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK | 1 | | 0 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION | 1 | 20 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1) 4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") alter session set events '10053 trace name context forever,level 1'; OR alter session set events 'trace[SQL_Plan_Directive] disk highest'; select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; ---------------------------------------------------------------+-----------------------------------+ | Id | Operation | 
Name | Rows | Bytes | Cost | Time | ---------------------------------------------------------------+-----------------------------------+ | 0 | SELECT STATEMENT | | | | 7 | | | 1 | HASH JOIN | | 4 | 128 | 7 | 00:00:01 | | 2 | NESTED LOOPS | | | | | | | 3 | NESTED LOOPS | | 4 | 128 | 7 | 00:00:01 | | 4 | STATISTICS COLLECTOR | | | | | | | 5 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 | 00:00:01 | | 6 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK| 1 | | 0 | | | 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | | 8 | TABLE ACCESS FULL | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | ---------------------------------------------------------------+-----------------------------------+ Predicate Information: ---------------------- 1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") 5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1)) 6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") ===================================== SPD: BEGIN context at statement level ===================================== Stmt: ******* UNPARSED QUERY IS ******* SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 ) OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */ "P"."PRODUCT_NAME" "PRODUCT_NAME" FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P" WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID" Objects referenced in the statement PRODUCT_INFORMATION[P] 92194, type = 1 ORDER_ITEMS[O] 92197, type = 1 Objects in the hash table Hash table Object 92197, type = 1, ownerid = 6573730143572393221: No Dynamic Sampling Directives for the object Hash table Object 92194, type = 1, ownerid = 17822962561575639002: No Dynamic Sampling Directives for the object Return code in qosdInitDirCtx: ENBLD =================================== SPD: END context at statement level =================================== ======================================= SPD: BEGIN context at query block level ======================================= Query Block SEL$1 (#0) Return code in qosdSetupDirCtx4QB: NOCTX ===================================== SPD: END context at query block level ===================================== SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267 SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267 SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SKIP_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = 
INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Modified felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300 SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149 SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051 SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO select * from table(dbms_xplan.display_cursor(format=>'report')) ; ????report????adaptive plan Adaptive plan: ------------- This cursor has an adaptive plan, but adaptive plans are enabled for reporting mode only.  The plan that would be executed if adaptive plans were enabled is displayed below. 
------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          |
|*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%';

SQL_ID                     IS
-------------------------- --
SQL_TEXT
--------------------------------------------------------------------------------
6ydj1bn1bng17              Y
select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p
 where o.unit_price=15 and quantity>1 and p.product_id=o.product_id

Note that EXPLAIN PLAN FOR only ever shows the default plan; the final plan is resolved by the optimizer at execution time. When a statement's final plan differs from its default plan, V$SQL.IS_RESOLVED_DYNAMIC_PLAN is set to Y, as shown above.

DBA_SQL_PLAN_DIRECTIVES lists the SQL Plan Directives that the optimizer creates automatically; they are new in 12c. Much as the column usage information produced by DML is periodically persisted to the dictionary, the MMON background process periodically flushes the plan directives held in the SGA to disk. They can also be flushed manually with DBMS_SPD.flush_sql_plan_directive.

select directive_id,type,reason from DBA_SQL_PLAN_DIRECTIVES
/

DIRECTIVE_ID                        TYPE                             REASON
----------------------------------- -------------------------------- -----------------------------
10321283028317893030                DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
4757086536465754886                 DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
16085268038103121260                DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE

SQL> set pages 9999
SQL> set lines 300
SQL> col state format a5
SQL> col subobject_name format a11
SQL> col col_name format a11
SQL> col object_name format a13
SQL> select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason
  2  from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
  3  where d.DIRECTIVE_ID=o.DIRECTIVE_ID
  4  and o.object_name in ('ORDER_ITEMS')
  5  order by d.directive_id;

DIRECTIVE_ID OBJECT_TYPE  OBJECT_NAME   COL_NAME    TYPE                             STATE REASON
------------ ------------ ------------- ----------- -------------------------------- ----- -------------------------------------
  1.8156E+19 COLUMN       ORDER_ITEMS   UNIT_PRICE  DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE
  1.8156E+19 TABLE        ORDER_ITEMS               DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE
  1.8156E+19 COLUMN       ORDER_ITEMS   QUANTITY    DYNAMIC_SAMPLING                 NEW   SINGLE TABLE CARDINALITY MISESTIMATE

DBA_SQL_PLAN_DIRECTIVES is built on the underlying _BASE_OPT_DIRECTIVE and _BASE_OPT_FINDING structures:

SELECT d.dir_own#, d.dir_id, d.f_id,
       decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'),
       decode(state, 1, 'NEW', 2, 'MISSING_STATS', 3, 'HAS_STATS', 4, 'CANDIDATE', 5, 'PERMANENT', 6, 'DISABLED', 'UNKNOWN'),
       decode(bitand(flags, 1), 1, 'YES', 'NO'),
       cast(d.created as timestamp),
       cast(d.last_modified as timestamp),
       -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details
       -- about 6.5
       cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day')
FROM sys.opt_directive$ d

SQL Plan Directives are managed through the dbms_spd package; their default retention is 53 weeks:

Package: DBMS_SPD
This package provides subprograms for managing Sql Plan Directives(SPD).
SPD are objects generated automatically by Oracle server.
For example, if server detects that the single table cardinality estimated by optimizer is off from the actual number of rows returned when accessing the table, it will automatically create a directive to do dynamic sampling for the table. When any Sql statement referencing the table is compiled, optimizer will perform dynamic sampling for the table to get more accurate estimate.

Notes: DBMS_SPD is an invoker-rights package. The invoker requires ADMINISTER SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of this package. Also the subprograms commit the current transaction (if any), perform the operation and commit it again.

DBA view dba_sql_plan_directives shows all the directives created in the system and the view dba_sql_plan_dir_objects displays the objects that are included in the directives.

-- Default value for SPD_RETENTION_WEEKS
SPD_RETENTION_WEEKS_DEFAULT CONSTANT varchar2(4) := '53';

| STATE     : NEW           : Newly created directive.
|           : MISSING_STATS : The directive objects do not
|                             have relevant stats.
|           : HAS_STATS     : The objects have stats.
|           : PERMANENT     : A permanent directive. Server
|                             evaluated effectiveness and these
|                             directives are useful.
|
| AUTO_DROP : YES           : Directive will be dropped
|                             automatically if not
|                             used for SPD_RETENTION_WEEKS.
|                             This is the default behavior.
|             NO            : Directive will not be dropped
|                             automatically.

Procedure: flush_sql_plan_directive
This procedure allows manually flushing the Sql Plan directives that are automatically recorded in SGA memory while executing sql statements. The information recorded in SGA are periodically flushed by oracle background processes. This procedure just provides a way to flush the information manually.

The hidden parameter "_optimizer_dynamic_plans" (enable dynamic plans) controls the feature: it defaults to TRUE, which enables dynamic plans, and setting it to FALSE switches them off. The documented parameters that govern the adaptive features are sketched at the end of this section.

At present, dynamic plans mainly cover the choice between a Nested Loop and a Hash Join: broadly, a Nested Loop is preferred while the row count stays small, and a Hash Join once it passes the inflection point. The optimizer pre-computes the alternative subplans and defers the join-method decision to the first execution: rows flowing through the STATISTICS COLLECTOR row source are counted, the observed cardinality is compared with the inflection point, and either the Hash Join or the Nested Loop subplan is activated. The chosen subplan can also imply a different access path; for example, in a plan joining sales data to customers, choosing the Hash Join drives a full scan of the customers table in the subplan, while choosing the Nested Loop switches it to an index Range Scan on cust_id plus Table Access by Rowid.

Cardinality feedback

Cardinality feedback was introduced in 11.2 and is one form of re-optimization. During the first execution the optimizer monitors the cardinalities actually produced; if they differ significantly from its estimates, the corrected values are stored and the statement is re-optimized with them on a later execution, so a plan built on bad estimates can be repaired. The optimizer considers cardinality feedback for cases it knows are hard to estimate, such as tables with missing or stale statistics, multiple conjunctive or disjunctive filter predicates on the same table, and predicates containing complex operators for which selectivity cannot be computed accurately. Several of these situations can also be addressed with dynamic sampling or extended statistics (column groups); cardinality feedback helps precisely where such aids are not in place. Initially only single-table cardinality misestimates were corrected, not join cardinalities. The feedback information lives with the cursor, so it is lost once the cursor is aged out of the shared pool.
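Both the adaptive join decision and the feedback-driven re-optimization described next sit on the same adaptive-optimization machinery, which can be steered without resorting to hidden parameters. A minimal sketch using the documented 12.1 initialization parameters; the exact names and their scope are stated as assumptions to verify against your release:

-- compute adaptive plans but only report them, leaving execution unchanged
-- (this is the "reporting mode only" behaviour seen in the plan note earlier)
ALTER SESSION SET optimizer_adaptive_reporting_only = TRUE;

-- or switch adaptive plans and automatic re-optimization off altogether
ALTER SESSION SET optimizer_adaptive_features = FALSE;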
SELECT /*+ gather_plan_statistics */ product_name
FROM order_items o, product_information p
WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

Plan hash value: 1553478007

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |     20 |       |       |          |
|*  1 |  HASH JOIN         |                     |      1 |      4 |     13 |00:00:00.01 |      24 |     20 |  2061K|  2061K|  429K (0)|
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |      4 |     13 |00:00:00.01 |       7 |      6 |       |       |          |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |      1 |    288 |00:00:00.01 |      17 |     14 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

SELECT /*+ gather_plan_statistics */ product_name
FROM order_items o, product_information p
WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

Plan hash value: 1553478007

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |       |       |          |
|*  1 |  HASH JOIN         |                     |      1 |     13 |     13 |00:00:00.01 |      24 |  2061K|  2061K|  413K (0)|
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |     13 |     13 |00:00:00.01 |       7 |       |       |          |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |    288 |    288 |00:00:00.01 |      17 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Note
-----
- statistics feedback used for this statement

SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y';

  COUNT(*)
----------
         2

SQL> select sql_ID,USE_FEEDBACK_STATS FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y';

SQL_ID        U
------------- -
cz0hg2zkvd10y Y

The two runs above show cardinality feedback at work. On the first execution the estimates are far off the actual row counts (ORDER_ITEMS is estimated at 4 rows but returns 13), so the statement is marked for re-optimization. The second execution keeps the same plan hash value (the plan shape does not change here) but gets a new child cursor, and with gather_plan_statistics the estimates (E-Rows) now match the actuals (A-Rows).

Automatic Re-optimization

Unlike a dynamic plan, re-optimization changes the plan only on executions after the first one. At the end of the first execution the optimizer compares its estimates with the statistics actually gathered by the statistics collectors during execution; if they differ significantly, it stores the corrected values and uses them to re-optimize the statement on the next execution. This addresses misestimates that a dynamic plan cannot fix while the statement is running: a poor join order, for instance, cannot be changed mid-execution, but re-optimization can pick a different join order on the next parse. Re-optimization may repeat over several executions, feeding back more accurate information each time, until the plan stabilizes. In Oracle Database 12c join statistics are also collected during execution, so join cardinality misestimates can be corrected as well (this is what the "statistics feedback" note above refers to), and the mechanism cooperates with adaptive cursor sharing. The new IS_REOPTIMIZABLE column in V$SQL shows whether the next execution of a child cursor will trigger re-optimization; a cursor compiled in reporting-only mode carries the re-optimization information but will not trigger it.
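A quick way to find such cursors is to query the column directly. A minimal sketch against the standard V$SQL view (the filter values are just the interesting ones here):

-- list child cursors that carry re-optimization information
SELECT sql_id, child_number, is_reoptimizable
FROM   v$sql
WHERE  is_reoptimizable IN ('Y', 'R');

The reference description of the column's values follows.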
IS_REOPTIMIZABLE VARCHAR2(1) This columns shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:   Y: If the next execution will trigger a reoptimization R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode N: If the child cursor has no reoptimization information ??1: select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST')); SQL_ID  gwf99gfnm0t7g, child number 0 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 1906736282 ------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          | |*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          | |   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)| |   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          | |*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          | ------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID") SQL_ID  gwf99gfnm0t7g, child number 1 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 35479787 -------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | -------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT       |                     |      1 |        |    269 
|00:00:00.01 |      63 |      3 |       |       |          | |   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          | |*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)| |*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          | |   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          | |*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          | -------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    5 - access("O"."ORDER_ID"="ORDER_ID") Note -----    - statistics feedback used for this statement    SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL  A where A.SQL_ID='gwf99gfnm0t7g'; IS CHILD_NUMBER -- ------------ Y             0 N             1    1* select child_number,other_xml From v$SQL_PLAN  where SQL_ID='gwf99gfnm0t7g' and other_xml is not nul SQL> / CHILD_NUMBER OTHER_XML ------------ --------------------------------------------------------------------------------            1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">1              2.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_              hash">35479787</info><info type="plan_hash_2">3382491761</info><outline_data><hi              nt><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATUR              ES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><h              int><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></              hint><hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]>              </hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint><hint><![CDATA[FULL(@"SEL$F5BB7              4E1" "P"@"SEL$2")]]></hint><hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2"              ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint><hint><![CDATA[I              NDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA[              LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint><hint><![C              DATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint><hint><![CDATA[USE_NL(@"SEL$              F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>            0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![C              DATA["OE"]]></info><info type="plan_hash">1906736282</info><info type="plan_hash              _2">2579473118</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]>              </hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CD              ATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CD              ATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint><hint><![CDATA[MERGE(@"SEL$2")]]></hi              nt><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]>     
         </hint><hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint><hint><![CDATA[              INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA              [INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PROD              UCT_ID"))]]></hint><hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1              " "O"@"SEL$2")]]></hint><hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"              SEL$1")]]></hint><hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></o              utline_data></other_xml> ??2: SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 0 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | 14 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 8 | 29 |00:00:00.01 | 17 | 14 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OWNER OBJECT_NAME COL_NAME OBJECT TYPE STATE REASON ----------------------- ----- ------------- ----------- ------ ---------------- ----- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; ELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 1 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 ----------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------- | 0 | SELECT 
STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 29 | 29 |00:00:00.01 | 17 | ----------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) Note ----- - cardinality feedback used for this statement SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' b74nw722wjvy3 1 select /*+g N ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' SELECT /*+gather_plan_statistics*/ CUST_EMAIL FROM CUSTOMERS WHERE CUST_STATE_PROVINCE='MA' AND COUNTRY_ID='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID 3tk6hj3nkcs2u, child number 0 ------------------------------------- Select /*+gather_plan_statistics*/ cust_email From customers Where cust_state_province='MA' And country_id='US' Plan hash value: 1683234692 ------------------------------------------------------------------------------- |Id | Operation | Name | Starts|E-Rows|A-Rows| A-Time |Buffers| ------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01| 16 | |*1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 2| 2 |00:00:00.01| 16 | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US')) Note ----- - dynamic sampling used for this statement (level=2) - 1 Sql Plan Directive used for this statement EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OW OBJECT_NA COL_NAME OBJECT TYPE STATE REASON ------------------- -- --------- ---------- ------- --------------- ------------- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE
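Once directives start accumulating, they can be managed directly with the documented DBMS_SPD package. A minimal sketch; the directive ID is simply the one listed above and is only illustrative:

-- push the directives cached in the SGA down to the SYSAUX tablespace now
EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

-- shorten the automatic-drop retention from the default 53 weeks
EXEC DBMS_SPD.SET_PREFS('SPD_RETENTION_WEEKS', '5');

-- drop a single directive by its DIRECTIVE_ID
EXEC DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(1484026771529551585);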

    Read the article

  • Mapping A Sphere To A Cube

    - by petrocket
There is a special way of mapping a cube to a sphere described here: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html It is not your basic "normalize the point and you're done" approach and gives a much more evenly spaced mapping. I've tried to do the inverse of the mapping, going from sphere coords to cube coords, and have been unable to come up with the working equations. It's a rather complex system of equations with lots of square roots. Any math geniuses want to take a crack at it? Here are the equations in C++ code: sx = x * sqrtf(1.0f - y * y * 0.5f - z * z * 0.5f + y * y * z * z / 3.0f); sy = y * sqrtf(1.0f - z * z * 0.5f - x * x * 0.5f + z * z * x * x / 3.0f); sz = z * sqrtf(1.0f - x * x * 0.5f - y * y * 0.5f + x * x * y * y / 3.0f); sx,sy,sz are the sphere coords and x,y,z are the cube coords.

    Read the article

  • Getting SIGILL in float to fixed conversion

    - by foliveira
I'm receiving a SIGILL after running the following code. I can't really figure out what's wrong with it. The target platform is ARM, and I'm trying to port a known library (in which this code is contained):

void convertFloatToFixed(float nX, float nY, unsigned int &nFixed)
{
    short sx = (short) (nX * 32);
    short sy = (short) (nY * 32);
    unsigned short *ux = (unsigned short*) &sx;
    unsigned short *uy = (unsigned short*) &sy;
    nFixed = (*ux << 16) | *uy;
}

Any help on it would be greatly appreciated. Thanks in advance

    Read the article

  • Sympy python circumference

    - by Mattia Villani
I need to display a circumference (a circle). In order to do that I thought I could calculate, for a lot of x values, the two corresponding values of y, so I did:

import sympy as sy
from sympy.abc import x,y
f = x**2 + y**2 - 1
a = x - 0.5
sy.solve([f,a],[x,y])

and this is what I get:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 484, in solve
    solution = _solve(f, *symbols, **flags)
  File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 749, in _solve
    result = solve_poly_system(polys)
  File "/usr/lib/python2.7/dist-packages/sympy/solvers/polysys.py", line 40, in solve_poly_system
    return solve_biquadratic(f, g, opt)
  File "/usr/lib/python2.7/dist-packages/sympy/solvers/polysys.py", line 48, in solve_biquadratic
    G = groebner([f, g])
  File "/usr/lib/python2.7/dist-packages/sympy/polys/polytools.py", line 5308, in groebner
    raise DomainError("can't compute a Groebner basis over %s" % domain)
DomainError: can't compute a Groebner basis over RR

How can I calculate the y values?

    Read the article

  • Proper Translation of equation to C#

    - by Shykin
I am trying to replicate this equation: Slope(b) = (NΣXY - (ΣX)(ΣY)) / (NΣX² - (ΣX)²) in C#, but I'm getting the following issue: if I make the X values 1 + 2 + 3 + 4 + 5 and the Y values 5 + 4 + 3 + 2 + 1, it gives me a positive slope even though the data is clearly counting down. If I place the same numbers into this calculator: http://www.easycalculation.com/statistics/regression.php it gives me a negative slope for the same data. I'm trying to narrow down the reasons, so is the following a proper translation from the equation Slope(b) = (NΣXY - (ΣX)(ΣY)) / (NΣX² - (ΣX)²) to C# code? Slope (m) = ((x * avgX * avgY) - (avgX * avgY)) / ((x * Math.Pow(avgX, 2)) - Math.Pow(avgX, 2));

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Use ASP.NET 4 Browser Definitions with ASP.NET 3.5

    - by Stephen Walther
    We updated the browser definitions files included with ASP.NET 4 to include information on recent browsers and devices such as Google Chrome and the iPhone. You can use these browser definition files with earlier versions of ASP.NET such as ASP.NET 3.5. The updated browser definition files, and instructions for installing them, can be found here: http://aspnet.codeplex.com/releases/view/41420 The changes in the browser definition files can cause backwards compatibility issues when you upgrade an ASP.NET 3.5 web application to ASP.NET 4. If you encounter compatibility issues, you can install the old browser definition files in your ASP.NET 4 application. The old browser definition files are included in the download file referenced above. What’s New in the ASP.NET 4 Browser Definition Files The complete set of browsers supported by the new ASP.NET 4 browser definition files is represented by the following figure:     If you look carefully at the figure, you’ll notice that we added browser definitions for several types of recent browsers such as Internet Explorer 8, Firefox 3.5, Google Chrome, Opera 10, and Safari 4. Furthermore, notice that we now include browser definitions for several of the most popular mobile devices: BlackBerry, IPhone, IPod, and Windows Mobile (IEMobile). The mobile devices appear in the figure with a purple background color. To improve performance, we removed a whole lot of outdated browser definitions for old cell phones and mobile devices. We also cleaned up the information contained in the browser files. Here are some of the browser features that you can detect: Are you a mobile device? <%=Request.Browser.IsMobileDevice %> Are you an IPhone? <%=Request.Browser.MobileDeviceModel == "IPhone" %> What version of JavaScript do you support? <%=Request.Browser["javascriptversion"] %> What layout engine do you use? <%=Request.Browser["layoutEngine"] %>   Here’s what you would get if you displayed the value of these properties using Internet Explorer 8: Here’s what you get when you use Google Chrome: Testing Browser Settings When working with browser definition files, it is useful to have some way to test the capability information returned when you request a page with different browsers. You can use the following method to return the HttpBrowserCapabilities the corresponds to a particular user agent string and set of browser headers: public HttpBrowserCapabilities GetBrowserCapabilities(string userAgent, NameValueCollection headers) { HttpBrowserCapabilities browserCaps = new HttpBrowserCapabilities(); Hashtable hashtable = new Hashtable(180, StringComparer.OrdinalIgnoreCase); hashtable[string.Empty] = userAgent; // The actual method uses client target browserCaps.Capabilities = hashtable; var capsFactory = new System.Web.Configuration.BrowserCapabilitiesFactory(); capsFactory.ConfigureBrowserCapabilities(headers, browserCaps); capsFactory.ConfigureCustomCapabilities(headers, browserCaps); return browserCaps; } At the end of this blog entry, there is a link to download a simple Visual Studio 2008 project – named Browser Definition Test -- that uses this method to display capability information for arbitrary user agent strings. For example, if you enter the user agent string for an iPhone then you get the results in the following figure: The Browser Definition Test application enables you to submit a user-agent string and display a table of browser capabilities information. The browser definition files contain sample user-agent strings for each browser definition. 
    I got the iPhone user-agent string from the comments in the iphone.browser file. Enumerating Browser Definitions Someone asked in the comments whether or not there is a way to enumerate all of the browser definitions. You can do this if you are willing to use a little reflection and read a non-public property. The browser definition files in the config\browsers folder get parsed into a class named BrowserCapabilitiesFactory. After you run the aspnet_regbrowsers tool, you can see the source for this class in the config\browsers folder by opening a file named BrowserCapsFactory.cs. The BrowserCapabilitiesFactoryBase class has a protected property named BrowserElements that represents a Hashtable of all of the browser definitions. Here's how you can read this protected property and display the ID for all of the browser definitions:
    var propInfo = typeof(BrowserCapabilitiesFactory).GetProperty("BrowserElements", BindingFlags.NonPublic | BindingFlags.Instance);
    Hashtable browserDefinitions = (Hashtable)propInfo.GetValue(new BrowserCapabilitiesFactory(), null);
    foreach (var key in browserDefinitions.Keys) { Response.Write("" + key); }
    If you run this code using Visual Studio 2008 then you get the following results: You get a huge number of outdated browsers and devices. In all, 449 browser definitions are listed. If you run this code using Visual Studio 2010 then you get the following results: In the case of Visual Studio 2010, all the old browsers and devices have been removed and you get only 19 browser definitions. Conclusion The updated browser definition files included in ASP.NET 4 provide more accurate information for recent browsers and devices. If you would like to test the new browser definitions with different user-agent strings then I recommend that you download the Browser Definition Test project: Browser Definition Test Project
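    As a rough usage sketch, the helper above can be exercised from a test page along these lines (the page class name and the sample user-agent string are made up for illustration and are not taken from the iphone.browser file):
    using System;
    using System.Collections;
    using System.Collections.Specialized;
    using System.Web;
    using System.Web.UI;

    public partial class BrowserCapsTestPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Sample iPhone-style user-agent string to run against the browser definition files
            string userAgent = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X; en-us) AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Mobile/7A341 Safari/528.16";

            var headers = new NameValueCollection();
            headers.Add("User-Agent", userAgent);

            HttpBrowserCapabilities caps = GetBrowserCapabilities(userAgent, headers);

            Response.Write("Browser: " + caps.Browser + " " + caps.Version + "<br />");
            Response.Write("IsMobileDevice: " + caps.IsMobileDevice + "<br />");
            Response.Write("MobileDeviceModel: " + caps.MobileDeviceModel);
        }

        // The helper method shown earlier in this post, repeated here so the page compiles on its own.
        public HttpBrowserCapabilities GetBrowserCapabilities(string userAgent, NameValueCollection headers)
        {
            HttpBrowserCapabilities browserCaps = new HttpBrowserCapabilities();
            Hashtable hashtable = new Hashtable(180, StringComparer.OrdinalIgnoreCase);
            hashtable[string.Empty] = userAgent;
            browserCaps.Capabilities = hashtable;
            var capsFactory = new System.Web.Configuration.BrowserCapabilitiesFactory();
            capsFactory.ConfigureBrowserCapabilities(headers, browserCaps);
            capsFactory.ConfigureCustomCapabilities(headers, browserCaps);
            return browserCaps;
        }
    }
    Rendering the capability values this way makes it easy to compare what the ASP.NET 3.5 and ASP.NET 4 definition files report for the same user-agent string.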

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject. Covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier, this is a very deep subject, and in this first article of the series he has covered basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. In this article we will discuss MS SQL Server architecture. The major components of SQL Server are: Relational Engine Storage Engine SQL OS Now we will discuss and understand each one of them. 1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine what your query exactly needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different Tasks of Relational Engine: Query Processing Memory Management Thread and Task Management Buffer Management Distributed Query Processing 2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on to the storage system (Disk, SAN etc.). To understand more, let’s focus on the following diagram. When we talk about any database in SQL Server, there are 2 types of files that are created at the disk level – Data file and Log file. The Data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let’s understand data file and log file in more detail: Data File: Data File stores data in the form of Data Pages (8KB) and these data pages are logically organized in extents. Extents: Extents are logical units in the database. A combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, Mixed and Uniform. Mixed extents hold different types of pages like index, System, Object data etc. On the other hand, Uniform extents are dedicated to only one type. Pages: Below are some of the types of pages that can be stored in SQL Server: Data Page: It holds the data entered by the user but not the data which is of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml data. Index: It stores the index entries. Text/Image: It stores LOB (Large Object data) like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data. GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents. PFS (Page Free Space): Information related to page allocation and unused space available on pages. IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit. BCM (Bulk Changed Map): Keeps information about the extents changed in a Bulk Operation. DCM (Differential Change Map): This is the information of extents that have been modified since the last BACKUP DATABASE statement per allocation unit. Log File: It is also known as the write-ahead log. It stores modifications to the database (DML and DDL).
    Sufficient information is logged to be able to roll back transactions if requested and to recover the database in case of failure. Write-ahead logging is used to create log entries; transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model. SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR). I guess this brief article gives you an idea of the various terminologies related to SQL Server architecture. In future articles we will explore them further. Guest Author: The author of this article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Visual Studio 2010 Service Pack 1 And .NET Framework 4.0 Update

    - by Paulo Morgado
    As announced by Jason Zander in his blog post, Visual Studio 2010 Service Pack 1 has been available for download for MSDN subscribers since March 8 and to the general public since March 10. Brian Harry provides information related to TFS, and S. "Soma" Somasegar provides information on the latest Visual Studio 2010 enhancements. Along with this service pack for Visual Studio, an update to the .NET Framework 4.0 is also released. For detailed information about these releases, please refer to the corresponding KB articles: Update for Microsoft .NET Framework 4 Description of Visual Studio 2010 Service Pack 1 Update: When I was upgrading from the Beta to the final release on Windows 7 Enterprise 64-bit, the installation hung with Returning IDCANCEL. INSTALLMESSAGE_WARNING [Warning 1946.Property 'System.AppUserModel.ExcludeFromShowInNewInstall' for shortcut 'Manage Help Settings - ENU.lnk' could not be set.]. Canceling the installation didn’t work and I had to kill the setup.exe process. When I reapplied the service pack, rollbacks were reported, so I applied it once more – this time with success.

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than what an eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world: they keep the scripting, automation and maintenance work running round the clock so the business keeps working almost 365 days, 24×7, and their worth really shows the day a system or HDD crashes while you have an important delivery to make. That is when those backup tasks and maintenance jobs come in handy and turn out to be far less trivial than many consider them to be. Important questions like “When was the last backup taken?”, “How much time did the last backup take?”, “What type of backup was taken last?” etc. are tricky ones, and this report hands you the answers in a jiffy. The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, time taken and also the file location for those operations. Here is how this report can be launched. Once we launch this report, we can see 4 major sections shown as listed below. Average Time Taken For Backup Operations Successful Backup Operations Backup Operation Errors Successful Restore Operations Let us look at each section next. Average Time Taken For Backup Operations Information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section:
    USE msdb;
    SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1,
           1 AS l2,
           1 AS l3,
           t1.TYPE AS [type],
           (AVG(DATEDIFF(ss, backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration
    FROM backupset t1
    INNER JOIN sys.databases t3 ON (t1.database_name = t3.name)
    WHERE t3.name = N'AdventureWorks2014'
    GROUP BY t1.TYPE
    ORDER BY t1.TYPE
    On my small database the time taken for the differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows. Successful Backup Operations Here is the expanded version of this section. This information is derived from various backup tracking tables in the msdb database. Here is the simplified version of the query, which can be used separately as well:
    SELECT *
    FROM sys.databases t1
    INNER JOIN backupset t3 ON (t3.database_name = t1.name)
    LEFT OUTER JOIN backupmediaset t5 ON (t3.media_set_id = t5.media_set_id)
    LEFT OUTER JOIN backupmediafamily t6 ON (t6.media_set_id = t5.media_set_id)
    WHERE (t1.name = N'AdventureWorks2014')
    ORDER BY backup_start_date DESC, t3.backup_set_id, t6.physical_device_name;
    The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking the (+) on the “Device type” column. That has shown me the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it. Backup Operation Errors This section of the report gets data from the default trace. You might wonder how.
    One of the events tracked by the default trace is “ErrorLog”. This means that whatever message is written to the error log gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic: No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled. Successful Restore Operations This section may not be very useful on a production server (how often do you restore a database there?) but might be useful in development and log shipping secondary environments, where we might be interested in seeing restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I have restored the same backups which were taken to populate the earlier sections. Here is the simplified version of the query used to populate this output:
    USE msdb;
    SELECT *
    FROM restorehistory t1
    LEFT OUTER JOIN restorefile t2 ON (t1.restore_history_id = t2.restore_history_id)
    LEFT OUTER JOIN backupset t3 ON (t1.backup_set_id = t3.backup_set_id)
    WHERE t1.destination_database_name = N'AdventureWorks2014'
    ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name
    Have you ever looked at the backup strategy of your key databases? Are they in sync, and is there scope for improvement? Then this is the report to analyze after a week or month of maintenance plans running on your database. Do chime in with the strategies you are using in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
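    If you would rather pull the “when was the last backup taken?” answer from code instead of the report, a quick sketch along these lines reads the same msdb.dbo.backupset table the report uses (the connection string here is an assumption; point it at your own server):
    using System;
    using System.Data.SqlClient;

    class LastBackupCheck
    {
        static void Main()
        {
            // Assumed connection string; adjust the server name and authentication for your environment
            string connectionString = "Server=localhost;Database=msdb;Integrated Security=SSPI;";

            // backupset.type: D = full, I = differential, L = log
            string query = @"SELECT database_name, type, MAX(backup_finish_date) AS last_backup
                             FROM msdb.dbo.backupset
                             GROUP BY database_name, type
                             ORDER BY database_name, type;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} ({1}): last backup finished {2}",
                            reader.GetString(0), reader.GetString(1), reader.GetDateTime(2));
                    }
                }
            }
        }
    }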

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are: Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going Automatically handles things like rate-limiting and paging through results Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like: string apiKey = "KEY"; string token = "TOKEN"; var sm = new SurveyMonkeyApi(apiKey, token); List<Survey> surveys = sm.GetSurveyList(); The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title: var settings = new GetSurveyListSettings { Title = "pineapple", StartDate = DateTime.Now.AddYears(-1) }; List<Survey> surveys = sm.GetSurveyList(settings); By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using var settings = new GetSurveyListSettings { OptionalData = new GetSurveyListSettingsOptionalData { DateCreated = false, AnalysisUrl = false } }; Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are: Endpoint Method Object returned /surveys/get_survey_list GetSurveyList() List<Survey> /surveys/get_survey_details GetSurveyDetails() Survey /surveys/get_collector_list GetCollectorList() List<Collector> /surveys/get_respondent_list GetRespondentList() List<Respondent> /surveys/get_responses GetResponses() List<Response> /surveys/get_response_counts GetResponseCounts() Collector /user/get_user_details GetUserDetails() UserDetails /batch/create_flow Not supported Not supported /batch/send_flow Not supported Not supported /templates/get_template_list Not supported Not supported /collectors/create_collector Not supported Not supported The hierarchy of objects the library can return is Survey List<Page> List<Question> QuestionType List<Answer> List<Item> List<Collector> List<Response> Respondent List<ResponseQuestion> List<ResponseAnswer> Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey). 
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor: var sm = new SurveyMonkeyApi(apiKey, token, 200); //200ms delay = 5 reqs per sec There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using sm.FillMissingSurveyInformation(surveys); For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as { "answers": [ { "row": "9384627365", } ], "question_id": "615487516" } Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like { "text": "Fourth item in dropdown list", "visible": true, "position": 4, "type": "row", "answer_id": "9384627365" } The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request .
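    To tie those pieces together, a minimal console sketch might look like the following (the namespace and the Title property are assumptions based on the descriptions above rather than confirmed member names, so check the library source if they differ):
    using System;
    using System.Collections.Generic;
    using SurveyMonkey; // assumed namespace for the SurveyMonkeyApi package

    class Program
    {
        static void Main()
        {
            string apiKey = "KEY";   // developer API key
            string token = "TOKEN";  // OAuth token

            var sm = new SurveyMonkeyApi(apiKey, token);
            try
            {
                // Surveys created in the last year, as in the filtering example above
                var settings = new GetSurveyListSettings { StartDate = DateTime.Now.AddYears(-1) };
                List<Survey> surveys = sm.GetSurveyList(settings);

                // Fill in question structure, respondents and responses
                // (at least 5 extra API calls per survey, so keep the list small)
                sm.FillMissingSurveyInformation(surveys);

                foreach (var survey in surveys)
                {
                    Console.WriteLine(survey.Title); // Title is an assumed property name
                }

                Console.WriteLine("Requests made so far: " + sm.RequestsMade);
            }
            catch (SurveyMonkeyException ex)
            {
                // The library throws SurveyMonkeyException for API and communication errors
                Console.WriteLine("Survey Monkey error: " + ex.Message);
            }
        }
    }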

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Apr/10)

    - by Claudia Costa
    It is with enthusiasm that we announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. With the goal of helping partners develop competencies, Oracle University and Oracle Alliances & Channels have designed this boot camp, compacting the contents and thereby reducing costs. Price per participant (3 days) - 1,250 EUR + VAT. Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content. Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology. Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will: • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology • Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development • Expose a broad set of Oracle's ECM technologies. Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM): Topics Covered • Introduction to Oracle UCM o UCM Overview o UCM Architecture Overview • Content Server and Document Management basics o Installation and Administration Skills § User and Security Admin § Configuration (metadata, DCLs, profiles, rules, etc.) § Workflow Admin § System Properties and Component Manager § Managing Subscriptions o Contributing Content § Browser form § WebDAV folder § Desktop Integration o Searching • Web Content Management o Site Studio • Universal Records Management • Information Rights Management (IRM) • Image & Process Management (IPM) • Oracle Document Capture • Oracle eMail Archive Service. Labs • Content Server Installation • Use and Administration of Content Server • Introduction to Site Studio • Use and Administration of Records Manager Demo: The R&D Group and the New Patent Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management Case Study Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually public Web site. Use Case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
    Agenda: Day 1 · ECM Overview & Content Server · ECM Overview · ECM Architecture and Installation · UCM and Digital Asset Management DEMO · Lab 1 - Content Server Installation · Lab 2 - Use and Administration of Content Server   Day 2 · Web Content Management · Lab 2 - Use and Administration of Content Server (continued) · Introduction to Web Content Management · Lab 3 - Site Studio   Day 3 · URM/IRM/IPM · Introduction to Universal Records Management · Lab 4 - URM · Introduction to Information Rights Management · Information Rights Management DEMO · Introduction to Image and Process Management · Image and Process Management Demo · Oracle Document Capture · Oracle eMail Archive   Material needed for Boot Camp: This Boot Camp requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: Hardware • RAM: 2 GB RAM minimum (1 GB RAM is not enough) • HDD: 15 GB free HDD space   Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the Boot Camp.   --------------------------------------------------------------------- For more information/registration, please contact: Mónica Pires, 21 423 51 44. Schedule and Location: 9:30h - 12:30h and 14:00h - 17:00h (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle Universal Installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check if Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';". One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter operations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, either enter the following statement from a suitably privileged account: ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; or add the following to the database initialization file (xxxinit.ora): SPATIAL_VECTOR_ACCELERATION = TRUE. To set the value for the current session, enter the following statement from a suitably privileged account: ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)

    Read the article

  • PlanetQuest and the start of a new project!

    - by TATWORTH
    At the Planet Quest http://planetquest.jpl.nasa.gov/ web site there is an interesting page on the number of planets detected around other stars. There is a link to a page at http://planetquest.jpl.nasa.gov/widget.cfm for an applet to poll for this information. I downloaded the applet but had no wish to install it. Instead I viewed it in Notepad++ and found that it contacted http://planetquest.jpl.nasa.gov/atlas/xml/planetstats.cfm to get data on the latest discovery. I have amended the CommonData project and have written a class in it to poll for the information. That class and its unit test will form the basis for a new project.
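    A minimal sketch of such a polling class might look like this (the class and method names are mine rather than the ones used in CommonData, and the code only fetches and parses the feed since the XML element names are not listed here):
    using System;
    using System.Net;
    using System.Xml.Linq;

    public static class PlanetStatsPoller
    {
        private const string PlanetStatsUrl =
            "http://planetquest.jpl.nasa.gov/atlas/xml/planetstats.cfm";

        // Downloads the planet statistics feed and returns it as an XDocument
        public static XDocument GetLatestStats()
        {
            using (var client = new WebClient())
            {
                string xml = client.DownloadString(PlanetStatsUrl);
                return XDocument.Parse(xml);
            }
        }
    }
    Once the feed's element names have been inspected, the individual values (such as the latest planet count) can be picked out of the returned XDocument.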

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    Fusion HCM SaaS – Integration A typical implementation pattern we’re seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: Here, you can see that all the “downstream integrations” from the On-Premise Core HR are unaffected because the master for employee data is still your On-Premise Core HR system – all updates and new hires are made here (although they may be fed in from Taleo to start with). As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyways), and let your downstream integrations continue to function based on this data as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider: Creating an On-Premise warehouse for extracting data from Fusion Apps. Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps. Integration Tools File Based Loader This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee & related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher Training Session recorded on Apr 18th is available here (under "Recordings"). This will enable a somewhat technical resource to create a data model SQL query. Links to Documentation & Training Reference documentation for File Based Loader on docs.oracle.com FBL 1.1 MOS Doc Id 1533860.1 Sample demo data files for File Based Loader HCM SaaS Integrations ppt and recording. EBS APIs Loading Information into EBS Full or Shared HCM This could be candidate information being loaded from Taleo into EBS or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications & EBS Financials). Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] This is a guide to the EBS R12 Integration Repository accessible from an EBS instance. EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1] Fusion HCM Extract Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM.
Refer to the "Configure Identity Sync" doc on MOS  for additional mechanisms. Additional documentation (you'll need an oracle.com account to access) HCM Extracts User Guides (Rel 4 & 5) HCM Extract Entity/Attributes (Rel 5) HCM Extract User Guide (Rel 5) If you don’t have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (Click on File --> Download on next screen). View Training Recordings on Fusion HCM Extract Benefits Extract To setup the benefits extract, refer to the following guide. Page 2-15 of the User Documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits. Instructions are here along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (Link to BI Publisher presentation recording from Apr 18th) to create your own version of Benefits extract. You can start with the data model query underlying the benefits extract. Payroll Interface Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: Payroll interface guide Sample file DBI's used for the payroll interface.Usage Patterns always accessible @ http://www.finapps.com Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:10.0pt; font-family:"Calibri","sans-serif"; mso-bidi-font-family:"Times New Roman";}

    Read the article

  • Sources (other than tutorials) on Game Mechanics

    - by Holland
    But I'm not quite sure where I should start from here. I know I have to go and grab an engine to use with some prebuilt libraries, and then from there learn how to actually code a game, etc. All I have is some "program Tetris" tutorial for C++ open right now, but I'm not even sure if that will really help me with what I want to accomplish. I'm curious whether there is any good C++ documentation related to game development which provides information on building a game in more of a component model (by this I'm referring to the documentation, not the actual object-oriented design of the game itself), rather than an entire tutorial designed to do something specific. This could include information based on various design methodologies, or how to link hardware with OpenGL interfaces, or just simply learning how to render 2D images on a canvas. I suppose this place is definitely a good source :P, but what I'm looking for is quite a bit of information - and I think posting a new question every ten minutes would just flood the site...

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >