Search Results


  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the self-tuning feature in DB2 V9.x, many database parameters are set to AUTOMATIC by default in DB2 V9.7 so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance. DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates a small page size (64KB) for all memory allocations and expands and shrinks the memory as needed. To take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process. NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, which is calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system with over a hundred virtual CPUs and a single DB2 partition, DB2 sets it to #CPUs - 1. This leads to too many page cleaners competing to flush pages to disk, causing aio mutex lock contention, so we need to decrease the value. A good practice is to set it to the number of physical devices used by the database table space containers.
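
    A minimal sketch of these two changes from the DB2 command line. The database name, the memory size (roughly 40GB here; the unit is 4KB pages), and the cleaner count are all placeholders to adapt to your system:

        # Fix DATABASE_MEMORY to an explicit size (in 4KB pages) so the buffer
        # pools can be backed by 256MB ISM segments instead of 64KB pages
        db2 update db cfg for SAMPLE using DATABASE_MEMORY 10000000
        # Match the page cleaners to the number of physical devices behind
        # the table space containers (8 in this hypothetical layout)
        db2 update db cfg for SAMPLE using NUM_IOCLEANERS 8
        # Verify the page sizes of the db2sysc address space (Solaris);
        # pgrep may return several pids if multiple instances run
        pmap -s $(pgrep db2sysc)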

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.
    Getting the Contacts
    Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts
    Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers
    Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'
    Synchronizing the Contacts
    Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.
    Pre-Built Demo Application
    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for QuickBooks V3, and will expire in 2013.
    Source Code
    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.
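
    As a hedged sketch of the write-back step: the Customers table and Name column come from the article's own queries, while the Email and Phone columns are hypothetical and depend on the provider's actual schema:

        -- Create a QuickBooks customer for a Google contact with no match yet
        INSERT INTO Customers (Name, Email) VALUES ('James Smith', 'james@example.com')
        -- Push an updated phone number from Google back to QuickBooks
        UPDATE Customers SET Phone = '555-0100' WHERE Name = 'James Smith'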

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    <span id="XinhaEditingPostion"></span>&amp;lt;span id=&amp;quot;XinhaEditingPostion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;span id=&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;XinhaEditingPostion&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;/span&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository Part 2 covered recommended changes for the Weblogic server Part 3 covers general configuration recommendations and a few known issues The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1]. Configuration Recommendations Configure E-Mail Notifications for EM related Alerts In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components. Recommendation: Create a new Incident rule for monitoring all EM components and setup the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To setup the incident rule set follow the steps below. Note that each individual rule in the rule set can have different actions configured. 1.  To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets” and then click on the Actions drop down list and select “Create Like Rule Set…” 2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.) 3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit as seen below 4. Modify the following rules: a. Incident creation Rule for metric alerts i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop down list and selecting “Warning”. Click Next. ii.  Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. b. Incident creation Rule for target unreachable. i.   Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop down list selecting “OMS and Repository”. Click Next. ii.  
Add or modify the actions as required (i.e. add email notifications) Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. 5.  Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set or all changes will be lost. Configure Out-of-Band Notifications for EM Agent Out-of-Band notifications act as a backup when there’s a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues: • Repository Database down • All OMS are down • Repository side collection job that is broken or has an invalid schedule • Notification job that is broken or has an invalid schedule Recommendation: To setup Out-of-Band Notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1) Modify the Performance Test for the EM Console Service The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but for performance reasons, should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down then the EM Console Service will show down as this test will fail. Recommendation: Modify the HTTP Method for the EM Console Service test and the URL if required following the detailed steps below. 1.  To create an incident rule for monitoring the EM components, click on Targets / Services. From the list of services, click on the EM Console Service. 2. On the EM Console Service page, click on the Test Performance tab. 3.  At the bottom of the page, click on the Web Transaction test called EM Console Service Test 4.  Click on the Service Tests and Beacons breadcrumb near the top of the page. 5.  Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button. 6.  Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button 7) Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multi-OMSes have been implemented. Check for Known Issues Job Purge Repository Job is Shown as Down This issue is caused after upgrading EM from 12c to 12cR2. On the Repository page under Setup ? Manage Cloud Control ? Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. 
To remove this error, execute the commands below: $ repvfy verify core -test 2 -fix To confirm that the issue resolved, execute $ repvfy verify core -test 2 It can also be verified by refreshing the Job Service page in EM and check the status of the job, it should now be Up. Configure the Listener Targets in EM with the Listener Password (where required) EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and rdbms homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (ex. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190). Below is a sample of this error from the listener log file: Recommendation: Set a listener password and include it in the configuration of the listener targets in EM For steps on setting the listener passwords, see MOS notes: 260986.1 , 427422.1

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013, and the release's main goal is to improve usability. With this release, users can track the progress of a backup both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case the space on the secondary storage is completely exhausted during backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options have been introduced to control the progress reporting function of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the --show-progress option in any of the MEB operations. This option instructs MEB to output periodic short reports on the progress of time-consuming commands; its argument specifies where the output is sent, for example stderr, stdout, a file, a fifo, or a table. With the --show-progress option, both the total size of the backup to be copied and the size already copied are shown. Along with this, the state of the operation (for example, data or metadata being copied, or tables being locked) is also reported. This gives the DBA clearer information on the progress of the backup. The interval between progress reports, in seconds, is controlled by the --progress-interval option. For more information, refer to the progress-report-options documentation. MEB will also be accessible through a GUI in MySQL Workbench's next version, which can serve as the front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for Workbench upcoming release info. Along with the progress report feature, some other important issues are also addressed in MEB 3.8.2. A new command line option --on-disk-full is introduced to abort or warn the user when a backup process encounters a full disk condition; when no argument is given, it aborts by default. A few issues related to incremental backup are also addressed in this release; please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed in this release make MySQL Enterprise Backup 3.8.2 a promising release.
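
    A hedged sketch of how these options might be combined on the command line. Connection options are omitted, and the backup directory and interval are placeholders:

        mysqlbackup --backup-dir=/backups/full \
          --show-progress=stderr --progress-interval=30 \
          --on-disk-full=abort backup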

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day if it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".
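
    The same one-liner spread out with comments, using the same hypothetical UNC paths:

        # 1. list everything in the source library
        # 2. keep only the folder objects
        # 3. create a matching top-level folder in the target library
        dir "\\sharepoint\client documents" |
            where { $_.PSIsContainer } |
            % { mkdir "\\sharepoint\admin documents\$($_.Name)" }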

    Read the article

  • DBaaS Online Forum - Now available on-demand

    - by Javier Puerta
    The Database-as-a-Service Online Forum was originally broadcast on Monday, October 21, 2013, in US time zones. All of the forum's content is now available on-demand for customers and partners to watch and listen to here. Watch the on-demand forum to hear from analysts and experts on how companies are beginning to transform with Database as a Service, and learn the prescriptive steps your organization can take to design, deploy, and deliver Database as a Service today.
    Agenda:
    Keynote: Carl Olofson, Research VP, IDC; Juan Loaiza, Senior Vice President, Oracle Systems Technology; Todd Kimbriel, Director, State of Texas, eGovernment Division; Eric Zonneveld, Oracle Architect, KPN; James Anthony, Technology Director, e-DBA
    Breakout 1: Design DBaaS - Alan Levine, Senior Director, Oracle Enterprise Architects
    Breakout 2: Deploy DBaaS - Michael Timpanaro-Perrotta, Director of Product Management, Oracle
    Breakout 3: Deliver DBaaS - Sudip Datta, Vice President of Product Management, Oracle
    Closing Session: Michelle Malcher, IOUG President; Juan Loaiza, Senior Vice President, Oracle Systems Technology

    Read the article

  • Mark Wilcox Discusses Privileged Account Management

    - by Naresh Persaud
    The new release of Oracle Identity Management 11g R2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk of fraud in the enterprise, and as a result controlling access to privileged accounts is critical. The Oracle Privileged Account Manager solution can be deployed standalone or in conjunction with the Oracle Governance Suite for a comprehensive solution. As part of the comprehensive platform, Privileged Account Manager is interoperable with the Identity suite. In addition, Privileged Account Manager can re-use Oracle Identity Manager connectors for propagating changes to target systems; the two are interoperable at the data level. I caught up with Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager, and discussed with him the capabilities of the offering in this podcast. Click here to listen.

    Read the article

  • Partner Webinar Series CRM/CX Best Practices - Each Friday - 10am PST

    - by Richard Lefebvre
    A CRM/CX Best Practices Webinar will be led each week by the Oracle CRM/CX Sales Consulting team, focusing on:
    - Demo best practices and previews
    - Lessons learned from sales cycles
    - Competitive & product/solution positioning information
    - Product updates & progress
    Replays are available from the webinar portal. Please see the agenda and webinar details here and join us to learn about a new CX topic each Friday at 10am PT.

    Read the article

  • Identity R2 in London Oct 24th with Amit Jasuja

    - by Naresh Persaud
    Join Amit Jasuja, Senior Vice President, Identity Management and Security, Oracle, and Peter Boyle, Head of Identity Services, BT, in London on 24th October 2012 for the UK launch of Oracle Identity Management 11gR2. You'll learn more about the evolution of this exceptional business solution and get a unique opportunity to network with existing Oracle customers and speak directly with Oracle product experts. The agenda includes:
    - An overview of capabilities
    - Product demonstrations
    - Customer presentations
    - An interactive panel discussion
    Amit Jasuja will also be available for 1:1 meetings. Please email [email protected] to request a meeting with Amit. Click here to register.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database CloneDB feature)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog
    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:
    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]
    EM12c has come with an out-of-the-box service catalog and self-service portal since release 1. For customers, it provides the following benefits:
    - A collection of standardized database service definitions
    - Standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Chargeback plans based on service tiers and database configuration sizes
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: single instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard-based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modeling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions from 10g to 12c are supported (as certified with EM 12c)
    - All three protection modes can be used: Maximum Availability, Maximum Performance, and Maximum Protection
    - Log apply can be set to sync or async, along with the required apply lag
    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (EM 12cR4; RON = RAC One Node, supported via custom post-scripts in the service template):

        Primary   Standby (1 or more)
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    A sample service catalog would look like the image below. Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:
    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB
    The advantages of the new CloneDB integration with EM12c Snap Clone are:
    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c and delivered to developers/DBAs/QA testers via the self-service portal
    - Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
    - Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.

    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the cloud artifacts: roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self-service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario is dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input XML: this file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self-service portal.
    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self-service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you:
    - Custom naming of DB services: self-service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' for service templates: creating variants of a service template is now only a click away; this is vital when you publish service templates to represent different database sizes or service levels
    - Profile viewer: view the details of a profile, like datafiles, control files, snapshot IDs, export/import files, etc., prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete-user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Oracle Number One in Supply Chain Planning

    - by Stephen Slade
    Something nice to write home about! Saw this accomplishment and thought it worth promoting, with special congrats to the VCP team. Read on:
    Summary: Oracle is the #1 player in Supply Chain Planning according to research firm ARC Advisory Group.
    Details: The report (Source: ARC Advisory Group, "Supply Chain Planning Worldwide Outlook, Market Analysis and Forecast through 2016," Clint Reiser, Steve Banker) gives Oracle 21.1% of revenue share, compared to SAP, which was second at 18.6%. JDA Software, Aspen, Logility, and Infor were the next players in the market. The total market was valued at $1.506B. ARC counts software (new licenses and upgrades), implementation services, maintenance and support, and SaaS in its definition. ARC defines supply chain planning to include four key application areas: Extended SCP, Manufacturing Planning, Inventory/Distribution Planning, and Demand Management. Extended SCP consists of Network Design, Capable to Promise, SCP Composites, and Extended Supply Chain BI software. In the report, ARC further gives Oracle the number one spot in both the Software Revenues and Services Revenues subsegments, as well as in many vertical areas such as Government, Electronics and Electrical, Medical Products, Pharmaceutical, and Wholesale/Distribution. ARC also issued a forecast that predicts SCP revenue will grow from $1.506B in 2011 to $2.172B in 2016, a CAGR of 7.6%. The report has several positive quotes about Oracle, including calling Oracle a "visionary," and states that "Oracle has leveraged a broad set of home-grown and acquired offerings to create a comprehensive, integrated, yet modular suite with applicability to a wide range of industries."
    Blog link: http://blog.us.oracle.com/marketdata/?97119896 (shawn willett@oracle com)

    Read the article

  • Why does my browser take me to Scour.com? (redirect virus)

    - by Paula DiTallo
    The "scour" or Rootkit.Win32.TDSS virus has a long history which can be found here: http://en.wikipedia.org/wiki/Scour Here is the primary symptom: after searching for something in your web browser using google, one of the results that you click on redirects you to scour.com. If you've executed ClamWin, Malwarebytes, McAfee, Norton, etc. to find and isolate the virus without any luck--this isn't really a surprise, since this virus attaches to existing system drivers. I only know of one reliable package that will remove this without ill effects--like adding new spyware. This package is called TDSSKiller. I have seen multiple websites that claim to have this software available, but the one that I know is reliable is located here: http://support.kaspersky.com/viruses/solutions?qid=208280684 Once you go to Kaspersky's tech support site, the TDSSKiller zip file is available for downloading. When you execute this software, you will be able to "cure" or repair the infected driver. Remember to jot down the name of the driver for future reference--should you need to reinstall the driver from a "same-as" working computer, or your install disk if the repair is ineffective. The driver that happened to get infected on my computer was the tcpip.sys driver. This caused my win sockets to loose their ip addresses. In most other instances, less critical drivers such as HDAudBus.sys are infected. In my case, I was not through correcting my computer problems until I corrected the broken WinSock issue and loaded an earlier version of the tcpip.sys driver from: C:\WINDOWS\ServicePackFiles\i386 which I placed in: C:\WINDOWS\system32\drivers Don't forget to reboot your computer after your repair! Once you download TDSSKiller and cure/repair your infected driver(s), the redirect on google searches should disappear .

    Read the article

  • When does a Project Manager start in a project?

    - by johndoucette
    From a colleague of mine: "As a project manager, when do you typically like to get initially involved in the project? Is it better for the PM to be rolled on during the project kick-off or the first week, or is it better to roll on in the second week when things settle down?"
    My textbook answer is that the Project Manager is responsible for the successful completion and delivery of the expected outcome of the project through the following major tasks:
    1. Identifying requirements
    2. Establishing clear and achievable objectives
    3. Balancing the competing demands for quality, scope, time, and cost
    4. Adapting the specifications, plans, and approach to the different concerns and expectations of the various stakeholders
    However, my colleague is often a lead technical consultant coming into a project alone to help a client solve a complex problem. As Magenic consultants, we all possess many of the "project managing" skills I talked about above and tend to be responsible for items #1 and #2, as well as the actual architecture/design tasks early in a project. When the real development begins and there is no PM involved, the project will quickly get harder to execute unless items #3 and #4 are assigned to a Project Manager. In software development, context switching between coding and other administrative activities is the hardest skill to perfect. In my experience, I have rarely been introduced to someone who has mastered it. This is the limbo I was in when I was asked to become a PM while still developing. "Put down the code" was not only a profound statement but, looking back, a necessary one. Unless you are lucky enough to have found that one developer who is a superman, asking your developers (internal corporate or consultant) to perform tasks #3 and #4 will surely take more time, allow opportunity for more scope, and eventually cost more. Project Managers are crucial to the overall success of a project, and I prefer them to start by taking ownership of delivery on day one.

    Read the article

  • The environment that is uniquely Oracle by Phillip Yi

    - by Nadiya
    In the past month, I have been given the exclusive opportunity to hire a Legal graduate/intern for Oracle's in-house Legal Counsel based here in North Ryde, Sydney. Whilst talking to various applicants, I am asked the same, broad question: what are we looking for? Time and time again I have spoken about targeting the best, or targeting the best fit. I am an advocate of the latter, hence when approaching this question I answer very simply: 'we are looking for the individual that will fit into the culture and environment that is uniquely Oracle'. So, what is the environment/culture like here at Oracle? What makes Oracle so unique and such a great place to work, especially as a graduate? Much like our business model, we are forward and innovative thinkers; we are not afraid to try new things, whether they succeed or fail. We are all highly driven, motivated and successful individuals; Oracle is a firm believer that in order to be driven, motivated and successful, you need to be surrounded by like-minded people. And last, we are all autonomous and independent self-starters; at Oracle you are treated as an adult. We are not in the business of continually micro-managing, nor of constantly spoon-feeding or holding your hand. Oracle has an amazing support, resource and training network; if you need support, extra training or resources, it is there for the taking. And of course, if you do it of your own accord, you will learn it much quicker. For those reasons, Oracle is unique in its environment: we set everyone up for success. With such a great working environment/culture, why wouldn't you choose Oracle?

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.
    1 - The model does not reflect the problem you are trying to solve
    For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results of actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you use the [bad] proxy model until the recommendation model converges.
    2 - Data is not predictive enough
    If the inputs are not correlated with the output, then the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather, and the output is what car the customer bought, there may be no correlations found. In this case you should see a low-quality model. The solution in this case is to include more relevant inputs.
    3 - Not enough cases seen
    If the training data does not include enough cases (at least 200 positive examples for each output), then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as output, use the product category, price, and brand name, and then combine these models.
    4 - Output leaking into the input, giving the false impression of good-quality models
    If the input data in the training includes values that have changed, or are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input or make sure they reflect the value as of the time of decision, and not after the result is known.
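
    As a quick screen for point 4, here is a hedged Python sketch; the file name, target column, and 0.95 threshold are all hypothetical, and the target is assumed to be numeric (e.g. 0/1). It flags inputs whose correlation with the output is suspiciously strong:

        import pandas as pd

        df = pd.read_csv("training_data.csv")       # hypothetical training set
        target = "registration_succeeded"           # hypothetical 0/1 output column
        numeric = df.select_dtypes("number")

        for col in numeric.columns.drop(target, errors="ignore"):
            corr = numeric[col].corr(numeric[target])
            # Near-perfect correlation often means the value was set after the
            # outcome was known, i.e. the output leaked into the input
            if abs(corr) > 0.95:
                print(f"{col}: corr={corr:+.2f} - possible leakage")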

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    - Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    - Grid container support: JavaFX Composer -- An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    - John Brunswick: Site Studio Mobile Example - WCM Reuse -- The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    - @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls -- "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)
    - James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh -- "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)
    - Java / Oracle SOA blog: HTTP binding in SOA Suite 11g PS2 (tags: ping.fm)
    - Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    - SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • Check Out Eye Tracking, Mobile, and Fusion Apps at Apps UX Demo Pods

    - by Oracle OpenWorld Blog Team
    By Kathy Miedema, Oracle Applications User Experience
    Among the many cool things to see at the Oracle OpenWorld DEMOgrounds this year will be demo pods featuring some of the cutting-edge tools in Oracle's arsenal of usability evaluation methods. OK, so we're bragging a little. But past conference-goers agree: these demos consistently hit the Top 10 for number of visits. Why? Because you get to try out our eye-tracking tool, which follows where a user looks on a screen and helps the UX team decipher issues with navigation design. Or you can see our facial gesture analysis tool in action, which helps us read the emotions you might be experiencing as you look at a screen: happy, sad, or dismayed, to name a few. Are you interested in Oracle's strategy for user experience? Come to the Apps UX pods for a look at enterprise applications on mobile devices, including smart phones and the iPad. Stay for a demo of self-service or CRM tasks in the Fusion Applications welcome experience. The DEMOgrounds for Oracle Applications are located on the lower level of Moscone West. Hours for the Exhibition Hall are:
    Monday, October 1: 9:30 a.m. to 6:00 p.m.
    Tuesday, October 2: 9:45 a.m. to 6:00 p.m.
    Wednesday, October 3: 9:45 a.m. to 4:00 p.m.
    Not yet registered for Oracle OpenWorld? Register now!

    Read the article

  • Deciding On Features For Open Source

    - by Robz / Fervent Coder
    Open source feature selection is subjective. An interesting question was posed to me recently at a presentation: "How do you decide what features to include in the [open source] projects you manage?"
    Is It Objective?
    I'd like to say that it's really objective and that we vote on features and look at what carries the most interest of the populace. Actually, no, I wouldn't. I don't think I would enjoy working on open source (OSS) as much if someone else decided what features I should include. It already works that way at work. I don't want to come home from work and work on things that others decide for me unless they are paying me for those features. So how do I decide on features for our open source projects? I think there are at least three paths to feature selection, and they are not necessarily mutually exclusive.
    Feature Selection IS the Set of Features For the Domain
    Your product, in whatever domain it is in, needs to have the basic set of features that make it answer the needs of that domain. That is different for every product, but if you take for example a build tool, at the very least it needs to be able to compile source. And these basic needed features are not always objective either. Two people could completely disagree about what makes for a required feature to meet a domain need for a product. Even one person may disagree with himself/herself about what features are needed based on different timeframes. So that leads us down to subjective.
    Feature Selection IS An Answer To Competition
    Some features go in because the competition adds a feature that may draw others away from your product offering. With OSS, there are all free alternatives, so if your competition adds a killer feature and you don't, there isn't much other than learning (how to use the other product) to move your customers off to the competition. If you want to keep your customers, you need to be ready to answer the question of adding the features your competition has added. Sometimes it's about adding a feature that your competition charges for, but you add it for free; that draws people to the free alternative, so sometimes that adds a motivation to select a feature. Sometimes it's because you want those features in your product, either to learn how you can answer the question of how to do something and/or because you have a need for that feature and you want it in your product. That also leads us down the road to subjective.
    Feature Selection IS Subjective
    I decide on features based on what I want to see in the product I am working on. Things I am interested in or have the biggest need for usually get picked first, with things that do not interest me either coming later or not at all. Most people get interested in an area of OSS because it solves a need for them and/or they find it interesting. If one of these two things is not happening and they are not being paid, it's likely that person will move on to something else they find interesting or just stop OSS altogether. OSS feature selection is just that: subjective. If it wasn't, it wouldn't be opinionated and it wouldn't have a personality about it. Most people like certain OSS because they like where the product is going or the personalities behind the product. For me, I want my products to be easy to use and to solve an important problem. If it takes you more than 5-10 minutes to learn how to use my product, I know you are probably going somewhere else. So I pick features that make the product easy to use and learn, and those are not always the simplest features to work on. I opt for conventions and make the product opinionated, because I think that is what makes a product easier to use: it already works with little setup. And I like to provide the ability for power users to get in and change the conventions to suit their needs. So those are required features for me, above and beyond the domain features. I like to think I do a pretty good job at this. Usually when I present on something I've created, I like seeing people's eyes light up when they see how simple it is to set up a powerful product like UppercuT.
    Patches And/Or Donations
    But before you say I'm a bad person or won't use my product, remember that I'll always accept patches, or I might like the feature that you suggest. If you like using the products I provide and they solve a problem for you, the two biggest compliments you can provide are either a patch or a donation. If you think the product is great, but it could do this one other thing and then it would be awesome(!), then consider contacting me and providing a patch, or consider contacting me with a donation and a request to put the feature in. And alternatively, if it's a big feature, you could hire me to work on the product to make it even better.
    What If There Are Multiple Committers?
    In the question of multiple committers, I choose that someone always makes the ultimate decision on whether a feature should be part of a product or not. But for other OSS projects maybe this is not the case. If there is no ultimate decision maker, then there is the possibility of either adding every feature suggested or deadlocking on two conflicting features. So let me pose this question: if you work on open source, how do you decide what features to put in your open source projects? How do you decide what doesn't belong? What do you do when there are conflicting features?

    Read the article

  • Thanks to all attendees in Seattle and Toronto

    - by Mike Dietrich
    Must be an Oracle-sponsored number plate... Thanks to everybody who attended our Upgrade Workshops in Seattle and Toronto this past week. Seattle had a quite unusual track setup with two parallel breakout sessions; we hope you've enjoyed it as well. You'll find the slides for the keynote "New Features" and the "Upgrade Workshop - The Whole Story" presentations below. Toronto was quite amazing as well, with so many (hope not too many) people in a slightly crowded room at the Interconti. We got a lot of interesting and sometimes challenging questions, and we would like to thank you for your patience. Please find all the slides here:
    - Upgrade Workshop ~545 slides "The Whole Story" presentation
    - New Features for Oracle Database 11g Release 2 - Roy's keynote from Seattle
    For me it was the first time in Canada, and even though it was a very short stopover, I enjoyed it very much. Roy and I had dinner at the CN Tower, and besides good food, a marvelous view. I didn't know before that Toronto, within its city limits, is the fifth most populous city in North America. And even though Air Canada ground personnel were partially on strike, I caught my flight to Boston after the workshop. Thanks again, and hope to see you next time - happy upgrades, Mike

    Read the article

  • Project Coin: JSR 334 has a Proposed Final Draft

    - by darcy
    Reaching nearly the last phase of the JCP process, JSR 334 now has a proposed final draft. There have been only a few refinements to the specification since public review:
    - Incorporated language changes into the JLS proper.
    - Forbade combining diamond and explicit type arguments to a generic constructor.
    - Removed the unusual protocol around Throwable.addSuppressed(null) and added a new constructor to Throwable to allow suppression to be disabled.
    - Added disclaimers that OutOfMemoryError, NullPointerException, and ArithmeticException objects created by the JVM may have suppression disabled.
    - Added thread safety requirements to Throwable.addSuppressed and Throwable.getSuppressed.
    Next up is the final approval ballot; almost there!
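
    For readers who haven't met the suppression mechanism yet, here is a minimal sketch of the semantics JSR 334 specifies (class and message names are made up): when the try block and close() both throw, the close() exception is recorded on the primary one rather than lost:

        class Resource implements AutoCloseable {
            @Override
            public void close() {
                throw new IllegalStateException("close failed");
            }
        }

        public class SuppressedDemo {
            public static void main(String[] args) {
                try (Resource r = new Resource()) {
                    throw new RuntimeException("body failed");
                } catch (RuntimeException e) {
                    // Prints "body failed", then the suppressed "close failed"
                    System.out.println(e.getMessage());
                    for (Throwable s : e.getSuppressed()) {
                        System.out.println("suppressed: " + s.getMessage());
                    }
                }
            }
        }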

    Read the article

  • Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation

    - by Juan Camilo Ruiz
    This new post in the series on ADF integration with Oracle E-Business Suite was written by Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team. Based on a previous post in the series, a reader asked what to do if you have an existing responsibility assigned to lots of users, instead of the UMX role that the Oracle E-Business Suite SDK for Java JAAS Implementation requires. It would be tedious to assign a new role directly to hundreds or thousands of users, so naturally we'd like to avoid that if possible. Most people don't know this, but it's possible to assign a UMX role to a responsibility in Oracle User Management. Once you do that, users with your responsibility will all inherit your UMX role automatically. You can then proceed with using your UMX role with JAAS for ADF. Here is how to assign a UMX role to a responsibility in Oracle E-Business Suite:
    1. In the User Management responsibility, go to the Roles & Role Inheritance page.
    2. Search for the responsibility you want.
    3. In the search results table, click the "View In Hierarchy" icon for your responsibility. Note that the codes for responsibilities start with FND_RESP, while the codes for roles start with UMX.
    4. In the Role Inheritance Hierarchy, click on the Add Node icon (green plus +) for your responsibility.
    5. Now you will see what appears to be the same page again, but it is a little different (note the text at the top telling you that the role you select will be inherited). This time, either search or expand nodes until you find your custom UMX role. Use the Quick Select to choose that role.
    6. You will be sent back to the first screen, where you should see a confirmation message at the top. On the same page you can verify that the custom UMX role is underneath the responsibility. You may need to expand one or more nodes to see it. You might see some other roles that have been inherited as well.
    Now that your users have the UMX role, you can test that the UMX role is being passed through to your ADF application through the Oracle E-Business Suite SDK for Java JAAS feature. Happy coding!
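
    As a footnote, if you want to double-check the inheritance outside the UI, here is a hedged SQL sketch. It assumes the standard workflow directory-service view WF_USER_ROLE_ASSIGNMENTS and a UMX| role-code prefix, both of which may differ in your release, and the role code itself is hypothetical:

        -- List users who now inherit the custom UMX role through the responsibility
        SELECT user_name, role_name
          FROM wf_user_role_assignments
         WHERE role_name LIKE 'UMX|XX_CUSTOM_ROLE%';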

    Read the article

  • NNTP bridge for MS forums

    - by Luca Calligaris
    For those who want to use their newsreader to interact with MS forums, there's a new tool: the NNTP Bridge application serves as a channel that enables NNTP newsreaders to read and write content on Microsoft Forums. You can download the application and documentation from http://connect.microsoft.com/MicrosoftForums (registration required).

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals. Key findings include:
    - Corporate budgets increase, but trailing. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year's spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.
    - Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.
    - Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.
    - Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.
    IOUG Recommendations
    The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following:
    - Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.
    - Get business buy-in and support. Data security only works if it is backed through executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.
    - Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.
    Read the IOUG Data Security Survey now.

    Read the article

  • Exam 70-518 Pro: Designing and Developing Windows Applications Using Microsoft .NET Framework 4

    - by Raghuraman Kanchi
    Today I noticed some topics from questions in the beta exam 70-518 that stumped me. I am just mentioning the topics below for future understanding and reference. This exam made me feel as if I were attempting questions about the .NET 4.0 Framework.
    1. Content-based vs. context-based filtered routing - deciding the nearest geographical database
    2. Choosing an appropriate strategy for communicating with COM components and mainframe services
    3. Microsoft Sync Framework
    4. PLINQ
    5. Difference between Dispatcher.BeginInvoke and Dispatcher.Invoke
    6. Accessibility testing/scalability testing (this objective may include but is not limited to: recommending functional testing; recommending reliability testing, i.e. performance testing, stress testing, scalability testing, duration testing)
    7. Profiling, tracing, performance counters, audit trails
    8. Local vs. centralized reporting

    Read the article
