Search Results

Search found 69138 results on 2766 pages for 'oracle data mining'.


  • Exadata Parameter _AUTO_MANAGE_EXADATA_DISKS

    - by AVargas
    Exadata auto disk management is controlled by the parameter _AUTO_MANAGE_EXADATA_DISKS. The default value for this parameter is TRUE. When _AUTO_MANAGE_EXADATA_DISKS is enabled, Exadata automates the following disk operations:

    - If a griddisk becomes unavailable/available, ASM will OFFLINE/ONLINE it.
    - If a physicaldisk fails or its status changes to predictive failure, then for the griddisks built on it, ASM will DROP FORCE the ones on the failed disk and DROP the ones on the disk with predictive failure.
    - If a flashdisk's performance degrades and there are griddisks built on it, they will be DROPPED FORCE in ASM.
    - If a physicaldisk is replaced, the celldisk and griddisks will be recreated, and the griddisks will be automatically ADDED back in ASM if they were automatically dropped by ASM. If you dropped the disks manually, they will not be re-added.
    - If a NORMAL, ONLINE griddisk is manually dropped, the FORCE option should not be used; otherwise the disk will be automatically added back in ASM.
    - If a griddisk is inactivated, ASM will automatically OFFLINE it.
    - If a griddisk is activated, ASM will automatically ONLINE it.

    Some error conditions may require temporarily disabling _AUTO_MANAGE_EXADATA_DISKS. Details are in MOS note 1408865.1 - "Exadata Auto Disk Management Add disk failing and ASM Rebalance interrupted with error ORA-15074". Immediately after taking care of the problem, _AUTO_MANAGE_EXADATA_DISKS should be set back to its default value of TRUE. Full details on the auto disk management feature in Exadata are in Doc ID 1484274.1.
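
    The article does not show the commands themselves. As a rough sketch (assuming the usual ALTER SYSTEM syntax for hidden parameters, run as SYSASM on the ASM instance - verify the exact procedure against MOS note 1408865.1 before using), the temporary toggle would look like this:

        -- Hidden parameters must be double-quoted; run on the ASM instance.
        ALTER SYSTEM SET "_AUTO_MANAGE_EXADATA_DISKS" = FALSE;

        -- ... resolve the error condition, then restore the default:
        ALTER SYSTEM SET "_AUTO_MANAGE_EXADATA_DISKS" = TRUE;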

    Read the article

  • dnssec zonesigner ignoring out-of-zone data

    - by jordi12100
    I am trying to configure DNSSEC with BIND9 on CentOS 6.4 running the DirectAdmin control panel, following this tutorial: https://www.dnssec-tools.org/wiki/index.php/Zonesigner But I can't get it to work. When I run this command:

        zonesigner --genkeys jordikroon.nl.db jordikroon.nl.db.signed

    I get this error:

        jordikroon.nl.db:17: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:18: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:22: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:29: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:33: ignoring out-of-zone data (jordikroon.nl)
        zone jordikroon.nl.db/IN: has no NS records
        zone jordikroon.nl.db/IN: not loaded due to errors.

    I can't find anything on the web about this error. This is my zone db file:

        $TTL 14400
        @ IN SOA ns1.ghservers.org. hostmaster.jordikroon.nl. (
            2013090703 14400 3600 1209600 86400 )
        jordikroon.nl. 14400 IN NS ns1.ghservers.org.
        jordikroon.nl. 14400 IN NS ns2.ghservers.org.
        cp 14400 IN A 85.17.32.228
        ftp 14400 IN A 85.17.32.228
        jordikroon.nl. 14400 IN A 85.17.32.228
        localhost 14400 IN A 127.0.0.1
        mail 14400 IN A 85.17.32.228
        pop 14400 IN A 85.17.32.228
        smtp 14400 IN A 85.17.32.228
        www 14400 IN A 85.17.32.228
        jordikroon.nl. 14400 IN MX 10 mail
        jordikroon.nl. 14400 IN TXT "v=spf1 a mx ip4:85.17.32.228 ~all"
        localhost 14400 IN AAAA ::1

    How do I fix this? All the IN records are being ignored. Any help is welcome :-)
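
    A hint at the likely cause (an educated guess, not from the thread): the error names the zone as "jordikroon.nl.db/IN", meaning zonesigner has derived the zone name from the file name, so every jordikroon.nl record looks out-of-zone. Passing the zone name explicitly should fix it (option spelling per the dnssec-tools zonesigner documentation; verify with zonesigner -help):

        # Tell zonesigner the real zone name instead of letting it
        # default to the file name "jordikroon.nl.db":
        zonesigner --genkeys --zone jordikroon.nl jordikroon.nl.db jordikroon.nl.db.signed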

    Read the article

  • Newest version available for download: Apex 4.2 is here!

    - by britta wolf
    APEX 4.2 has been available for download since October 12, 2012. Install it quickly... and then try out the new features right away, for example the simple, declarative creation of APEX applications for mobile devices, or HTML5 charts. There are further improvements beyond these: the Excel upload for end users has been enhanced, and you can now place 200 (instead of 100) items on a page.

    Read the article

  • ISE (Germany) and Techdata Azlan: Exadata win over IBM at Immonet

    - by Javier Puerta
    Immonet, a subsidiary of German media company Axel Springer, provides cross-media real estate marketing via the Internet, newspapers, and other channels. The Immonet.de website is the number two German property portal, with approximately 1.8 million unique visitors per month and more than 950,000 current online offerings. Read here how ISE used Exadata to solve the performance problems that Immonet was experiencing.

    Read the article

  • Customer Centric Wealth Management

    - by michael.seback
    While the world continues to search for a way out of the recent financial turmoil and recession, the crisis has no doubt exposed the inherent faults in the wealth management industry and the larger financial system. To counter these apprehensions, wealth management firms are now actively seeking and evaluating avenues to rebuild the lost trust. They are looking at engaging their customers in managing their investments in a more collaborative and transparent manner. At the same time, wealth managers are seeking to equip themselves with complete and comprehensive customer information in order to provide the best advice and the best solution at the right time. Read your copy of this new global White Paper on Wealth Management.

    Read the article

  • Attention Extension Developers: Your input wanted!

    - by John 'JB' Brock
    Your Input Wanted! I've posted a lot of different topics throughout 2011, and would really like to provide info that is most important to you, the extension developer, as we head for 2012. What are the most important areas that you want to learn more about? Post your requests for examples and topics in the comments section. Let me know what you are struggling with, or something that you worked out but that took way too long to figure out. I'll take the list and do my best to provide samples over the coming months. Please provide the version of JDeveloper that you want the topic to cover. Remember:

    - 11gR1 = 11.1.1.x (e.g. 11.1.1.5.0)
    - 11gR2 = 11.1.2.x (e.g. 11.1.2.1.0)

    Thanks in advance for your comments and suggestions. Let's get the JDev Extension community going in 2012! --jb
    John "JB" Brock, Oracle Product Manager - JDev ESDK

    Read the article

  • Conky to Monitor WLS Managed Servers

    - by John Graves
    I've been using a little utility on my Linux-based machines for years called conky. It can be used to monitor system resources, but I wanted to modify it to monitor my WebLogic managed servers too. Once conky is installed, you'll need to update the .conkyrc file. Here is a simple example; the important lines are these:

        - Admin (7001) ${if_empty ${exec /usr/sbin/lsof -i :7001 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 7001 7001 count}) ${endif}
        - OSB (8011) ${if_empty ${exec /usr/sbin/lsof -i :8011 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 8011 8011 count}) ${endif}
        - BAM (9001) ${if_empty ${exec /usr/sbin/lsof -i :9001 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 9001 9001 count}) ${endif}
        - DB (1521) ${if_empty ${exec /usr/sbin/lsof -i :1521 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 1521 1521 count}) ${endif}

    It uses lsof to find out if the ports are in use. Here is a video showing it in action.

    Read the article

  • Sweden: Hot Java in the Winter

    - by Tori Wieldt
    No, it's not global warming, but for some reason Sweden is a hotbed of great Java developers and great Java conferences in the winter. First, all three Swedish Java Champions are on Computer Sweden's 100 Best Swedish Developers list. You can read the full Sweden's Top 100 Developers article *if* you can read Swedish (or want to use Google Translate). Congratulations to:

    - Jonas Bonér, CTO, Typesafe. Skills: in recent years has worked on solutions for scalability and availability; previously mostly middleware and compilers. Other qualifications: is behind the AspectWerkz framework and the Akka platform for developing parallel, scalable, and fault-tolerant software in Scala and Java.
    - Rickard Oberg, Neo Technology. Skills: Java, Java EE frameworks, and graph databases. Other qualifications: founder of the open source projects XDoclet and WebWork (the latter is now called Struts 2). Wrote the basics of the JBoss application server. Founder of Senselogic and architect of the CMS and portal product SiteVision. Launched the framework Qi4j. Has spoken at JavaZone, JavaPolis, Jfokus, and Øredev.
    - Mattias Karlsson. Skills: Java; good at agile system development methods and architecture. Sectors: telecom, banking, finance, and insurance. Other qualifications: runs Javaforum Stockholm and arranges the Jfokus conference. Frequent speaker at major international conferences such as JavaOne. Holds the title Java Champion.

    Also, Sweden is home to some top-notch Java developer conferences during the winter:

    - jDays, Gothenburg, Sweden, Dec 3-5. jDays, a dynamic Java developer conference, comes to Gothenburg. In addition to the conference presentations, visitors can join courses in Java and related technologies for free.
    - Jfokus, Stockholm, Sweden, Feb 4-6. Jfokus is the largest annual conference for everyone who works with Java in Sweden. The conference is arranged together with Javaforum, the Stockholm JUG.

    Thanks to all the Java community who keep Java hot in Sweden!

    Read the article

  • Windows Azure Emulators On Your Desktop

    - by BuckWoody
    Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don't have to do that right away. In fact, you can download the Windows Azure Compute Emulator - a "cloud development environment" - right on your desktop. No, it's not for production use, and no, you won't have other people using your system as a cloud provider, and yes, there are some differences with production Windows Azure, but you'll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you've been testing, a few clicks deploy it to your subscription once you create one.

    So what deep magic does it take to run such a thing right on your laptop or even a Virtual PC? Well, it's actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on - you're welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx

    This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator - and then you're all set. Right-click the icon for Visual Studio and select "Run as Administrator". Now open a new "Cloud" type of project, and add the Web and Worker Roles that you want to code. When you're done with your design, press F5 to start the desktop version of Azure. Want to learn more about what's happening underneath? Right-click the tray icon with the Azure logo, and select the two emulators to see what they are doing.

    In the configuration files, you'll see a "Use Development Storage" setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you're ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need.

    Want to learn more about all this?

    Overview of the Windows Azure Compute Emulator: http://msdn.microsoft.com/en-us/library/gg432968.aspx
    Overview of the Windows Azure Storage Emulator: http://msdn.microsoft.com/en-us/library/gg432983.aspx
    January 2011 Training Kit: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en
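
    The post doesn't include the setting's markup. As a sketch of the kind of entry it refers to (the setting name "DataConnectionString" and role name are typical examples, not from the article), the development-storage switch in ServiceConfiguration.cscfg looks like this:

        <!-- Points the role at the local Storage Emulator; replace the value
             with a real storage account connection string when deploying. -->
        <Role name="WebRole1">
          <Instances count="1" />
          <ConfigurationSettings>
            <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
          </ConfigurationSettings>
        </Role>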

    Read the article

  • SOA Suite 11g Database Growth Management

    - by JuergenKress
    This whitepaper, "Oracle SOA Suite 11g Database Growth Management", has been written to highlight the need to implement an appropriate strategy to manage the growth of the SOA 11g database. The advice presented should facilitate better dialog between SOA and database administrators when planning database and host requirements.

    Whitepaper: Oracle SOA Suite 11g Database Growth Management
    Advisor Webcast: "Oracle SOA Suite 11g Database Growth Management", April 11th, 2012
    Author: Michael Bousamra. Contributing authors: Deepak Arora, Sai Sudarsan Pogaru.

    SOA Partner Community: For regular information on Oracle SOA Suite become a member in the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community Forum,SOA Specialization,purging,Michael Bousamra,Deepak Arora,Sai Sudarsan Pogaru,SOA Suite 11g Database Growth Management

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people think the lookup component simply looks up additional information, pay little attention to it, and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to configure the lookup component's cache mode.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache
    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues - memory pressure in particular - when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation). There's a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation; a sketch follows at the end of this excerpt. Again, neither of these issues is a reason to avoid full cache mode, but they should be used to determine whether full cache mode fits a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:
    - The size of the reference data set is small to moderately sized
    - The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache
    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:
    - The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small
    - The size of the lookup data is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache
    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

    As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the below conditions are true:
    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion
    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS
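
    As promised above, here is a hypothetical illustration of the case-sensitivity workaround (the table and column names are invented for the example): force one case on the reference side in the lookup query, and apply the same UPPER() to the pipeline column (e.g. via a Derived Column transformation) before the lookup.

        -- Lookup query: normalize case on the reference data so the .NET
        -- string comparison in full cache mode matches regardless of case.
        SELECT CustomerKey,
               UPPER(EmailAddress) AS EmailAddress
        FROM dbo.DimCustomer;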

    Read the article

  • WNA Configuration in OAM 11g

    - by P Patra
    Pre-Requisite: A Kerberos authentication scheme has to exist. This is usually a pre-configured OAM authentication scheme. It should have Authentication Level - "2", Challenge Method - "WNA", Challenge Direct URL - "/oam/server" and Authentication Module - "Kerberos". The default authentication scheme name is "KerberosScheme"; this name can be changed. The DNS name has to be resolvable on the OAM server, and the DNS names with referrals to AD have to be resolvable on the OAM server; ensure nslookup works for the referrals.

    Pre-Install: The AD team produces the keytab file on the AD server by running the ktpass command; provide the OAM hostname to the AD team. Receive from the AD team: the keytab file produced by the ktpass run, the ktpass username, and the ktpass password. Copy the keytab file to a convenient location in the OAM install tree (for instance where the oam-policy.xml file resides) and rename it if desired, e.g.:

        /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt

    Configure WNA Authentication on the OAM Server:

    1. Create the config file krb.conf and set the environment variable KRB_CONFIG to its path:

        KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf

    The variable KRB_CONFIG has to be set in the profile of the user that the OAM Java container (i.e. WebLogic Server) runs as - e.g. the "applmgr" user - so that the setting is available to the OAM server. In the krb.conf file specify (note: the excerpt dropped the brace closing the realm entry; it is restored here):

        [libdefaults]
        default_realm = NOA.ABC.COM
        dns_lookup_realm = true
        dns_lookup_kdc = true
        ticket_lifetime = 24h
        forwardable = yes

        [realms]
        NOA.ABC.COM = {
            kdc = hub21.noa.abc.com:88
            admin_server = hub21.noa.abc.com:749
            default_domain = NOA.ABC.COM
        }

        [domain_realm]
        .abc.com = ABC.COM
        abc.com = ABC.COM
        .noa.abc.com = NOA.ABC.COM
        noa.abc.com = NOA.ABC.COM

    where hub21.noa.abc.com is the load-balanced DNS VIP name for the AD server and NOA.ABC.COM is the name of the domain.

    2. Create an authentication policy to WNA-protect the resource (i.e. EBSR12) and choose "KerberosScheme" as the authentication scheme. Login to OAM Console => Policy Configuration Tab => Browse Tab => Shared Components => Application Domains => IAM Suite => Authentication Policies => Create. Name: ABC WNA Auth Policy. Authentication Scheme: KerberosScheme. Failure URL: http://hcm.noa.abc.com/cgi-bin/welcome

    3. Edit the system configuration for Kerberos: System Configuration Tab => Access Manager Settings => expand Authentication Modules => expand Kerberos Authentication Module => double-click on Kerberos.
    - Edit the "Key Tab File" textbox: /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt
    - Edit the "Principal" textbox: the service principal from the ktpass run (the excerpt obfuscated it; the standard form is HTTP/<oam-host-fqdn>@NOA.ABC.COM)
    - Edit the "KRB Config File" textbox: /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf
    - Click "Apply"

    4. In the script setting the environment for the WLS server where OAM is deployed, set the variable:

        KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf

    5. Restart the OAM server and the OAM server container (WebLogic Server).
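
    The note ends at the restart; a quick sanity check worth adding (a hedged sketch using standard MIT Kerberos client tools, not part of the original steps) is to verify the keytab before restarting:

        # List the principals stored in the keytab:
        klist -k -t /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt

        # Try to obtain a ticket with it (principal name as produced by ktpass):
        kinit -k -t /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt \
              HTTP/<oam-host-fqdn>@NOA.ABC.COM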

    Read the article

  • Highlight Word add-in for Visual Studio 2010 [SSDT]

    - by jamiet
    I’ve just been alerted by my colleague Kyle Harvie to a Visual Studio 2010 add-in that should prove very useful if you are an SSDT user. Its simply called Highlight all occurrences of selected word and does exactly what it says on the tin, you highlight a word and it shows all other occurrences of that word in your script: There’s a limitation for .sql files (which I have reported) where the highlighting doesn’t work if the word is wrapped in square brackets but what the heck, its free, it takes about ten seconds to install….install it already! @Jamiet

    Read the article

  • Retail CEO Interviews

    - by David Dorf
    Businessweek's 2012 Interview Issue has interviews with three retail CEOs that are worth a quick read. I copied some excerpts below, but please follow the links to the entire interviews.

    Ron Johnson, CEO, JCPenney
    Take me through your merchandising. One of the things I learned from Steve [Jobs] - Steve said three times in his life he had the chance to be part of the change of an interface. If you change the interface, you can dramatically change the entire experience of the product. For Steve, that was the mouse, the scroll wheel on the iPod, and then the [touch]screen. What we're trying to do here is change the interface of retail. What we call that is the street, and you're standing in the middle of it. When you walk into a store today, you're overwhelmed by merchandise. There is a narrow aisle. Typically, it's filled with product on tables and you're overwhelmed with the noise of signs and promotions. Especially in the age of the Internet, the idea of going to a very large store and having so much abundance is actually not very appealing. The first thing you find here is you're inspired. I have used the mannequins. The street is actually this new navigation path for a retail store. So if you come in here - you'll notice that these aisles are 14 feet wide. These are wider than Nordstrom's (JWN). Slide show of JCPenney store.

    Walter Robb, co-CEO, Whole Foods
    What did you learn from the recent recession about selling groceries? It was a lot of humble pie, because our sales experienced a drop that I have never seen in 32 years of retail. Customers left us in droves. We also learned that there were some very loyal customers who loved Whole Foods (WFM), people who said, "I like what you stand for. I like coming here. I like this experience." That was very affirming. I think the realization was that we've got some customers, and we need to make sure we know who they are. So instead of chasing every customer out there, we started doing customer discussion groups. We were growing for growth's sake, which is not a good strategy. We were chasing the rainbow. We cut the growth in half overnight and said, "All right, slow down. Let's make sure we're doing this better and more thoroughly and more thoughtfully." This company is a mission-based company. This company started to change the world by bringing healthier food to the world. It's not about the money, it's about the impact, and this company is back on track as a result of those experiences. Video of Whole Foods store tour.

    Kay Krill, CEO, Ann Taylor
    You've worked in retail all your life. What drew you to it? I graduated from college, and I did not know what I wanted to do. Macy's (M) came to campus to interview for their training program, and I thought, "Let me give it a try." I got the job and fell in love with the industry. The president of Macy's at the time said, "If you don't wake up every morning dying to go to work, then retailing is not for you; it has to be in your blood." It was in my blood. I love the fact that every day is different. You can get to be creative one day, financial the next day, marketing the next. I love going to stores. I love talking to associates. I love talking to clients. There's not a predictable day.

    Read the article

  • Highlights from recent Yammer video

    - by Eric Jensen
    A few weeks back, Ryan Kennedy of Yammer gave a talk about Berkeley DB Java Edition. You can find it posted on Alex Popescu's blog, or go directly to the video post itself. It was full of useful nuggets of information, such as why they chose to use BDB JE, performance, and some tips & tricks at the end. At over 40 minutes, the video is quite long. Ryan is an entertaining speaker, so I suggest you watch all of it. But if you only have time for the highlights, here are some times you can sync to:

    - 06:18 - the Berkeley DB JE features that caused Yammer to select it, including replication, automatic leader election and failover, and configurable durability and consistency guarantees
    - 23:10 - system performance characteristics
    - 35:08 - tips and tricks for using Berkeley DB JE

    I know the Berkeley DB development team is very pleased that BDB JE is working out well for Yammer. We definitely encourage others out there to take note of this success, especially if your requirements are similar to Yammer's (which Ryan outlines at the beginning of his talk).
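
    To make the "configurable durability" point concrete, here is a minimal BDB JE sketch (standalone and non-replicated; the environment path and durability choice are illustrative assumptions, not from the talk):

        import com.sleepycat.je.Durability;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import java.io.File;

        public class JeDurabilityDemo {
            public static void main(String[] args) {
                EnvironmentConfig cfg = new EnvironmentConfig();
                cfg.setAllowCreate(true);
                cfg.setTransactional(true);
                // Trade commit latency against durability: write to the OS
                // buffer on commit, but skip the fsync.
                cfg.setDurability(Durability.COMMIT_WRITE_NO_SYNC);
                // The environment home directory must already exist.
                Environment env = new Environment(new File("/tmp/je-env"), cfg);
                env.close();
            }
        }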

    Read the article

  • WCF Data Service Pipeline

    - by Daniel Cazzulino
    For documentation purposes, I just drew the following UML sequence diagrams for the "Astoria" pipeline, using Visual Studio 2010 Ultimate. There is one sequence for a single-entity (or non-batched) request, and a different sequence for a batch request (diagrams in the original post). The DataService component is your own DataService<T>-derived class, and DataService.ProcessingPipeline refers to its ProcessingPipeline property, which exposes the pipeline events. /kzu
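
    The post shows only the diagrams; as a minimal sketch of where those pipeline events surface in code (the service and context names are hypothetical), you can subscribe to them in the DataService<T> constructor:

        using System.Data.Services;
        using System.Diagnostics;

        public class NorthwindService : DataService<NorthwindContext>
        {
            public NorthwindService()
            {
                // Raised around each request shown in the sequence diagrams.
                ProcessingPipeline.ProcessingRequest += (s, e) =>
                    Trace.WriteLine("ProcessingRequest: " + e.OperationContext.AbsoluteRequestUri);
                ProcessingPipeline.ProcessedChangeset += (s, e) =>
                    Trace.WriteLine("ProcessedChangeset");
            }
        }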

    Read the article

  • SOA online seminar by Griffiths Waite - adopt Fusion Applications patterns today

    - by Jürgen Kress
    Our SOA Specialized partner Griffiths Waite has developed a series of Oracle Fusion Middleware online seminars. Mark Simpson, Oracle ACE Director, gives an insight into the Oracle strategy, how Oracle is using Fusion Middleware to build Fusion Applications, and how your project can profit from the Fusion architecture, giving examples of how customers can adopt use cases for Application Integration, Composite Application Portals, Application Modernization, and Business Process Management. If you are interested, make sure you watch the online seminar and take the SOA Maturity Assessment. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: Mark Simpson,Griffiths Waite,Fusion Middleware,Fusion Applications,SOA,Oracle,SOA Community,OPN,SOA Specialization,Specialization,Jürgen Kress

    Read the article

  • Back from Istanbul - Presentations available for download

    - by Javier Puerta
    (Picture by Paul Thompson, 14-March-2012) On March 14-15th we celebrated our 2012 Exadata Partner Community EMEA Forum in Istanbul, Turkey. It was an intense two days, packed with great content and a lot of networking. Organizing it jointly with the Manageability Partner Forum allowed participants to benefit from the content of the Manageability sessions, a topic that is becoming key as we move to cloud architectures. During the sessions we listened to two thought-leaders in our industry, Ron Tolido from Capgemini and Julian Dontcheff from Accenture. We thank our Exadata partners - ISE (Germany), Inserve (Sweden), Fors (Russia), Linkplus (Turkey) and Sogeti - for sharing with the community their experiences in selling and implementing Exadata and Manageability projects. The slide decks used in the presentations are now available for download at the Exadata Partner Community Collaborative Workspace (for community members only; if you get an error message, please register for the Community first). I want to thank all who participated at the event, and look forward to meeting again at next year's Forum.

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino.

    The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK's Da Vinci Machine helps coordinate work amongst language developers and tool designers, and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8's lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components.

    There are a number of benefits to running JavaScript applications on JDK 8's Nashorn technology beyond writing scripts quickly:
    - Interoperability with Java and JavaScript libraries.
    - Scripts do not need to be compiled.
    - Fast execution and multi-threading of JavaScript running in Java's JRE.
    - The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog).
    - Automatic integration with Java monitoring tools, such as performance, health, and SIEM.

    In the remainder of this blog post, I will explain how to use Nashorn and benefit from those features.

    Nashorn execution environment
    The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written for Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways:

    - Users running JavaScript applications can call the binary directly: jre8/bin/jjs. This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs
    - Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

    When using a ScriptEngine, please understand that they execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments.

    One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present.

    Examples of Java and JavaScript
    The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1, as identified by the User Guide and its standards definition through ECMA International. In addition to the examples below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages.

    Basic Operations
    A basic Hello World application written to run on Nashorn would look like this:

        #!/usr/bin/jjs
        print("Hello World");

    The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows, where scripts are not as common, you would run the script like: jjs helloWorld.js

    Receiving Arguments
    In order to receive program arguments, your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself:

        jjs -scripting print.js -- "This will print"

        #!/usr/bin/jjs
        var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0]
        print(whatYouSaid);

    Interoperability with Java libraries (including 3rd party dependencies)
    Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads), so if you open network sockets and your script terminates, those sockets will be released and available for your next run.

    Your code can access Java types the same as regular Java classes. The "import statements" are written somewhat differently to accommodate the language. There is a choice of two styles:

        // For standard classes, just name the class:
        var ServerSocket = java.net.ServerSocket

        // For arrays or other items, use Java.type:
        var ByteArray = Java.type("byte[]")

    You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items.

    Building a user interface
    One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify:

        jjs -fx -scripting fx.js -- "My title"

        #!/usr/bin/jjs -fx
        var Button = javafx.scene.control.Button;
        var StackPane = javafx.scene.layout.StackPane;
        var Scene = javafx.scene.Scene;

        var clickCounter = 0;
        $STAGE.title = $ARG.length > 0 ? $ARG[0] : "You didn't provide a title";

        var button = new Button();
        button.text = "Say 'Hello World'";
        button.onAction = myFunctionForButtonClicking;

        var root = new StackPane();
        root.children.add(button);
        $STAGE.scene = new Scene(root, 300, 250);
        $STAGE.show();

        function myFunctionForButtonClicking() {
            var text = "Click Counter: " + clickCounter;
            button.setText(text);
            clickCounter++;
            print(text);
        }

    For a more advanced post on using Nashorn to build a high-performing UI, see the JavaFX with Nashorn Canvas example.

    Interoperable with frameworks like Node, Backbone, or Facebook React
    The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules.

    In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8's Nashorn.

    Summary
    Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems.

    Additional information is available within: the Nashorn Users' Guide, Java Magazine's article "Next Generation JavaScript Engine for the JVM", and the Nashorn team's primary blog or a very helpful collection of Nashorn links.

    Read the article

  • Adaptive Case Management OTN WebCast with Danilo Schmiedel

    - by JuergenKress
    Oracle ACE Director Danilo Schmiedel, SOA/BPM solution architect with Opitz Consulting in Germany, talks about Adaptive Case Management, Predictive Analytics, and Process Mining. Watch the video here. To download the Adaptive Case Management post mentioned in this interview, please visit the blog post. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: ACM,Adaptive Case Management,Danilo Schmiedel,Opitz,OTN,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Microsoft Entourage/Exchange Server problem: all objects disappeared from server - still in some form on the client

    - by splattne
    One of our employees works with Entourage on his MacBook Pro (OS X 10.6), accessing Exchange Server 2007. Last Friday morning, I think while he was working over a VPN, Entourage (I think it was Entourage) deleted all his objects (mail, calendar, contacts) on the server while creating a lot of strange folders (starting with underscores) on the client. The local data seems to be there, but not in a consistent form. Since the user's mailbox is rather big, I suspect that there was some kind of "move" operation which did not complete. I tried to export the data, but the export stops because of a corrupted object. Is there a tool or another way to export or retrieve the local data? Edit - FYI: we solved the problem by getting his data from the previous night's backup.

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    Been struggling with this on an architectural level. I have an object which can be commented on - let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key: each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned - both a Post and a PostComment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComment's etc... What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like to know which collection to attach the ID to? That strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, then repeat infinitely. This means I make a call for every layer; if I'm loading 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
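
    One common answer (a sketch of the standard adjacency-list pattern, not taken from the post; the schema names are invented) is to drop the PostComment/PostCommentComment split entirely: give every comment a nullable ParentCommentID pointing at another comment, and fetch a whole thread in a single recursive query instead of one call per layer.

        -- Self-referencing comments table: NULL parent = top-level comment.
        CREATE TABLE PostComment (
            ID              int IDENTITY PRIMARY KEY,
            PostID          int NOT NULL REFERENCES Post(ID),
            ParentCommentID int NULL REFERENCES PostComment(ID),
            Body            nvarchar(max) NOT NULL
        );

        -- Entire comment tree for one post in one round trip
        -- (T-SQL recursive CTE; @PostID is the post being loaded):
        WITH Thread AS (
            SELECT ID, ParentCommentID, Body, 0 AS Depth
            FROM PostComment
            WHERE PostID = @PostID AND ParentCommentID IS NULL
            UNION ALL
            SELECT c.ID, c.ParentCommentID, c.Body, t.Depth + 1
            FROM PostComment c
            JOIN Thread t ON c.ParentCommentID = t.ID
        )
        SELECT * FROM Thread ORDER BY Depth;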

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14, and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large exports and reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the application server. Administrators can also control the maximum row limits for users versus system processes, like ACS. Several out-of-the-box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID), and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2; customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1.

    Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in E-Delivery or My Oracle Support:

    - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01
    - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2): My Oracle Support Patch ID 9782736
    - AVERIFY 1.6.3: My Oracle Support Patch ID 9791892
    - AUT 1.6.3: My Oracle Support Patch ID 9791908
    - Agile PLM 9.3 SP2 documentation is available on the OTN Agile Documentation Page

    Read the article
