Search Results

Search found 51747 results on 2070 pages for 'oracle database in memory'.


  • The Road to Professional Database Development: Database Normalization

    Not only is the process of normalization valuable for increasing data quality and simplifying the process of modifying data, but it actually makes the database perform much faster. To prove the point, Peter Larsson takes a large unnormalized database and subjects it to successive stages of normalization.
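
    As a toy sketch of the idea (schema and names invented here, not taken from Larsson's article), splitting repeated customer details out of a flat orders table removes exactly the redundancy that normalization targets:

        -- Unnormalized: customer details repeated on every order row
        CREATE TABLE OrdersFlat (
            OrderID      INT PRIMARY KEY,
            CustomerName VARCHAR(48),
            CustomerCity VARCHAR(48),
            OrderDate    DATETIME
        );

        -- Normalized: each customer stored once, orders reference it;
        -- the narrower Orders table packs more rows per page and scans faster
        CREATE TABLE Customers (
            CustomerID   INT PRIMARY KEY,
            CustomerName VARCHAR(48),
            CustomerCity VARCHAR(48)
        );

        CREATE TABLE Orders (
            OrderID    INT PRIMARY KEY,
            CustomerID INT NOT NULL REFERENCES Customers(CustomerID),
            OrderDate  DATETIME
        );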

    Read the article

  • Friday Tips #33

    - by Chris Kawalek
    Happy Friday, everyone! Our tip this week is from an excellent white paper written by our own Greg King titled Oracle VM 3: Building a Demo Environment using Oracle VM VirtualBox. In it, Greg gives you everything you need to know to set up Oracle VM Server inside of Oracle VM VirtualBox for testing and demoing. The section we're highlighting below is on how to configure the network interfaces of your virtual machines:
    VirtualBox comes with a few different types of network interfaces that can be used to allow communication between the VM guests and the host operating system, including network interfaces that will allow the VM guests to communicate with local and wide area networks accessed from your laptop or personal computer. However, for the purpose of the demonstration environment we will limit the network communication to include access just between your desktop and the virtual machines being managed by VirtualBox. The install process for Oracle VM VirtualBox creates a single host-only network device on your laptop or personal computer. Using the host-only network device will allow you to open a browser on your desktop to access the Oracle VM Manager running within the VirtualBox VM guest. The device will only allow network traffic between the VM guests and your host operating system, but nothing outside the confines of your laptop or personal computer. We will need to add a second host-only network since the Oracle VM Server appliance has both eth0 and eth1 configured. You can choose to use eth1 on the Oracle VM Servers or not use it – the choice is yours. But at least the host-side network device will exist if you decide to use it.
    Greg goes on to describe in detail how to set up the network interfaces, so you can head on over to the paper and get even more info. See you next week! -Chris
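
    For readers who prefer the command line to the GUI steps in Greg's paper, here is a rough sketch of adding that second host-only interface with VBoxManage (the interface name vboxnet1 and the VM name "ovm-server" are placeholders, not from the paper):

        # Create a second host-only interface (shows up as vboxnet1 on Linux hosts)
        VBoxManage hostonlyif create

        # Put it on its own subnet, separate from the default vboxnet0
        VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0

        # Attach the guest's second NIC (eth1) to the new host-only network
        VBoxManage modifyvm "ovm-server" --nic2 hostonly --hostonlyadapter2 vboxnet1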

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Update: Commercially Supported GlassFish Versions
    Aquarium blogger David Delabassee shares background information and links to where you can download the recently released GlassFish Server Bundle Patch 3.1.2.8. Read the article.
    Announcing WebLogic on Oracle Database Appliance 2.7
    Oracle WebLogic Server on Oracle Database Appliance 2.7 offers a complete solution for building and deploying enterprise Java EE applications in a fully integrated system of software, servers, storage, and networking that delivers highly available database and WebLogic services. Learn more.
    APAC Partner iDay: What's New in Oracle WebLogic, 8-Apr 12 noon SG/2pm AEDT/9:30 IST - Invite your Partners - Register
    Virtual Developer Conference: Creating a Foundation for Cloud Applications using Oracle WebLogic and Oracle Coherence - OnDemand
    Webcast: WebLogic Configuration using Chef and Puppet - On-Demand
    Podcast Series: Part 3 - Oracle WebLogic Server and Oracle Database Integration - Podcast
    Coherence*Web: Sharing an HttpSession Among Applications in Different Oracle WebLogic Clusters
    SOA solution architect Jordi Villena shows how easy it is to extend Coherence*Web to enable session sharing. Read the article.
    Multi-Factor Authentication in Oracle WebLogic
    Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. Read the article.
    Video: Coherence Community on Java.net - 4 Projects available under CDDL-1.0
    Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how you can actively participate in open source Coherence Community projects. Watch the video.
    Working with Oracle Security Token Service in an Architecture Involving Oracle WebLogic Server and Oracle Service Bus
    Oracle Fusion Middleware specialist Ronaldo Fernandes takes you step by step through the process of creating a single sign-on between Oracle WebLogic and Oracle Service Bus using Oracle Security Token Service (OSTS) to generate SAML tokens. Read the article.
    WebLogic Partner Community
    For regular updates, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).
    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
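
    As a hedged sketch of the overall flow (tablespace name, directories, paths and the platform string are invented for illustration; consult the certification documentation for the EBS-specific steps):

        -- 1. On the source, make the tablespace read only
        ALTER TABLESPACE ts_data READ ONLY;

        -- 2. Export its metadata with Data Pump (run from the OS shell):
        --      expdp system DIRECTORY=dp_dir DUMPFILE=ts_data.dmp TRANSPORT_TABLESPACES=ts_data

        -- 3. Only if endianness differs, convert the datafiles with RMAN, e.g.:
        --      RMAN> CONVERT TABLESPACE ts_data TO PLATFORM 'Linux IA (32-bit)' FORMAT '/stage/%U';

        -- 4. Copy the datafiles and dump file to the target, then plug them in:
        --      impdp system DIRECTORY=dp_dir DUMPFILE=ts_data.dmp TRANSPORT_DATAFILES='/u01/oradata/ts_data01.dbf'

        -- 5. On the target, return the tablespace to read write
        ALTER TABLESPACE ts_data READ WRITE;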

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:
    Apache 2.2.3-31 on CentOS 5.4
    PHP 5.2.10
    Xdebug 2.0.5 with Remote Debugging enabled
    APC 3.0.19
    Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC
    MySQL 5.0.77 using Query Caching
    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until it approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:
    PID  USER   PR NI VIRT  RES  SHR S %CPU %MEM TIME+   COMMAND
    1471 apache 16 0  626m  201m 18m S 0.0  10.2 1:11.02 httpd
    1470 apache 16 0  622m  198m 18m S 0.0  10.1 1:14.49 httpd
    1469 apache 16 0  619m  197m 18m S 0.0  10.0 1:11.98 httpd
    1462 apache 18 0  622m  197m 18m S 0.0  10.0 1:11.27 httpd
    1460 apache 15 0  622m  195m 18m S 0.0  10.0 1:12.73 httpd
    1459 apache 16 0  618m  191m 18m S 0.0  9.7  1:13.00 httpd
    1461 apache 18 0  616m  190m 18m S 0.0  9.7  1:14.09 httpd
    1468 apache 18 0  613m  190m 18m S 0.0  9.7  1:12.67 httpd
    7919 apache 18 0  116m  75m  15m S 0.0  3.8  0:19.86 httpd
    9486 apache 16 0  97.7m 56m  14m S 0.0  2.9  0:13.51 httpd
    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
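
    One low-tech way to narrow this down (a sketch, not from the original post; memory_get_peak_usage() and register_shutdown_function() are standard PHP and work on 5.2) is to log each request's peak memory so you can see which URLs correlate with the growth:

        <?php
        // Load via auto_prepend_file or include from the front controller.
        function log_peak_memory() {
            $peak = memory_get_peak_usage(true); // bytes allocated from the system
            $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
            error_log(sprintf('pid=%d peak=%.1fMB uri=%s',
                              getmypid(), $peak / 1048576, $uri));
        }
        register_shutdown_function('log_peak_memory');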

    Read the article

  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, a bunch of services are started in the background that begin allocating memory. Then I enter my user/pass and Windows logs me in. Let's suppose I don't do anything else (I don't explicitly start any application) and I don't have any other apps installed by me. So it's a fresh install of my machine. My question is: how much memory is needed for all the UI & other stuff? Is it a good indicator to look in Task Manager, check all the processes started under my user name, and sum up the memory consumed by those processes to get the total amount of memory I am consuming just to stay logged on? Basically this is my question: how much memory is needed just to stay logged on? Now, if I log off, would all that memory be released back to the system so that the background services can benefit from it? Also, I assume that there might be a different discussion for each Windows flavor (?)
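
    One rough way to answer this yourself (a sketch; the user name is a placeholder, and shared DLL pages mean any per-process sum is only an approximation) is to total the working sets of your own processes from PowerShell:

        # Sum the working sets of processes owned by a given user
        $owned = Get-WmiObject Win32_Process |
                 Where-Object { $_.GetOwner().User -eq 'Adi' }   # placeholder name
        $ids   = $owned | ForEach-Object { $_.ProcessId }
        $sum   = (Get-Process | Where-Object { $ids -contains $_.Id } |
                  Measure-Object WorkingSet -Sum).Sum
        '{0:N0} MB' -f ($sum / 1MB)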

    Read the article

  • Ubuntu: Memory Leak

    - by Keener
    I'm having trouble finding where this memory leak is occurring. I'm running Ubuntu 8.04 LTS on a Dell XPS M1530. I have 3GB of RAM, and I'm finding that after about an hour or so of use, top shows me 2GB+ used. The strange thing is that when I add up the memory percentages by PID, either from top or from ps aux, I find I should only be using about 20-25% of my available RAM. What brought this to my attention is that I've begun running VMware Server again. Now, obviously the RAM usage spikes when I load a virtual machine, but the memory VMware is using does not account for the memory usage I'm seeing via top or free. Stopping VMware Server releases the memory which was allocated to it, but I'm still unable to find where the rest of this RAM is being used. After a complete reboot, of course, the memory is fine, but very quickly it climbs to 60-80% usage with the processes only appearing to account for a third of that. Any ideas where I should look for more information on what this could be?
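
    A few standard checks (not from the original post) usually explain "missing" RAM on Linux; most often it is just the page cache, which the kernel gives back under memory pressure:

        # The "-/+ buffers/cache" line shows what applications actually hold
        free -m

        # Largest resident processes, sorted by RSS
        ps aux --sort=-rss | head -n 15

        # Kernel-side consumers: slab, buffers, caches
        cat /proc/meminfo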

    Read the article

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory Private Bytes) doesn't add up. I do have Show Processes from all users checked, and adding the numbers by hand I come up with about 3.5GB of RAM. I also looked at the latest copy of SysInternals Process Explorer, and neither the Private Bytes nor the Working Set adds up to more than about 3.5GB of RAM in use. What's going on?
    ===== Update: I bounced the server to see what would happen with the memory utilization. After boot and regular operations began, it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB of usage with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screen shots of the resource monitor and task manager: Resource Monitor Task Manager
    Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically, while the Memory Private Bytes count barely moved. So I'm guessing that SQL Server has some way of allocating memory where it doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)
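
    For anyone chasing the same symptom: SQL Server's buffer pool (especially with locked pages in memory) can sit outside the Private Bytes counters that Task Manager sums. A hedged way to see it from inside the engine, assuming SQL Server 2008 or later:

        -- What the SQL Server process really holds; locked pages are the part
        -- the per-process counters in Task Manager typically miss
        SELECT physical_memory_in_use_kb / 1024  AS physical_mb,
               locked_page_allocations_kb / 1024 AS locked_pages_mb,
               memory_utilization_percentage
        FROM   sys.dm_os_process_memory;

        -- Capping the buffer pool keeps headroom for the OS (4096 MB is an example)
        EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;  RECONFIGURE;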

    Read the article

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    Hi, I have a Windows Server 2008 system with 12GB of RAM. If I list all processes in Task Manager and SUM() the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5GB that should be "in use". However, Task Manager reports this server has 11GB in use via the "Performance" tab. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the applications, but I'm struggling to find out which one. The server's memory keeps filling up and eventually forces us to reboot the device to clear it. I've been reading up on how RAM assignments work on Windows Server:
    RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
    What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
    Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html
    But I fear I'm stuck without ideas at the moment.
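
    If the process list does not add up, the usual suspects are the file system cache and the kernel pools; with PowerShell 2.0 these can be sampled directly (the counter names below are the standard Memory object counters):

        Get-Counter -Counter '\Memory\Available MBytes',
                             '\Memory\System Cache Resident Bytes',
                             '\Memory\Pool Nonpaged Bytes',
                             '\Memory\Pool Paged Bytes'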

    Read the article

  • Best Practices around Oracle VM with RAC: RAC SIG webcast - Thursday, March 18th -

    - by adam.hawley
    The RAC SIG will be hosting an interesting webcast this Thursday, March 18th, at 9am Pacific time (5pm GMT) on Best Practices around Oracle VM with RAC. The adoption of virtualization technology has risen tremendously and continues to grow due to the rapid change and growth rate of IT infrastructures. With this in mind, this seminar focuses on configuration best practices, examining how Oracle RAC scales & performs in a virtualized environment, and evaluating Oracle VM Server's ease of use. Roger Lopez from Dell IT will be presenting.
    This Week's Webcast Connection Info:
    ====================================
    Webcast URL (use Internet Explorer): https://ouweb.webex.com/ouweb/k2/j.php?ED=134103137&UID=1106345812&RT=MiM0
    Voice can be heard either via the web conference or via the following dial-in:
    Participant Dial-In: 877-671-0452
    International Dial-In: 706-634-9644
    International Dial-In No Link: http://www.intercall.com/national/oracleuniversity/gdnam.html
    Intercall Password: 86336

    Read the article

  • ODI 11g - Oracle Data Integrator 11g – A Hands-On Tutorial

    - by David Allan
    I have been asked by Packt Publishing to review a brand new book on Oracle Data Integrator: Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial. I'm waiting on this book to arrive to see what goodies are inside, and I'll blog a review later. The book can be found at Oracle Data Integrator 11g – A Hands-On Tutorial. Looking at the table of contents, it looks like it gives a good, broad introduction (including various data formats) to the product:
    Chapter 1: Product Overview
    Chapter 2: Product Installation
    Chapter 3: Using Variables
    Chapter 4: ODI Sources, Targets, and Knowledge Modules
    Chapter 5: Working with Databases
    Chapter 6: Working with MySQL
    Chapter 7: Working with Microsoft SQL Server
    Chapter 8: Integrating File Data
    Chapter 9: Working with XML Files
    Chapter 10: Creating Workflows—Packages and Load Plans
    Chapter 11: Error Management
    Chapter 12: Managing and Monitoring ODI Components
    Chapter 13: Concluding Remarks
    Looking forward to it.

    Read the article

  • Is Oracle WebCenter 11g a big SharePoint?

    - by Guilherme J Santos
    Hi there. I have some experience with SharePoint. Now my company will use Oracle WebCenter to create an internal portal. Nobody on the IT team has experience with Oracle WebCenter, and I think we could have kept using SharePoint, as we did until now. So, what are the advantages of using Oracle WebCenter? What are your experiences with Oracle WebCenter 11g, and how different is it from SharePoint? What can I do with Oracle WebCenter that I cannot do with Microsoft SharePoint?

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedures or other plan caching mechanisms like sp_executesql or prepared statements

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused across different sets of parameter values / predicates, so a different amount of memory can be estimated for each set. Common memory-allocating queries are those that perform a Sort or a Hash Match operation such as Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples; it is recommended to read Plan Caching and Query Memory Part II, which covers Hash Match operations, after this article.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute it. This can lead to underestimation or overestimation of memory on plan reuse. Overestimation might not be a noticeable issue for Sort operations, but underestimation will lead to spills over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort; Plan Caching and Query Memory Part II covers the same for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote, click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over tempdb, unless the memory requirement for a stored procedure does not change significantly based on its predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory; let's see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 48 ms to complete. It was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimate is fine. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 679 ms to complete. It was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way off from the actual number of rows, 259200, because the estimate is based on the first set of parameter values supplied to the stored procedure, which was 1 month in our case. This underestimation leads to the sort spilling over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to the locking issues involved with sp_recompile. Refer to our webcasts for further details.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    Now the stored procedure took only 294 ms instead of 679 ms. It was granted 26832 KB of memory, and the estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to the better memory estimate, which avoids the sort spilling over tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 49 ms to complete, similar to our very first execution. This time the stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month estimate (259200 rows) instead of the 1 month estimate (43199.9 rows), because the estimate is based on the first set of parameter values supplied, which was 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, so overestimation is not recommended. Overestimation can also hurt Hash Match operations; refer to Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then execute it first with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

    The stored procedure took 1 ms. It was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. It was granted 1024 KB based on 1440 rows being estimated, but we saw earlier that this stored procedure with a 6 month date range needs 26832 KB of memory to execute optimally without spilling over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler, and unlike before this was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is far too low.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. The other possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution has a good estimate as before, and the 6 month execution also has a good estimate and memory grant because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms are not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
        as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The 1 month execution overestimates rows and memory because we hinted the optimizer to optimize for 6 months of data; the 6 month execution has a good estimate and memory grant for the same reason.

    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range, while the actual number of rows was 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range but not the 12 month date range, so the sort spilled over tempdb. There were Sort Warnings in SQL Profiler.

    As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    Summary: A cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that leads to compilation overhead; however, in most cases it is fine to pay for compilation rather than spilling the sort over tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than hinted, this will still lead to spills; on the other side, there is also the possibility of overestimation creating unnecessary memory pressure for other concurrently executing queries, and with Hash Match operations this overestimation might itself lead to poor performance. When the values used in the optimize for hint have been archived from the database, the estimate will be wrong, leading to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Poof and it’s gone - Internship @ Oracle Netherlands

    - by Tim Koekkoek
    We still remember the first day we walked in the office in September. The moment we walked into the big entrance hall and saw all those unfamiliar faces, we had no idea that we all had such diverse personalities, and still we all had a great time together. At the end of our internship we could all say we felt comfortable working at the office, playing “some” table tennis. Besides, it has been a great learning experience and we look back on a good time.  We made our own video and it shows you what some of us have been working on during our internship @ Oracle in the Netherlands.  If you are also interested in Oracle and what we have to offer, you can join our Live Google+ Hangout every Friday at 3 p.m. or visit http://campus.oracle.com.

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made, such as one replacing the other, etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.
    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:
    1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2) use a network filesystem (NFS)
    3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if they were on a local filesystem.
    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant, and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device: this ensured that any time one issued a write() command it would go directly down to the disk and not return until the write() was completed. Same for read(): any sort of read from a datafile would be a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.
    So while OCFS worked well for that, it did not have much of a normal filesystem feel, and it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code...). It did its job well: it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database datafile related was just not very useful in general. Logfiles, ok; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) had been out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:
    Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    Make it a Linux native filesystem instead of a native shim layer and a portable core
    Support standard POSIX compliance and be fully cache coherent with all operations
    Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    Be modern: support large files, 32/64-bit, journaling, data ordered journaling, endian neutral, so we can mount across endianness / architectures, ...
    Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way...
    OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone who wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or... shall I say flamefest, when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel and didn't follow an open development model had a much harder time and faced much harsher criticism.
    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel. It was accepted a little earlier in the release candidates, but with 2.6.16 OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems, and it was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code.
    We made OCFS2 production-ready with full support for customers that ran Oracle database on Linux; no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; for RHEL5, builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16, the source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality.
    OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system.
    Here is a list of some of the important features that are included:
    Variable Block and Cluster sizes - Supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
    Extent-based Allocations - Tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
    Optimized Allocations - Supports sparse files, inline data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    File Cloning/snapshots - REFLINK is a feature which introduces copy-on-write clones of files in a cluster coherent way.
    Indexed Directories - Allows efficient access to millions of objects in a directory.
    Metadata Checksums - Detects silent corruption in inodes and directories.
    Extended Attributes - Supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
    Advanced Security - Supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    Quotas - Supports user and group quotas.
    Journaling - Supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
    Endian and Architecture neutral - Supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
    In-built Cluster-stack with DLM - Includes an easy to configure, in-kernel cluster stack with a distributed lock manager.
    Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os - Supports all modes of I/O for maximum flexibility and performance.
    Comprehensive Tools Support - Provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.
    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
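
    As an aside, to make the "simple text file" configuration described above concrete, here is a hedged two-node sketch (node names, IPs, device and mount point are placeholders, not from the article):

        # /etc/ocfs2/cluster.conf
        cluster:
                node_count = 2
                name = demo

        node:
                ip_port = 7777
                ip_address = 192.168.1.101
                number = 0
                name = node1
                cluster = demo

        node:
                ip_port = 7777
                ip_address = 192.168.1.102
                number = 1
                name = node2
                cluster = demo

        # Then, on each node:
        #   service o2cb configure                  # bring up the cluster stack
        #   mkfs.ocfs2 -L demo-vol -N 4 /dev/sdb1   # format once, from one node only
        #   mount -t ocfs2 /dev/sdb1 /srv/shared    # mount everywhere
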
    UEK also meant we no longer had to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped adding the extra packages to Oracle Linux and its RHEL-compatible kernel, and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.
    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bugfixes and any sort of code that goes into the mainline Linux kernel that affects filesystems automatically also modifies OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really just is part of Linux, like EXT3 or BTRFS etc.; the OS distribution vendors decide.
    Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2 independently, as a native Linux filesystem, is also, and continues to be, supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2.
    To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.
    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, which makes for a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
You are always welcome to email me and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • Gartner PCC Summit, Baltimore - Oracle's Take

    - by [email protected]
    Back from last week's trip to the Gartner PCC Summit in Baltimore, Andy MacMillan and Ajay Gandhi share their impressions of the conference. According to Andy and Ajay:
    Interest in the sector is increasing - attendance at this year's conference was up by more than 50 percent.
    The discussion at the conference this year shifted from a focus on what the tools are to how the tools can transform organizations and help build businesses.
    Conference attendees were interested in taking a platform approach and looking to bring multiple tools together to solve problems and simplify business processes.
    If you are interested in learning more about the Bureau of Indian Affairs' deployment showcased in Ajay's session at the Gartner PCC Summit, come back soon - a detailed post is on its way.

    Read the article

  • Oracle + Sun Product Strategy Webcast Series

    - by Paulo Folgado
    The Oracle + Sun Product Strategy Webcast series is composed of informative, on-demand sessions that offer strategies for Sun's major product lines related to the company combination, explain how Oracle will deliver more innovation to our customers, and outline our approach to protecting customers' investments. Ranging from 5 to 27 minutes each, the Webcasts cover the strategies for hardware, systems, software, solutions, and partners.
    In addition, Judson Althoff, SVP, Worldwide Alliances and Channels, Oracle, followed up the Webcast series with a video FAQ to help answer the following top partner questions about the Oracle + Sun combination and the OPN Specialized program:
    What is the impact the overall combined company will have on the partners?
    What are Oracle's plans for selling direct and what is the impact to partners?
    How will Sun partners integrate into OPN Specialized?
    As a Sun partner, am I automatically migrated into OPN Specialized?
    Will Oracle continue to partner with other hardware vendors?
    How will Oracle map existing Sun investments and certifications into OPN Specialized?
    As a Sun partner new to Oracle, where should I be placing my focus?
    What can partners expect to see relative to Exadata V2?
    How do content delivery platforms (CDPs) fit into the Oracle framework?
    How do existing Sun Partners place orders?

    Read the article

  • Oracle OpenWorld Healthcare Integration Session Highlights Challenges & Solutions

    - by Nitesh Jain
    In today's session, co-presented by Steve Schenks, Integration Architect from Ascension Health, and Oracle's Sundar Shenbagam and Suresh Sharma (apparently your initials must be SS to present during this session), interesting insights were shared in many different areas, including Steve's description of the challenges with their previous environment: disparate hardware and software, an issue common across healthcare and most other industries. Larry Ellison spoke on this topic during Sunday's keynote address. In the last part of the session, Suresh is planning to go over some of the best practices and lessons learned in implementing successful healthcare applications, and will discuss the different options for modeling Sequencing (FIFO) use cases, one of the most common use cases in the provider market. The session was "Implementing Successful Healthcare Applications with Oracle SOA Suite" – Session # CON8546. For more information about this session, please contact Senior Principal Product Manager Suresh Sharma. Ref: https://blogs.oracle.com/SOA/entry/oracle_openworld_healthcare_integration_session

    Read the article

  • The Oracle Retail Week Awards - Store Manager of the year

    - by user801960
    Below is a video featuring interviews with the nominees for the Oracle Retail Week Awards 2012 Store Manager of the Year Award, in which the nominees talk about the value of being nominated for an Oracle Retail Week Award and what it means to them to be recognised. The video includes interviews with ASDA CEO Andy Clarke, who talks about how important store managers are to the functioning of a retail business. The nominees interviewed were:
    Ian Allcock from Homebase in Aylesford
    David Bickell from Argos in Milton Keynes
    Karl Lynsdale from Co-operative Food in Heathfield, Sussex
    Paul Norcross from B&Q in Bristol
    Darren Parfitt from Boots in Melton Mowbray
    Helen Smith from H Samuel in Manchester
    Oracle Retail would like to congratulate the winner, Ian Allcock from Homebase in Aylesford. Well done, Ian!

    Read the article

  • Analyst Firm Gives Oracle Highest Rating for Local Government CRM

    - by michael.seback
    Gartner, Inc. has given Oracle a rating of "Strong Positive," the highest possible ranking, in its report "MarketScope for Local Government CRM Products." The report compares the offerings of nine providers of CRM commercial off-the-shelf software for local government agencies. Gartner notes that a provider receiving a Strong Positive ranking must be a "provider of strategic products, services or solutions..." and recommends that "customers continue with planned investments and potential customers consider this vendor a strong choice for strategic investments." "Local governments today face tough challenges as they are tasked with reducing costs while at the same time providing citizens with services and information more quickly and efficiently than ever before. Oracle is pleased to be recognized by Gartner with a Strong Positive rating in its 'MarketScope for Local Government CRM Products' report, as we believe it reflects our commitment to helping our public sector customers meet these challenges today and in the future," said Mark Johnson, senior vice president, Oracle Public Sector. Read the highlights.

    Read the article

  • Oracle Partner Store: New Registration Process for Partner Deals

    - by A&C Redaktion
    You were recently informed by email about the changes to the registration of partner deals and opportunities. The Oracle Partner Store (OPS) is intended not only to simplify the process and make it more transparent, but also to become an all-encompassing tool for you: from registering a deal (with simultaneous project protection) through daily status display to the final order, you will be able to manage everything in OPS from November 17, 2012. Simply take one of the German-language OPS trainings from Oracle University:
    November 7, 2012, 10:00 CET - Tel.: 069/22 22 16 106, Conference Code: 4244390#, Security Code: 008800#, web conference link, password: Partner1
    November 14, 2012, 11:00 CET - Tel.: 069/22 22 16 106, Conference Code: 4244390#, Security Code: 008800#, web conference link, password: Partner1
    November 20, 2012, 10:00 CET - recorded training from Oracle University, Request ID: 10952 (link to follow)
    You can find further dates and languages here.

    Read the article

  • Oracle WebCenter: Social Networking & Collaboration

    - by kellsey.ruppel(at)oracle.com
    We've talked in previous weeks about how the key goals of the new release of WebCenter are to provide a Modern User Experience and unparalleled Application Integration, to converge all the best of the existing portal platforms into WebCenter, and to deliver a Common User Experience Architecture. We've provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we'll focus on how the new release of Oracle WebCenter provides unprecedented social networking and collaboration.
    We recently talked with Carin Chan, Principal Product Manager at Oracle, on the topic of social networking and collaboration. In today's work environment, employees have come to expect social and collaborative services to augment their work environment. Whether it is to post a blog or to poll fellow coworkers, employees expect and demand access to highly integrated, collaborative work environments that allow them to quickly contribute at work, whether to make informed decisions, contribute on projects, or share knowledge.
    Social and collaborative services from Oracle WebCenter add an immeasurable amount of value to achieving a modern user experience. Oracle WebCenter Services provides rich and comprehensive social computing services, such as wikis, blogs, instant messaging, presence, activity streams and graphs, and polls/surveys, that give employees access to rich collaborative services to work efficiently. Employees can create pages or spaces that mix and match collaborative services while bringing in data from other applications to share with groups, teams, or organizations. These out-of-the-box social and collaborative services include:
    People Connections and Activity Streams enable users to quickly assemble and visualize their social business networks and track user activities.
    Activity Graphs track all user activities in real time and gather intelligence about these users, their connections, and the way they use information, to make educated recommendations and provide on-the-spot information discovery.
    Wikis and blogs enable the community authoring of documents and sharing of ideas, and also allow for the gathering of feedback and comments on those ideas.
    Tags and links allow users to easily mark, connect, and share information with others.
    RSS feeds are available to track new or changed information related to discussion forums, processes, or activities in an Oracle WebCenter environment.
    Discussion forums enable sharing of group knowledge and easy creation of communities around specific topics.
    Announcements allow you to manage and publish important news to your user base.
    Instant Messaging and Presence enable real-time awareness of and communication with available users in the context of a business task.
    Web and Voice Conferencing enables real-time communication with internal and external business users.
    Lists provide a way to manage list data directly on the web as well as export and import it from and to Microsoft Excel.
    Oracle WebCenter Analytics provides comprehensive reporting metrics on activity and content usage within portals or composite applications.
    Activity Streams allow you to track activities and visualize your business networks.
    While being able to integrate into your portal deployment, these services also integrate into how users already work, including integration with software such as Microsoft Outlook and Microsoft Office and with mobile devices such as the Apple iPhone.
These services are just the tip of the iceberg when it comes to the social and collaborative services that Oracle WebCenter has to offer your employees. Be sure to keep checking back this week: in future posts, we'll delve deeper into a few of these collaborative services and discuss how a combination of them offers a better portal deployment to empower business users. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, social, activity streams, blogs, wikis

    Read the article

  • Free Virtual Developer Day - Oracle Fusion Development

    - by Grant Ronald
    You know, there is no reason for you or your developers not to be top-notch Fusion developers. This is my third blog in a row telling you about some sort of free training; in this case it's a whole online conference! In this online conference you can learn about the various components that make up the Oracle Fusion Middleware development platform, including Oracle ADF, Oracle WebCenter, Business Intelligence, BPM and more! The online conference will include seminars, a hands-on lab and live chats with our technical staff, including me!! And the best bit: it doesn't cost you a single penny/cent. It's free and available right on your desktop. You have to register to attend, so click here to confirm your place.

    Read the article
