Search Results

Search found 1631 results on 66 pages for 'optimize'.

Page 15/66 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem: The SAN data volume has a throughput capacity of 400MB/sec; however, my query is still running slow and is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem, and how can I find it? Solution: This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains, with working examples, how PREFETCH determines the performance of a Nested Loop join. First of all, recall that Prefetch is a mechanism with which SQL Server can fire many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table: if the number of rows in the outer table is greater than 25, SQL Server enables Prefetch to speed up query performance; if it is 25 rows or fewer, it does not. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples use only two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below, and the Orders table contains about 6 million rows.

    In this first example, I create a stored procedure against the two tables and then execute it. Before running the stored procedure, I include the actual execution plan.

        --Example provided by www.sqlworkshops.com
        --Create procedure that pulls orders based on City
        --Do not forget to include the actual execution plan
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
            DECLARE @OrderID INT, @OrderDetails CHAR(200)
            SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
            FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
            WHERE City = @City
        END
        GO
        SET STATISTICS time ON
        GO
        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter SmallCity1
        EXEC RegionalOrdersProc 'SmallCity1'
        GO

    After running the stored procedure, if we right-click the Clustered Index Scan and click Properties, we can see the Estimated Number of Rows is 24. If we right-click Nested Loops and click Properties, we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25. Now, if I run the same procedure with parameter ‘BigCity’, will Prefetch be enabled?

        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter BigCity
        --We are using the cached plan
        EXEC RegionalOrdersProc 'BigCity'
        GO

    As we can see from the screenshot below, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’, which had Prefetch disabled. Please note that even though we have 999 rows for ‘BigCity’, the Estimated Number of Rows is still 24. Finally, let's clear the procedure cache to trigger a new optimization and execute the procedure again.

        DBCC FREEPROCCACHE
        GO
        EXEC RegionalOrdersProc 'BigCity'
        GO

    This time our procedure runs in under a second, Prefetch is enabled, and the Estimated Number of Rows is 999. The RegionalOrdersProc can be fixed by using the example below, where we use an optimizer hint. I have also shown some other hints that could be used as well.

        --Example provided by www.sqlworkshops.com
        --You can fix the issue by using any of the following hints
        --Create procedure that pulls orders based on City
        DROP PROC RegionalOrdersProc
        GO
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
            DECLARE @OrderID INT, @OrderDetails CHAR(200)
            SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
            FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
            WHERE City = @City
            --Hinting optimizer to use SmallCity2 for estimation
            OPTION (OPTIMIZE FOR (@City = 'SmallCity2'))
            --Hinting optimizer to estimate for the current parameters
            --OPTION (RECOMPILE)
            --Hinting optimizer not to use the histogram but rather
            --density for estimation (average of all 3 cities)
            --OPTION (OPTIMIZE FOR (@City UNKNOWN))
            --OPTION (OPTIMIZE FOR UNKNOWN)
        END
        GO

    In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger it.
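    One way to check whether a cached plan actually compiled with a prefetching Nested Loop join is to search the plan XML for the WithUnorderedPrefetch attribute. A minimal sketch, not from the original article (it assumes SQL Server 2005 or later and VIEW SERVER STATE permission):

        --Sketch: list cached plans whose Nested Loop joins use prefetching.
        SELECT st.text, cp.usecounts, qp.query_plan
        FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
        WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%WithUnorderedPrefetch%';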

    Read the article

  • git pull fails "unable to resolve reference" "unable to update local ref"

    - by Gabrielle
    When I do a git pull I get this error:

        error: unable to resolve reference refs/remotes/origin/LT558-optimize-sql: No such file or directory
        From git+ssh://remoteserver/~/misk5
        ! [new branch] LT558-optimize-sql -> origin/LT558-optimize-sql (unable to update local ref)
        error: unable to resolve reference refs/remotes/origin/split-css: No such file or directory
        ! [new branch] split-css -> origin/split-css (unable to update local ref)

    I've tried git remote prune origin, but it didn't help. Thanks in advance for any ideas.
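    A hedged recovery sketch (not from the original question; back up .git before trying anything): this error is often caused by stale or corrupt remote-tracking refs under .git/refs/remotes/origin, so repacking or deleting them and re-fetching usually lets git recreate them cleanly:

        # Repack refs and drop unreachable objects; this often clears broken refs.
        git gc --prune=now
        git pull

        # If that fails: delete the remote-tracking refs and re-fetch.
        rm -r .git/refs/remotes/origin
        git fetch origin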

    Read the article

  • C# 'is' type check on struct - odd .NET 4.0 x86 optimization behavior

    - by Jacob Stanley
    Since upgrading to VS2010 I'm getting some very strange behavior with the 'is' keyword. The program below (test.cs) outputs True when compiled in debug mode (for x86) and False when compiled with optimizations on (for x86). Compiling any combination for x64 or AnyCPU gives the expected result, True. All combinations compiled under .NET 3.5 give the expected result, True. I'm using the batch file below (runtest.bat) to compile and test the code using various combinations of compiler and .NET Framework. Has anyone else seen this kind of problem under .NET 4.0? Does everyone else see the same behavior on their computer when running runtest.bat? Is there a fix for this?

    test.cs:

        using System;

        public class Program
        {
            public static bool IsGuid(object item)
            {
                return item is Guid;
            }

            public static void Main()
            {
                Console.Write(IsGuid(Guid.NewGuid()));
            }
        }

    runtest.bat:

        @echo off
        rem Usage:
        rem   runtest         -- runs with csc.exe x86 .NET 4.0
        rem   runtest 64      -- runs with csc.exe x64 .NET 4.0
        rem   runtest v3.5    -- runs with csc.exe x86 .NET 3.5
        rem   runtest v3.5 64 -- runs with csc.exe x64 .NET 3.5
        set version=v4.0.30319
        set platform=Framework
        for %%a in (%*) do (
            if "%%a" == "64" (set platform=Framework64)
            if "%%a" == "v3.5" (set version=v3.5)
        )
        echo Compiler: %platform%\%version%\csc.exe
        set csc="C:\Windows\Microsoft.NET\%platform%\%version%\csc.exe"
        set make=%csc% /nologo /nowarn:1607 test.cs
        rem CS1607: Referenced assembly targets a different processor
        rem This happens if you compile for x64 using csc32, or x86 using csc64
        %make% /platform:x86
        test.exe
        echo =^> x86
        %make% /platform:x86 /optimize
        test.exe
        echo =^> x86 (Optimized)
        %make% /platform:x86 /debug
        test.exe
        echo =^> x86 (Debug)
        %make% /platform:x86 /debug /optimize
        test.exe
        echo =^> x86 (Debug + Optimized)
        %make% /platform:x64
        test.exe
        echo =^> x64
        %make% /platform:x64 /optimize
        test.exe
        echo =^> x64 (Optimized)
        %make% /platform:x64 /debug
        test.exe
        echo =^> x64 (Debug)
        %make% /platform:x64 /debug /optimize
        test.exe
        echo =^> x64 (Debug + Optimized)
        %make% /platform:AnyCPU
        test.exe
        echo =^> AnyCPU
        %make% /platform:AnyCPU /optimize
        test.exe
        echo =^> AnyCPU (Optimized)
        %make% /platform:AnyCPU /debug
        test.exe
        echo =^> AnyCPU (Debug)
        %make% /platform:AnyCPU /debug /optimize
        test.exe
        echo =^> AnyCPU (Debug + Optimized)

    Test Results: when running runtest.bat I get the following results on my Win7 x64 install.
        > runtest 32 v4.0
        Compiler: Framework\v4.0.30319\csc.exe
        False => x86
        False => x86 (Optimized)
        True => x86 (Debug)
        False => x86 (Debug + Optimized)
        True => x64
        True => x64 (Optimized)
        True => x64 (Debug)
        True => x64 (Debug + Optimized)
        True => AnyCPU
        True => AnyCPU (Optimized)
        True => AnyCPU (Debug)
        True => AnyCPU (Debug + Optimized)

        > runtest 64 v4.0
        Compiler: Framework64\v4.0.30319\csc.exe
        False => x86
        False => x86 (Optimized)
        True => x86 (Debug)
        False => x86 (Debug + Optimized)
        True => x64
        True => x64 (Optimized)
        True => x64 (Debug)
        True => x64 (Debug + Optimized)
        True => AnyCPU
        True => AnyCPU (Optimized)
        True => AnyCPU (Debug)
        True => AnyCPU (Debug + Optimized)

        > runtest 32 v3.5
        Compiler: Framework\v3.5\csc.exe
        True => x86
        True => x86 (Optimized)
        True => x86 (Debug)
        True => x86 (Debug + Optimized)
        True => x64
        True => x64 (Optimized)
        True => x64 (Debug)
        True => x64 (Debug + Optimized)
        True => AnyCPU
        True => AnyCPU (Optimized)
        True => AnyCPU (Debug)
        True => AnyCPU (Debug + Optimized)

        > runtest 64 v3.5
        Compiler: Framework64\v3.5\csc.exe
        True => x86
        True => x86 (Optimized)
        True => x86 (Debug)
        True => x86 (Debug + Optimized)
        True => x64
        True => x64 (Optimized)
        True => x64 (Debug)
        True => x64 (Debug + Optimized)
        True => AnyCPU
        True => AnyCPU (Optimized)
        True => AnyCPU (Debug)
        True => AnyCPU (Debug + Optimized)
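    A hedged workaround sketch (an assumption on my part, not from the original post): comparing runtime types directly sidesteps the 'is'-on-struct code path that the x86 JIT appears to mis-optimize; the explicit null check preserves the semantics of 'is', which returns false for null:

        using System;

        public class Program
        {
            // Equivalent to "item is Guid" for this use, but avoids the
            // pattern that the broken optimizer mishandles.
            public static bool IsGuid(object item)
            {
                return item != null && item.GetType() == typeof(Guid);
            }

            public static void Main()
            {
                Console.Write(IsGuid(Guid.NewGuid())); // expected: True
            }
        }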

    Read the article

  • Unity3D draw call optimization : static batching VS manually draw mesh with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw call batching documentation. I understood it, and I want to use it (or something similar) to optimize my application. My situation is the following:

    - I'm drawing hundreds of 3D buildings.
    - Each building can be represented using a Mesh (or a SubMesh for each building, but I don't think this will affect performance).
    - Each building can be textured with several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures).
    - Texture mapping and facade pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial in order to optimize the parameters passed to the GPU.

    The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few; it could even be just a float, I guess) that must be different for every draw call. I don't know exactly how to manage this situation using Unity3D. I'm trying two different solutions, neither of them completely implemented.

    Solution 1

    1. Build a GameObject for each building (I don't like the overhead of a GameObject very much, anyway...).
    2. Prepare each GameObject to be static batched with StaticBatchingUtility.Combine.
    3. Pack all textures into an atlas.
    4. Assign the parent GameObject of the combined batched objects the Material (basically the shader and the atlas).
    5. Change some properties in the material before drawing an object.

    The problem is step 5. Let's say I have to assign a different id to an object before drawing it; how can I do this? If I use a different material for each object, I can't benefit from static batching. If I use a sharedMaterial and I modify a material property, all GameObjects will reference the same modified variable.

    Solution 2 (see the sketch below)

    1. Build a Mesh for every building (sounds better: no GameObject overhead).
    2. Pack all textures into an atlas.
    3. Draw each mesh manually using Graphics.DrawMesh.
    4. Customize each DrawMesh call using a MaterialPropertyBlock.

    This would solve the issue of slightly modifying material properties for each draw call, but the documentation isn't clear on the following point: do several consecutive calls to Graphics.DrawMesh with a different MaterialPropertyBlock cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material, and optimize that (in such a way that the big atlas is passed just once to the GPU)?
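    A minimal sketch of Solution 2 (assuming a Unity version where MaterialPropertyBlock.SetFloat is available, and a shader exposing a hypothetical per-building float property _BuildingId). Per-draw overrides supplied this way modify the draw call rather than the material, so the shared material and its atlas stay shared:

        using UnityEngine;

        // Sketch: one shared material for all buildings; only the
        // MaterialPropertyBlock varies per draw call.
        public class BuildingDrawer : MonoBehaviour
        {
            public Mesh[] buildingMeshes;           // one mesh per building
            public Material sharedBuildingMaterial; // same instance for all

            private MaterialPropertyBlock props;

            void Update()
            {
                if (props == null) props = new MaterialPropertyBlock();

                for (int i = 0; i < buildingMeshes.Length; i++)
                {
                    props.Clear();
                    props.SetFloat("_BuildingId", i); // hypothetical property
                    Graphics.DrawMesh(buildingMeshes[i], Matrix4x4.identity,
                                      sharedBuildingMaterial, 0, null, 0, props);
                }
            }
        }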

    Read the article

  • Keyword Research - Introducing Keyword Research

    What is keyword research? Let us start by saying that keywords are the words and phrases that people type into search engine search boxes to get information to solve their problems or to find out about their interests. Keyword research is the art of finding out what these keywords are so that you can optimize your marketing or websites for them and capture some of the search traffic from these search engines. The better your keyword research, the better you can optimize your website or marketing to find those hungry buyers for your products and/or services.

    Read the article

  • Most efficient arc for developing cross-browser support?

    - by Chris Hasbrouck
    I'm curious to hear what approach people take to planning for cross-browser support when developing a website. There are generally two approaches I've seen developers take in their workflow:

    - optimize for WebKit, then apply hacks for IE7-9, or
    - optimize for IE7-8, then apply newer features for IE9/WebKit

    Basically, starting at the front of technology and working toward the back, or starting at the back of technology and working toward the front. How do you do things? What advantages or disadvantages do you perceive in the different ways of doing things with respect to developing cross-browser support?

    Read the article

  • How to explain to a non-technical person why the task will take much longer than they think?

    - by Mag20
    Almost every developer has to answer questions from the business side like: why is it going to take 2 days to add this simple contact form? When a developer estimates this task, they may divide it into steps:

    - make some changes to the database
    - optimize the database changes for speed
    - add front-end HTML
    - write server-side code
    - add validation
    - add client-side JavaScript
    - write unit tests
    - make sure the SEO setup is working
    - implement email confirmation
    - refactor and optimize the code for speed
    - ...

    These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours MAX. So is there a better way to explain why the estimate is high to a non-developer?

    Read the article

  • WebCenter Customer Spotlight: Marvel

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Marvel Entertainment, LLC (Marvel) is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media over seventy years. The customer wanted to optimize their brand licensing process, so Marvel worked with Oracle WebCenter partner Fishbowl Solutions to implement a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure intranet/partner-extranet solution now manages the entire life cycle of the brand licensing process. Marvel and their brand licensees now have complete visibility into brand license operations, including the history of approval requests and related content.

    Company Overview
    Marvel Entertainment, LLC (Marvel), a wholly-owned subsidiary of The Walt Disney Company, is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media over seventy years. Marvel utilizes its character franchises in entertainment, licensing and publishing. Sample characters:

    - Spider-Man
    - Iron Man
    - Captain America
    - X-MEN
    - Thor
    - Avengers
    - And a host of others

    Business Challenges
    Marvel wanted to optimize the brand licensing process for their characters and had the following business requirements:

    - Facilitating content worldwide
    - A scalable and flexible infrastructure to manage multiple content types and huge file sizes
    - Optimizing the licensing process workflow through automatic notifications, tracking reviews, issuing approvals, etc.

    Solution Deployed
    Marvel worked with Oracle WebCenter partner Fishbowl Solutions to implement a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure intranet/partner-extranet solution now manages the entire life cycle of the brand licensing process. Internal users can now manage all digital assets related to a character through proper categorization of all items, workflow-based review and approval of branding styles, and a powerful search and retrieval service. Licensees of Marvel brands can now develop and submit concepts and prototypes online, which are reviewed and approved using a collaborative process.

    Business Results
    Marvel and their brand licensees now have complete visibility into brand license operations, including the history of approval requests and related content. Character-brand-related content is now in the right place at the right time, at the user's fingertips, with highly improved quality.

    Additional Information
    - Marvel Open World Presentation
    - Oracle WebCenter Content

    Read the article

  • Disconnected Online Customer Experience? Connect It with Oracle WebCenter

    - by Christie Flanagan
    Engage. Empower. Optimize. Today's customers have higher expectations and more choices than ever before. Successful organizations must deliver an engaging online experience that is personalized, interactive and consistent across all phases of the customer journey. This requires a new approach that connects and optimizes all customer touch points as they research, select and transact with your brand. Attend this webcast to learn how Oracle WebCenter:

    - Works with Oracle ATG Commerce and Oracle Endeca to deliver consistent and engaging browsing, shopping and search experiences across all of your customer-facing websites
    - Enables you to optimize the performance of your online initiatives through integration with Oracle Real-Time Decisions for automated targeting and segmentation
    - Connects with Siebel CRM to maintain a single view of the customer and integrate campaigns across channels

    Register now for the webcast: Thurs., November 8, 2012, 10 a.m. PT / 1 p.m. ET

    Presented by:
    - Joshua Duhl, Senior Principal Product Manager, Oracle WebCenter
    - Christie Flanagan, Director of Product Marketing, Oracle WebCenter

    Read the article

  • How do I avoid "Developer's Bad Optimization Intuition"?

    - by Mona
    I saw an article that put forth this statement: "Developers love to optimize code and with good reason. It is so satisfying and fun. But knowing when to optimize is far more important. Unfortunately, developers generally have horrible intuition about where the performance problems in an application will actually be." How can a developer avoid this bad intuition? Are there good tools for finding which parts of your code really need optimization (for Java)? Do you know of some articles, tips, or good reads on this subject?

    Read the article

  • Google I/O 2012 - Breaking the JavaScript Speed Limit with V8

    Google I/O 2012 - Breaking the JavaScript Speed Limit with V8. Speaker: Daniel Clifford. Are you interested in making JavaScript run blazingly fast in Chrome? This talk takes a look under the hood of V8 to help you identify how to optimize your JavaScript code. We'll show you how to leverage V8's sampling profiler to eliminate performance bottlenecks and optimize JavaScript programs, and we'll expose how V8 uses hidden classes and runtime type feedback to generate efficient JIT code. Attendees will leave the session with solid optimization guidelines for their JavaScript app and a good understanding of how to best use performance tools and JavaScript idioms to maximize the performance of their application with V8. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 47:35.

    Read the article

  • You Can Deliver an Engaging Online Experience Across All Phases of the Customer Journey

    - by Christie Flanagan
    Engage. Empower. Optimize. Today’s customers have higher expectations and more choices than ever before. To succeed in this environment, organizations must deliver an engaging online experience that is personalized, interactive and consistent across all phases of the customer journey. This requires a new approach that connects and optimizes all customer touch points as they research, select and transact with your brand. Oracle WebCenter Sites combines with other customer experience applications such as Oracle ATG Commerce, Oracle Endeca, Oracle Real-Time Decisions and Siebel CRM to deliver a connected customer experience across your websites and campaigns. Attend this webcast to learn how Oracle WebCenter:

    - Works with Oracle ATG Commerce and Oracle Endeca to deliver consistent and engaging browsing, shopping and search experiences across all of your customer-facing websites
    - Enables you to optimize the performance of your online initiatives through integration with Oracle Real-Time Decisions for automated targeting and segmentation
    - Connects with Siebel CRM to maintain a single view of the customer and integrate campaigns across channels

    Register now for the webcast.

    Read the article

  • Free Optimization Library in C#

    - by Ngu Soon Hui
    Is there any optimization library in C#? I have to optimize a complicated equation from Excel; the equation has a few coefficients, and I have to tune them according to a fitness function that I define. So I wonder whether there is such a library that does what I need?
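    In the meantime, a minimal derivative-free sketch (an illustration, not a library recommendation): coordinate search perturbs one coefficient at a time, keeps improvements, and halves its step when nothing improves. The quadratic fitness below is a hypothetical stand-in for the Excel equation:

        using System;

        class CoefficientFitter
        {
            // Hypothetical fitness function: minimized at c0 = 3, c1 = -1.
            static double Fitness(double[] c)
            {
                return Math.Pow(c[0] - 3, 2) + Math.Pow(c[1] + 1, 2);
            }

            // Try +/- step on each coefficient, keep improvements,
            // halve the step when a full sweep makes no progress.
            static double[] Minimize(Func<double[], double> f, double[] start)
            {
                var x = (double[])start.Clone();
                double best = f(x);
                double step = 1.0;
                while (step > 1e-8)
                {
                    bool improved = false;
                    for (int i = 0; i < x.Length; i++)
                    {
                        foreach (double d in new[] { step, -step })
                        {
                            x[i] += d;
                            double v = f(x);
                            if (v < best) { best = v; improved = true; }
                            else x[i] -= d; // revert the move
                        }
                    }
                    if (!improved) step /= 2;
                }
                return x;
            }

            static void Main()
            {
                double[] c = Minimize(Fitness, new double[] { 0, 0 });
                Console.WriteLine("c0 = {0:F4}, c1 = {1:F4}", c[0], c[1]); // ~3, ~-1
            }
        }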

    Read the article

  • Benchmarking a particular method in Objective-C

    - by Jasconius
    I have a critical method in an Objective-C application that I need to optimize as much as possible. I first need to take some easy benchmarks on this one single method so I can compare my progress as I optimize. What is the easiest way to track the execution time of a given method in, say, milliseconds, and print that to the console?
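    One minimal approach (a sketch, not from the post): CFAbsoluteTimeGetCurrent from CoreFoundation returns seconds as a double, so wrapping the call and logging the difference gives millisecond-scale numbers; mach_absolute_time is the usual next step if finer resolution is needed:

        #import <Foundation/Foundation.h>

        // Hypothetical stand-in for the method under test.
        static void criticalWork(void)
        {
            double acc = 0;
            for (int i = 0; i < 1000000; i++) acc += i * 0.5;
            (void)acc;
        }

        int main(void)
        {
            @autoreleasepool {
                CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
                criticalWork();
                CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - start;
                NSLog(@"criticalWork took %.3f ms", elapsed * 1000.0);
            }
            return 0;
        }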

    Read the article

  • mysql query optimization

    - by vamsivanka
    I need some help on how to optimize this query:

        select * from transaction where id < 7500001 order by id desc limit 16

    When I run an EXPLAIN plan on this, the type is "range" and rows is "7500000". According to some online references, this means the query had to scan 7,500,000 rows to get the data. Is there any way I can optimize it so it scans fewer rows to get the data? Also, id is the primary key column.
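    One thing worth checking before rewriting anything (a suggestion, assuming MySQL with an InnoDB primary key): the EXPLAIN rows figure is the optimizer's estimate for the whole id < 7500001 range, not the number of rows actually read; with ORDER BY id DESC LIMIT 16 walking the primary key backwards, only about 16 rows should be touched. The session handler counters can confirm this:

        -- Reset session counters, run the query, then see how many rows
        -- the storage engine actually returned.
        FLUSH STATUS;
        SELECT * FROM transaction WHERE id < 7500001 ORDER BY id DESC LIMIT 16;
        SHOW SESSION STATUS LIKE 'Handler_read%';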

    Read the article

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a new classified is added or deleted. I need a way to automate the commit() and optimize() calls, for example once every 3 hours or so. How can I do this? (Details, please.) Also, when is it ideal to optimize? Thanks
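    A hedged configuration sketch: Solr can take care of commits itself via the autoCommit block in solrconfig.xml, and the periodic optimize can be scheduled externally against the update handler (the URL and thresholds below are assumptions to adapt):

        <!-- solrconfig.xml: commit pending documents automatically -->
        <updateHandler class="solr.DirectUpdateHandler2">
          <autoCommit>
            <maxDocs>1000</maxDocs>  <!-- commit after 1000 queued docs -->
            <maxTime>60000</maxTime> <!-- ...or after 60 seconds -->
          </autoCommit>
        </updateHandler>

        # crontab entry: ask Solr to optimize every 3 hours
        0 */3 * * * curl -s 'http://localhost:8983/solr/update?optimize=true' > /dev/null

    Optimizing rewrites the index and is I/O-heavy, so off-peak hours are the usual choice.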

    Read the article

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (9 seconds after the first move), and I'm told it should take only a few milliseconds. So I'm trying to optimize it. I ran it through CodeAnalyst, and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board):

        std::_Iterator_base::_Orphan_me
        std::bitset<9>::test
        std::_Iterator_base::_Adopt
        std::bitset<9>::reference::operator bool
        std::_Iterator_base::~_Iterator_base

        void BuildTreeToDepth(Node &nNode, const int& nextPlayer, int depth)
        {
            if (depth > 0)
            {
                //Calculate gameboard states
                int evalBoard = nNode.m_board.CalculateBoardState();
                bool isFinished = nNode.m_board.isFinished();
                if (isFinished || (nNode.m_board.isWinner() > 0))
                {
                    nNode.m_winCount = evalBoard;
                }
                else
                {
                    Ticboard tBoard = nNode.m_board;
                    do
                    {
                        int validMove = tBoard.FirstValidMove();
                        if (validMove != -1)
                        {
                            Node f;
                            Ticboard tempBoard = nNode.m_board;
                            tempBoard.Move(validMove, nextPlayer);
                            tBoard.Move(validMove, nextPlayer);
                            f.m_board = tempBoard;
                            f.m_winCount = 0;
                            f.m_Move = validMove;
                            int currPlay = (nextPlayer == 1 ? 2 : 1);
                            BuildTreeToDepth(f, currPlay, depth - 1);
                            nNode.m_winCount += f.m_board.CalculateBoardState();
                            nNode.m_branches.push_back(f);
                        }
                        else
                        {
                            break;
                        }
                    } while (true);
                }
            }
        }

    Where should I be looking to optimize it? How should I optimize these 5 calls (I don't recognize them)?
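    One hedged observation (not from the thread): all five of those hot functions are MSVC's checked-iterator bookkeeping rather than game logic, which suggests a Debug build or iterator debugging was being profiled; timing a Release build first may change the picture entirely. Disabling the checks explicitly is also possible:

        // Must be defined before any standard header is included.
        // VS2010 uses _ITERATOR_DEBUG_LEVEL; older compilers use _SECURE_SCL.
        #define _ITERATOR_DEBUG_LEVEL 0
        #include <bitset>
        #include <vector>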

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. More than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function:

        inline int population_count64(unsigned __int64 w)
        {
            w -= (w >> 1) & 0x5555555555555555ULL;
            w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
            w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
            return int((w * 0x0101010101010101ULL) >> 56);
        }

    So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
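    For question (1), one hedged idea (not from the thread): when only values up to 15 are meaningful, Kernighan's clear-the-lowest-set-bit loop can stop as soon as the count passes 15, costing one iteration per set bit and never more than 16. Whether it beats the multiply version above depends on how sparse the words typically are:

        /* Sketch: exact count for results 0..15; returns 16 to mean "more than 15". */
        static inline int population_count64_max15(unsigned __int64 w)
        {
            int c = 0;
            while (w != 0 && c < 16)
            {
                w &= w - 1; /* clears the lowest set bit */
                ++c;
            }
            return c;
        }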

    Read the article

  • Skype Optimization - Port Forwarding on a Router

    - by user19185
    I was watching this video, which talked about using port forwarding to optimize your LAN for Skype calls. According to the video, as explained in its first couple of minutes, the reason you would need optimization is that if the person you call has a firewall set up, your connection has to go through a third-party computer to reach them. I believe I stated this correctly (maybe not). Nonetheless, my question is this: do both parties on the call need to enable port forwarding to optimize Skype, or just one party (person)?

    Read the article

  • OPN Exchange @ OpenWorld – The Don’t Miss List!

    - by Oracle OpenWorld Blog Team
    By the OPN Communications Team

    Are you attending Oracle PartnerNetwork Exchange @ OpenWorld? If so, don’t miss these exciting events taking place throughout the week of the conference.

    Sunday, September 30
    - The Global Partner Keynote with Judson Althoff and other senior executives (1:00 p.m.)
    - OPN Exchange General Sessions that provide an overview of each OPN Exchange track, including Cloud, Engineered Systems, Industries, Technology and Applications (3:30 p.m.)
    - The Social Media Rally Station, where partners can learn how to optimize their online presence (3:00 - 5:00 p.m.)
    - The exclusive OPN Exchange AfterDark Reception, complete with the smooth sounds of Macy Gray (7:30 p.m.)

    Monday, October 1
    - 5K Partner Fun Run (6:00 a.m. - meet us at the W Hotel lobby, no registration necessary!)
    - The Social Media Rally Station, where partners can learn how to optimize their online presence (10:00 a.m. - 6:00 p.m.)

    Throughout the week of the conference
    - Over 40+ OPN Exchange sessions
    - Test Fest exams
    - Networking opportunities at the OPN Lounge; lunches at the Howard Street Tent; food, drink, and talk at the Oracle OpenWorld Music Festival @ It’s a Wrap!; and much more!

    We look forward to seeing you there.

    Read the article

  • How does an optimizing compiler react to a program with nested loops?

    - by D.Singh
    Say you have a bunch of nested loops.

        public void testMethod() {
            for (int i = 0; i < 1203; i++) {
                // some computation
                for (int k = 2; k < 123; k++) {
                    // some computation
                    for (int j = 2; j < 12312; j++) {
                        // some computation
                        for (int l = 2; l < 123123; l++) {
                            // some computation
                            for (int p = 2; p < 12312; p++) {
                                // some computation
                            }
                        }
                    }
                }
            }
        }

    When the above code reaches the stage where the compiler tries to optimize it (I believe it's when the intermediate language needs to be converted to machine code?), what will the compiler try to do? Is there any significant optimization that will take place? I understand that the optimizer can break up loops by means of loop fission, but that is only per loop, isn't it? What I mean with my question is: will it take any action exclusively based on seeing the nested loops, or will it just optimize the loops one by one? If the Java VM complicates the explanation, then please just assume that it's C or C++ code.
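    As one concrete illustration (an addition, not from the discussion): a classic transformation that does look across a nest is loop-invariant code motion, where work that cannot change inside an inner loop is hoisted into the enclosing one. HotSpot's JIT and C/C++ optimizers both apply it; the second method shows the effective result of hoisting in the first:

        public class HoistExample {
            // Before: a[i] * scale is recomputed on every inner iteration.
            static void fill(double[] out, double[] a, double[] b, double scale) {
                for (int i = 0; i < a.length; i++) {
                    for (int j = 0; j < b.length; j++) {
                        out[i * b.length + j] = a[i] * scale + b[j];
                    }
                }
            }

            // After: what loop-invariant code motion effectively produces.
            static void fillHoisted(double[] out, double[] a, double[] b, double scale) {
                for (int i = 0; i < a.length; i++) {
                    double ai = a[i] * scale; // invariant with respect to j
                    for (int j = 0; j < b.length; j++) {
                        out[i * b.length + j] = ai + b[j];
                    }
                }
            }
        }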

    Read the article

  • Partner Webcast - Is your Application Ready? Prove it with the Oracle Exastack Program

    - by Thanos
    At Oracle we design Engineered Systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimizes performance at every IT layer to simplify business operations, drive down costs and accelerate business innovation. As the Engineered Systems foundation platforms, Oracle Exadata and Oracle Exalogic run all of Oracle Cloud's services across a range of global data centers, delivering extreme performance, massive scalability, and fault tolerance with no single point of failure. The Oracle Exastack Program enables you as an ISV to leverage Oracle's scalable, integrated infrastructure to test, tune and optimize your applications for high performance. By getting Exastack Ready and Exastack Optimized, your applications get formal recognition from Oracle and additional visibility, while you as an ISV receive an additional set of OPN benefits. Don't miss this opportunity to learn more about how you can optimize your applications to run faster and more reliably leveraging Oracle Exastack, and also become more competitive by letting everybody know you are ready.

    Agenda:
    - Oracle Engineered Systems Strategy
    - OPN Exastack Program Benefits & Objectives
    - Value for You
    - Oracle is resourced for your success
    - How to Apply - Demo
    - Next Steps & Useful Contacts

    Delivery Format
    This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Thursday 06 December 2012, 10.00 CET (GMT+1). Duration: 1 hour. Register Now!

    For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. Existing content available on YouTube - SlideShare - Oracle Mix.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >