Search Results

Search found 13341 results on 534 pages for 'obiee performance tuning'.

Page 7/534 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Will JVisualVM degrade application performance?

    - by rocky
    I have some doubts about the JVisualVM profiler related to performance. I have a requirement to implement JVM monitoring for my enterprise Java application. I have gone through some profiling tools on the market, but all of them rely on some kind of agent that has to be included in the server startup. My fear is that these agents will degrade my application's performance even further. So I have decided on JVisualVM, because this profiler comes with the JDK itself. Before implementing JVisualVM, has anybody faced any issues with it? Also, is it safe to use against a production application?
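    For reference, VisualVM attaches to a JVM on the same machine with no agent or special startup flags at all; remote monitoring only needs the standard JMX system properties. A minimal sketch of the startup line (the port, the disabled authentication/SSL, and myapp.jar are illustrative assumptions, not production settings):

        java -Dcom.sun.management.jmxremote \
             -Dcom.sun.management.jmxremote.port=9010 \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             -jar myapp.jar

    In general, plain monitoring over JMX (heap, threads, GC, loaded classes) is cheap; it is the sampler and profiler tabs that add measurable overhead, so keep profiling sessions short against a production JVM.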

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
Currently you can download Visual Studio Async to get some idea of what is coming. Conclusion: Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. In two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming can raise the performance of your application significantly.
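    The posting mentions that TPL lets you cap the degree of parallelism so a core stays free for the host system, but the excerpt does not show that knob. A minimal standalone C# sketch of both the TPL and PLINQ variants (the blogs array and the ImportFeed stub are placeholders, not the author's code):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class DegreeOfParallelismSketch
        {
            static void ImportFeed(string url)
            {
                // Stand-in for the download/parse/save work described in the posting.
                Console.WriteLine("Importing " + url);
            }

            static void Main()
            {
                var blogs = new[] { "http://blog-a.example/rss", "http://blog-b.example/rss" };
                int workers = Math.Max(1, Environment.ProcessorCount - 1);

                // Task parallelism: Parallel.ForEach with an explicit cap.
                var options = new ParallelOptions { MaxDegreeOfParallelism = workers };
                Parallel.ForEach(blogs, options, url => ImportFeed(url));

                // Data parallelism: the equivalent PLINQ knob.
                blogs.AsParallel()
                     .WithDegreeOfParallelism(workers)
                     .ForAll(url => ImportFeed(url));
            }
        }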

    Read the article

  • Programmer performance

    - by RSK
    I am a PHP programmer with 1 year of experience. As I am just starting my career, I am learning a lot of things now. I can say I am a little bit of a perfectionist. When I am assigned a problem I start off by Googling. Then, even when I find a solution, I keep trying for a better one until I find 2-3 options. Then I start learning it and choose the best performing solution. Even though I am learning a lot, this process gets me labeled as a low performer. My questions: As a novice, should I continue to use this learning process and not worry about my performance? Should I focus more on my performance and less on how the code performs?

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying them without proper testing can damage your system. Question: “Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.” When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built from the community knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it. NTFS File System Performance Best Practices for SQL Server: Disable indexing on disk volumes. Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1). Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1). Keep some space empty (say 15% for reference) on the drive if possible (only on the Filestream data storage volume). Defragment the volume. Add your suggestions here… The one I often get into a pretty big debate about is NTFS allocation unit size. I have seen that on the disk volume which stores Filestream data, increasing the allocation unit size to 64K from 4K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. Here I would like to request that you share your experience related to NTFS allocation unit size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify it. Please note that the above list was prepared assuming that SQL Server is the only application running on the system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
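    One concrete, hedged addition for the allocation-size point above: you can check the current NTFS allocation unit size of a volume and, for a new or empty volume dedicated to SQL Server data or Filestream files, format it with 64K clusters. The drive letter is an assumption, and format erases the volume, so test first and never run it against a volume that holds data you need:

        fsutil fsinfo ntfsinfo F:        (look at the "Bytes Per Cluster" line)
        format F: /FS:NTFS /A:64K /Q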

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions have similar tips; generally covering: Use GL face culling (the OpenGL function, not the scene logic) Only send 1 matrix to the GPU (projectionModelView combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be). Use interleaved Vertices Minimize as many GL calls as possible, batch where appropriate And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am receiving around 40FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use? My CPU is idling around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment Cross posted at StackOverflow

    Read the article

  • Service and/or tool to monitor performance?

    - by chris
    I am seeing wildly different performance from a clients web site, and would like to set up some sort of monitoring. What I'm looking for is a service that will issue requests to a couple of URLs, and report on the time it took to process the page - TTFB and time to download the entire page - that means I need something that will process javascript & css. Are there services like this? I've seen a few that monitor uptime, but they don't seem to report on the overall page performance.

    Read the article

  • Ios Game with many animated Nodes,performance issues

    - by user31929
    I'm working on a top-down game with a large map (not a tiled map); the map I use is a city. I have to insert many nodes to create the "life of the city", something like people that cross the streets, cars, etc. Some of these characters are involved in physics and game logic, but others are only graphical characters. As far as I know, the only way I can achieve this result is to create each character node, with or without a physics body, and animate each character with a texture atlas. This way I think I'll have many performance problems (the characters will be something like 100/150), even if I apply all the performance tips that I know... My question is: with large numbers of characters, is there another programming pattern that I must follow? What is the approach of games like SimCity, The Simpsons: Tapped Out for iOS, etc., that have so many animations running at the same time?

    Read the article

  • Further Performance Tuning on Medium SharePoint Farm?

    - by elorg
    I figured I would post this here, since it may be related more to the server configuration than the SharePoint configuration or a combination of both? I'm open for ideas to try, or even feedback on things that maybe have been configured incorrectly as far as performance is concerned. We have a medium MOSS 2007 install prepped and ready for receiving the WSS 2003 data to upgrade. The environment was originally architected by a previous coworker, and I have since added a few configuration modifications to assist with performance before we finally performed the install. When testing the new site collections & SharePoint install (no actual data yet), things seemed a bit slow. I had assumed that it was because I was accessing it remotely. Apparently the client is still experiencing this and it is unacceptably slow. 1 SQL Server running SQL Server 2008 2x SharePoint WFEs - hosting queries (no index) 1x SharePoint Index - hosting index (no queries) MOSS 2007 installed and patched up through December '09 on WFEs & Index All 4 servers are VMs, should have more than sufficient disk space & RAM (don't recall at the moment), and are running Windows Server 2008 - everything is 64-bit. The WFEs have Windows NLB configured, with a DNS name & IP for the NLB cluster. Single NIC on each server (virtual, since VMWare). The Index server is configured as a WFE (outside of the NLB cluster) so that it can index itself and replicate the indexes to the WFEs that will serve the queries. Everything is configured & working properly - it just takes a minute or two to load a page on the local LAN. The client is still using their old portal (we haven't started the migration/upgrade just yet) so there's virtually no data or users. We need to either further tune the configuration, or fix anything that may have been configured incorrectly which is causing this slowness? I've already reviewed & taken into account everything that I could find that was relevant before we even started the install. Does anyone have ideas or pointers? Perhaps there's something that I've missed?

    Read the article

  • Buzzword for "performance-aware" software development

    - by errantlinguist
    There seems to be an overabundance of buzzwords for software development styles and methodologies: Agile development, extreme programming, test-driven development, etc... well, is there any sort of buzzword for "performance-aware" development? By "performance awareness", I don't necessarily mean low-latency or low-level programming, although the former would logically fall under the blanket term I'm looking for. I mean development in which resources are recognised to be finite and so there is a general emphasis on low computational complexity, good resource management, etc. If I was to be snarky, I would say "good programming", but that doesn't seem to get the message across so well...

    Read the article

  • OBIEE 11.1.1.5.0BP4 Bundle Patch now Available

    - by THE
    Oracle Business Intelligence Enterprise Edition - OBIEE 11.1.1.5.4 (a.k.a. 11.1.1.5.0BP4) bundle patch is now available for download from My Oracle Support. The 11.1.1.5.4 bundle patch includes 40 bug fixes. The 11.1.1.5.4 bundle patch is cumulative, so it includes everything in 11.1.1.5.0PS1, 11.1.1.5.0BP2 and 11.1.1.5.0BP3. Please note that this release is only recommended for BI customers. Bundled Patch Details: Bug# 14517379 (tracking bug for OBIEE bundle patch 11.1.1.5.4). This patch is available for the supported platforms: Microsoft Windows (32-bit), Linux x86 (32-bit), Microsoft Windows x64 (64-bit), Linux x86-64 (64-bit), Oracle Solaris on SPARC (64-bit), Oracle Solaris on x86-64 (64-bit), IBM AIX PPC (64-bit), HP-UX IA (64-bit).

    Read the article

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem Hey folks. I'm looking for some advice on python performance. Some background on my problem: Given: A mesh of nodes of size (x,y) each with a value (0...255) starting at 0 A list of N input coordinates each at a specified location within the range (0...x, 0...y) Increment the value of the node at the input coordinate and the node's neighbors within range Z up to a maximum of 255. Neighbors beyond the mesh edge are ignored. (No wrapping) BASE CASE: A mesh of size 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time. Current results I have 2 current implementations: f1, f2 Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1: f1: 2.9s f2: 1.8s f1 is the initial naive implementation: three nested for loops. f2 is replaces the inner for loop with a list comprehension. Code is included below for your perusal. Question How can I further reduce the processing time? I'd prefer sub-1.0s for the test parameters. Please, keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third party packages. Also, I've generated random input coordinates, and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question. thanks much! f1 is the initial naive implementation: three nested for loops. 2.9s def f1(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)): if rows[i][j] <= 255: rows[i][j] += 1 f2 is replaces the inner for loop with a list comprehension. 1.8s def f2(x,y,n,z): rows = [] for i in range(x): rows.append([0 for i in xrange(y)]) for i in range(n): inputX, inputY = (int(x*random.random()), int(y*random.random())) topleft = (inputX - z, inputY - z) for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)): l = max(0, topleft[1]) r = min(topleft[1]+(z*2), y) rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]
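    One small observation on f2 above: the "if j < 255" filter in the slice assignment drops elements once they reach 255, so the row gets shorter over time; a conditional expression clamps instead and keeps the row length constant. A native-Python sketch that follows the structure of f2 with only that change (whether it is also faster should be measured with the same harness):

        import random

        def f3(x, y, n, z):
            rows = [[0] * y for _ in range(x)]
            for _ in range(n):
                input_x = int(x * random.random())
                input_y = int(y * random.random())
                top, left = input_x - z, input_y - z
                l = max(0, left)
                r = min(left + (z * 2), y)
                for i in range(max(0, top), min(top + (z * 2), x)):
                    row = rows[i]
                    # Clamp at 255 instead of filtering, so the row keeps its length.
                    row[l:r] = [v + 1 if v < 255 else 255 for v in row[l:r]]
            return rows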

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore, foreign keys), indexes (clustered or otherwise), constraints, on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old fashioned sweat and grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as: exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4 and when analyzed, these statements have the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here? So, I have multiple questions: How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch? Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?
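    Independent of the tracing question, the repeated lookup shown above filters on CompanyID plus StartDateTime IS NOT NULL AND EndDateTime IS NULL, so on SQL Server 2008 one candidate (assuming no suitable index already exists, and to be verified against the actual execution plans rather than taken as the Tuning Advisor's answer) is a filtered, covering nonclustered index shaped like the query:

        CREATE NONCLUSTERED INDEX IX_Company_CompanyID_Current
        ON dbo.Company (CompanyID)
        INCLUDE (Active, CompanyName, CompanyShortName, CompanyTypeID, HierarchyNodeID)
        WHERE StartDateTime IS NOT NULL AND EndDateTime IS NULL;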

    Read the article

  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed. This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that. Which led me to the thought of Performance-Driven Development, or the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing-search shows me a couple of offerings from Forrester and a professional in Israel who already used that title, but the general idea I'm thinking of is assigning a "cost" to each code round-trip, be it network, storage, trip time and other variables, and then rewarding the developers that come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...
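    A tiny illustration of the "pay per byte" idea (the table and its wide Notes column are hypothetical):

        -- Ask only for what the caller uses...
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= '20100101';

        -- ...instead of SELECT * FROM dbo.Orders, which would also drag the
        -- hypothetical Notes VARCHAR(MAX) column across the wire on every row.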

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server SQL SERVER – What the Business Says Is Not What the Business Wants I received many comments on Why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example. Create a test database Create two tables and populate with data Check the size of both the tables Size of database is very low Check the Fragmentation of one table Fragmentation will be very low Truncate another table Check the size of the table Check the fragmentation of the one table Fragmentation will be very low SHRINK Database Check the size of the table Check the fragmentation of the one table Fragmentation will be very HIGH REBUILD index on one table Check the size of the table Size of database is very HIGH Check the fragmentation of the one table Fragmentation will be very low Here is the script for the same. USE MASTER GO CREATE DATABASE ShrinkIsBed GO USE ShrinkIsBed GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Create FirstTable CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ( [ID] ASC ) ON [PRIMARY] GO -- Create SecondTable CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ( [ID] ASC ) ON [PRIMARY] GO -- Insert One Hundred Thousand Records INSERT INTO FirstTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Insert One Hundred Thousand Records INSERT INTO SecondTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and Fragmentation. 
You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the Shrink database operation, we were able to reduce the size of the database. If you notice the fragmentation, it is considerably high. The major problem with the Shrink operation is that it increases the fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index on the database. Let us rebuild the index and observe fragmentation and database size. -- Rebuild Index on SecondTable ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases to much more than the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking a database.
One person shrinks the database to gain space (thinking it will help performance), which leads to increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds index, which leads to size of the database to increase way more than the original size of the database (before shrinking). Well, by Shrinking, one did not gain what he was looking for usually. Rebuild indexing is not the best suggestion as that will create database grow again. I have always remembered the excellent post from Paul Randal regarding Shrinking the database is bad. I suggest every one to read that for accuracy and interesting conversation. Let us run following script where we Shrink the database and REORGANIZE. -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Shrink the Database DBCC SHRINKDATABASE (ShrinkIsBed); GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Rebuild Index on FirstTable ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I no way suggest that REORGANIZE is the solution over here. This is purely observation using demo. Read the blog post of Paul Randal. Following script will clean up the database -- Clean up USE MASTER GO ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO DROP DATABASE ShrinkIsBed GO There are few valid cases of the Shrinking database as well, but that is not covered in this blog post. We will cover that area some other time in future. Additionally, one can rebuild index in the tempdb as well, and we will also talk about the same in future. Brent has written a good summary blog post as well. Are you Shrinking your database? Well, when are you going to stop Shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
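    One hedged footnote to the demo above: when the goal is only to give back unused space at the end of a data file, DBCC SHRINKFILE with TRUNCATEONLY releases that free tail without moving pages, so it does not add fragmentation (it also frees less space than a full shrink). Assuming the demo database's default logical file name:

        USE ShrinkIsBed;
        GO
        -- Releases only the free space at the end of the file; no pages are moved.
        DBCC SHRINKFILE (ShrinkIsBed, TRUNCATEONLY);
        GO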

    Read the article

  • VB.Net IO performance

    - by CFP
    Having read this page, I can't believe that VB.Net has such terrible performance when it comes to I/O. Is this still true today? How does the .Net Framework 2.0 perform in terms of I/O (that's the version I'm targeting)?

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior Laerte Junior has previously helped me personally to resolve the issue with Powershell installation on my computer. He did awesome job to help. He has send this another wonderful article regarding performance counter for readers of this blog. I really liked it and I expect all of you who are Powershell geeks, you will like the same as well. As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company’s place, of course. So what we have to do is to create methods through which we can facilitate our daily processes to go home early, and eventually have a nice time with our family (and not sleeping on the couch). As a consultant or fixed employee, one of our daily tasks is to monitor performance counters using Perfmom. To be honest, IDE is getting more complicated. To deal with this, I thought a solution using Powershell. Yes, with some lines of Powershell, you can configure which counters to use. And with one more line, you can already start collecting data. Let’s see one scenario: You are a consultant who has several clients and has just closed another project in troubleshooting an SQL Server environment. You are to use Perfmom to collect data from the server and you already have its XML configuration files made with the counters that you will be using- a file for memory bottleneck f, one for CPU, etc. With one Powershell command line for each XML file, you start collecting. The output of such a TXT file collection is set to up in an SQL Server. With two lines of command for each XML, you make the whole process of data collection. Creating an XML configuration File to Memory Counters: Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance  | Get-PerfCounterCounters |Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -newfile Creating an XML Configuration File to Buffer Manager, counters Page lookups/sec, Page reads/sec, Page writes/sec, Page life expectancy: Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" –NewFile Then you start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt To let the Buffer Manager collect, you need one more counters, including the Buffer cache hit ratio. Just add a new counter to BufferManager.xml, omitting the new file parameter Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" And start the collection: Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt You do not know which counters are in the Category Buffer Manager? Simple! Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters Let’s see one output file as shown below. It is ready to bulk insert into the SQL Server. As you can see, Powershell makes this process incredibly easy and fast. Do you want to see more examples? 
Visit my blog at Shell Your Experience You can find more about Laerte Junior over here: www.laertejuniordba.spaces.live.com www.simple-talk.com/author/laerte-junior www.twitter.com/laertejuniordba SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/ Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Powershell
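    Note that the Get-PerfCounterCategory and Set-CollectPerfCounter cmdlets above come from Laerte's library rather than from PowerShell itself. For quick ad-hoc collection without installing anything, the built-in Get-Counter and Export-Counter cmdlets (Windows PowerShell 2.0 and later) cover a similar need; the counter paths, interval, and output file below are assumptions (named SQL Server instances expose counters under MSSQL$InstanceName instead of SQLServer):

        # Sample two counters every 10 seconds, 60 times, and save a .blg for later analysis.
        $counters = '\Memory\Available MBytes', '\SQLServer:Buffer Manager\Page life expectancy'
        Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 60 |
            Export-Counter -Path C:\temp\perf.blg -FileFormat BLG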

    Read the article

  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy.  I consider him one of the most influential contributors to computer science of all time.  Unfortunately, most of the time I hear his name, I cringe.  This is because it’s typically somebody quoting a small portion of one of his famous statements on optimization: “premature optimization is the root of all evil.” I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context.  Optimization is important.  It is a critical part of every software development effort, and should never be ignored.  A developer who ignores optimization is not a professional.  Every developer should understand optimization – know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one. I want to start by discussing my own, personal motivation here.  I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: “You’re an idiot.  Premature optimization is the root of all evil.  This doesn’t matter.”  It didn’t matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take “many hours to complete.”  Even so, multiple people instantly jump to “it’s premature – it doesn’t matter.” This is a common thread I see.  For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth.  In fact, just about any question relating to a performance related issue gets this quote thrown at it immediately – whether it deserves it or not.  That being said, I did receive some positive comments and emails as well.  Many people want to understand how to optimize their code, approaches to take, tools and techniques they can use, and any other advice they can discover. First, lets get back to Knuth – I mentioned before that Knuth is being quoted out of context.  Lets start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.” Ironically, if you read Knuth’s original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization.  It was never a statement saying “don’t optimize”, but rather, “optimizing intelligently provides huge advantages.”  His approach had three benefits: “a) it doesn’t take long” … “b) the payoff is real”, c) you can “be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.” Looking at Knuth’s premise here, and reading that section of his paper, really leads to a few observations: Optimization is important  “he will be wise to look carefully at the critical code” Normally, 3% of your code – three lines out of every 100 you write, are “critical code” and will require some optimization: “we should not pass up our opportunities in that critical 3%” Optimization, if done well, should not be time consuming: “it doesn’t take long” Optimization, if done correctly, provides real benefits: “the payoff is real” None of this is new information.  
People who care about optimization have been discussing this for years – for example, Rico Mariani’s Designing For Performance (a fantastic article) discusses many of the same issues very intelligently. That being said, many developers seem unable or unwilling to consider optimization.  Many others don’t seem to know where to start.  As such, I’m going to spend some time writing about optimization – what is it, how should we think about it, and what can we do to improve our own code.

    Read the article

  • Scream if you want to go faster

    - by simonsabin
    My session for 24 Hours of PASS on high-performance functions will be starting at 11:00 GMT; that's midnight for folks in the UK. To attend, follow this link https://www.livemeeting.com/cc/8000181573/join?id=N5Q8S7&role=attend&pw=d2%28_KmN3r The rest of the sessions can be found here http://www.sqlpass.org/24hours/2010/Sessions/ChronologicalOrder.aspx So far the sessions have been great, so no pressure :( See you there in 4.5 hrs... (read more)

    Read the article

  • SQLAuthority News – SQL Server Performance Series Hyderabad / Pune – Nov/Dec 2010

    - by pinaldave
    Just a quick note that the SQL Server Performance Tuning and Optimizations Seminar series, which I am offering at Hyderabad and Pune, is almost sold out. Read the details of the earlier successful seminar conducted at Colombo, Sri Lanka over here. Hyderabad: Nov 27-28, 2010 (last 3 seats left), Best Western Amrutha Castle, 5-9-16, Opp. Secretariat, Saifabad, Khairatabad, Hyderabad, Andhra Pradesh. Pune: Dec 04-05, 2010 (last 6 seats left), location TBA as we are looking for a larger-capacity room. I promise that this is going to be great fun, as these sessions are very different from any of the usual sessions you have ever attended. These sessions are absolutely interactive, and all the attendees will feel part of the event. As larger groups are not convenient, we have limited these seminars to a very small group of people. This way attendees can go to the instructors at any time and feel connected. This 2-day seminar will cover the best of the best concepts and practices from popular courses offered by Solid Quality Mentors. Instead of learning theory only, the seminar focuses on providing real-world experience by using demos and scenarios derived from customer engagements. The seminar is uniquely structured and well thought out. Sessions are discussion-based and are designed to be an interactive gateway between the instructor and the participants for an optimal learning experience. The seminar is intended to be immersion-based, where participants will have plenty of opportunities to get deeply involved in the concepts presented by the instructor. Agenda of the event: To join the seminars, drop me an email. My email address is pinal “at” SQLAuthority.com and IndiaInfo “at” SolidQ.com. If you mention SQLAuthority.com in the title, you will get a special discount on the listed price. Yes, a sure 20%, I promise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Poor System performance on my machine running Ubuntu 12.04(Beta 2 updated to the present moment)

    - by Mohammad Kamil Nadeem
    Why is it that my system dies when multitasking (it has been happening since 11.10) on Ubuntu 11.10 (Unity), Kubuntu 11.10 (KDE) and Deepin Linux, which is based on 11.10 (GNOME Shell)? The thing is that I thought with 12.04 I would get performance like I used to get on 11.04, on which everything used to run fine without any lag or hiccups. The same lagging (the browser starts to stutter, and there is an increased delay in launching the Dash and applications) is happening on 12.04: http://i.imgur.com/YChKB.png and http://i.imgur.com/uyXLA.png . I believe that my system configuration is sufficient for running Ubuntu, as you can check here: http://paste.ubuntu.com/929734/ . I had the Google voice and chat plugin installed on 12.04, so someone suggested that I should remove it and see if the performance improves, but no such respite (I am having this on multiple operating systems based on Ubuntu 11.10, as I have mentioned above). On a friend's suggestion I ran a memory test through Partition Magic and my system passed fine. One more thing I would like to know is why, when I have 2 GB of RAM and 2.1 GB of swap, my system starts to lag and run poorly when RAM consumption goes above 500 MB. If you require any more information I will gladly provide it.

    Read the article

  • Polygons vs sprites rendering performance in Unity for windows phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, with 111 (no more, no less) sprites being updated each frame. I have a large overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: I'm running it with minimum quality settings, and I tried disabling and enabling V-Sync. Finally I managed to get 60 FPS, but I still have a large overhead. I believe I should have more than 60 FPS for so few sprites. Moreover, I still have to implement the game logic on top of this, so I'd like some headroom in my FPS to be able to work. I was wondering if it would be better in terms of performance to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Has anyone tried to check the performance differences between sprites and actual mesh renderers in Unity when it comes to phones? If so, what could be the best option in that case? FYI: I'm using the Windows Phone 8 emulator in Visual Studio, and I have a compliant computer for that, so it should normally reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder what is most efficient on Windows Phone 8: sprites or mesh renderers?

    Read the article

  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected to encrypt my home folder during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve performance. I could not find much information about the overhead (CPU or drive) associated with home partition encryption. I ran the following, writing to my home partition as well as the the mounted Windows partition: dd if=/dev/zero of=~/dummy bs=512 count=10240 dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240 The first returned 2.4MB/s and the second returned 2.5MB/s. Can I therefore deduce that there is very little overhead to home folder encryption? I'm not sure if the different filesystems will make any difference (/ and /home are ext3). Update 1 I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the dd as above are astounding: ~: 2.4 MB/s /tmp: 42.6 MB/s Comments please? The reason I am asking this is that disk access on the netbook is noticeably slow. Update 2 I timed each of the dd operations with time: ~: real 0m2.217s user 0m0.028s sys 0m2.176s /tmp: real 0m0.152s user 0m0.012s sys 0m0.136s See also: discussion on UbuntuForums.org and bug report Edit: Output of mount: /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) fusectl on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER) `
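    A hedged note on the methodology: bs=512 count=10240 writes only 5 MB, and dd reports the rate before the data is guaranteed to be on disk, so page-cache and sync behaviour can dominate the comparison. Asking dd to flush and using a larger write gives numbers that are easier to compare between the encrypted home and the unencrypted /tmp (the file names are throwaways):

        # ~256 MB, and dd waits for the data to reach disk before reporting the rate
        dd if=/dev/zero of=~/dummy bs=1M count=256 conv=fdatasync
        dd if=/dev/zero of=/tmp/dummy bs=1M count=256 conv=fdatasync
        rm ~/dummy /tmp/dummy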

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s) and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. Then I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate the new position (translation and rotation) of the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization that I could think of is a look-up table for rotations so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
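    On the look-up table idea mentioned in the question, here is a small self-contained C++ sketch of precomputed whole-degree sine/cosine values (the 1-degree granularity is an assumption; profile before concluding that the trig itself, rather than memory traffic or the buffer upload, is the bottleneck):

        #include <cmath>
        #include <vector>

        // Precomputed sin/cos for whole-degree angles, 0..359.
        struct RotationTable {
            std::vector<float> s, c;
            RotationTable() : s(360), c(360) {
                for (int d = 0; d < 360; ++d) {
                    const float r = d * 3.14159265358979f / 180.0f;
                    s[d] = std::sin(r);
                    c[d] = std::cos(r);
                }
            }
            // Rotate (x, y) around the origin by an integer number of degrees.
            void rotate(int deg, float& x, float& y) const {
                const int d = ((deg % 360) + 360) % 360;
                const float nx = x * c[d] - y * s[d];
                const float ny = x * s[d] + y * c[d];
                x = nx;
                y = ny;
            }
        };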

    Read the article

  • How to check system performance?

    - by Woltan
    Hi all, I am a new Ubuntu user and really like the look and the features of the OS. However, I have a feeling that the performance could be better. By that I mean: somehow scrolling web pages within Firefox seems laggy. I do not know how I should measure it, but there is a difference. Not that it is unusable, but it is aggravating. Java programs are running really slowly. As a comparison (I know it is not a fair one), I tried to run a game using Wine. The graphics settings under Windows were much higher (1600x1200) with a high level of detail, while on Ubuntu, with the lowest level of detail, 1024x768 was the maximum. (My graphics card is a GeForce GTS 450 with 1 GB of RAM.) Coming to my question: is there a way to measure the performance of 3D acceleration, Java applets, Firefox scrolling, etc. with a tool and compare it with, let's say, a Windows OS or other users having almost the same hardware? Maybe it is a setup issue where some fundamental drivers are missing or something!? Any help, link, or suggestion is appreciated! Cheerio, Woltan

    Read the article

  • Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

    - by user702295
    Hello! There is a new document available: Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1). Table reorganization can be set up to run automatically in version 7.3.1.5. In version 12.2.2 we run the TABLE_REORG.CHECK_REORG function at every appserver restart. If the function recommends a reorg, then we strongly encourage you to reorg the database object. This is documented in the official docs. In versions 7.3.1.3 and 7.3.1.4, the TABLE_REORG module exists and can be used. It has two main functions that are documented in the Implementation Guide Supplement, Release 7.3, Part No. E26760-03, chapter 4. In short, if you are using version 7.3.1.3 or higher, you can check for the need to run a reorg by doing the following 2 steps: 1. Run TABLE_REORG.CHECK_REORG('T'); 2. Check the table LOG_TABLE_REORG for recommendations. If you are on a version before 7.3.1.3, you will need to follow the instructions below to determine if you need to do a manual reorg. How to determine if a table reorg is needed: 1. It is strongly encouraged by DEV that you gather statistics on the required table. The preferred percentage for the gather is 100%. 2. Run the following SQL to evaluate how a table reorg might affect Primary Key (PK) based access: SELECT ui.index_name, trunc((ut.num_rows/ui.clustering_factor)/(ut.num_rows/ut.blocks),2) FROM user_indexes ui, user_tables ut, user_constraints uc WHERE ui.table_name=ut.table_name AND ut.table_name=uc.table_name AND ui.index_name=uc.index_name AND UC.CONSTRAINT_TYPE='P' AND ut.table_name=upper('&enter_table_name'); 3. Based on the result: VALUE ABOVE 0.75 - DOES NOT REQUIRE REORG; VALUE BETWEEN 0.5 AND 0.75 - REORG IS RECOMMENDED; VALUE LOWER THAN 0.5 - IT IS HIGHLY RECOMMENDED TO REORG

    Read the article
