Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 55/572

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. As the posting was based on real work and real-life results, I thought it might be interesting to you too. Here you can read what I did, how I did it, and what the results were.

    Introduction. Caching is not effective unless you measure it. As MVP Henn Sarv said in one of his sessions, you get what you measure, and he is right. Lately I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough for the environment this system lives in. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all of these counters have the same meaning as the similar counters under pure ASP.NET applications.

    Measured counters. I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS): Total number of objects added – how many objects were added to the output cache; Total object discards – how many objects were deleted from the output cache; Cache hit count – how many times requests were served from the cache; Cache hit ratio – the percentage of requests served from the cache. The first three counters are cumulative, while the last one is a ratio. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load, etc. before and after caching).

    Measuring process. The measuring described here started from a freshly restarted web server. I measured the application over 12 hours, covering the time ranges when users are most active. The range does not include late evening hours and night, because there is nothing to measure during those hours. During measuring we performed no maintenance or administrative tasks on the server; all tasks performed were related to usual daily content management and content monitoring. We also had no advertisement campaigns or other promotions running at the same time.

    The results. You can see the results in the following graph (series: Total number of objects added, Total object discards, Cache hit count, Cache hit ratio). Adds and discards grow at the same rate. This is good, because the cache expires and less popular items are not kept in memory. If there were more popular content, these lines might have a bigger distance between them. Cache hit count grows faster, which shows that more and more content is served from the cache. In the current case it shows that the cache is filled optimally, and we could do even better if we tuned the caches more. The site also contains pages that are discarded when some subsite changes (a page is added, modified or deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count, because during a day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good: the suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7%. This is because new pages are requested most often, and after new pages are added the older ones are requested only occasionally, so they get discarded from the cache and only some of them return to the cache later. Although this may also indicate the need for additional SEO work, the result is very good in technical terms.

    Conclusion. Measuring the ASP.NET output cache is not a complex thing to do, and you can start by measuring the performance of the cache itself. Later you can move on and measure the effect of caching on other counters such as disk I/O, network, processors, etc. What you have to achieve is an optimal cache that is not full of items asked for only a couple of times per day (you can avoid this by not using overly long cache durations). After some tuning you should be able to boost the cache hit ratio up to at least 85%.
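    If you prefer to sample these values programmatically instead of watching them in Performance Monitor, a minimal C# sketch using the PerformanceCounter class could look like the one below. The category, counter and instance names used here ("ASP.NET Applications", "Output Cache Hits", "__Total__", etc.) are assumptions for illustration; the exact names differ between plain ASP.NET and SharePoint WCMS, so check them in Performance Monitor on your own server first.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class OutputCacheSampler
        {
            static void Main()
            {
                // Assumed names -- verify the category/counter/instance names in
                // Performance Monitor; SharePoint publishing cache counters live
                // in a different category than the plain ASP.NET ones.
                const string category = "ASP.NET Applications";
                const string instance = "__Total__";
                string[] counterNames =
                {
                    "Output Cache Entries",
                    "Output Cache Hits",
                    "Output Cache Misses",
                    "Output Cache Hit Ratio"
                };

                foreach (string name in counterNames)
                {
                    // Three-argument constructor opens the counter read-only.
                    using (var counter = new PerformanceCounter(category, name, instance))
                    {
                        // Rate and ratio counters need two samples; the first
                        // NextValue() call only primes the counter.
                        counter.NextValue();
                        Thread.Sleep(1000);
                        Console.WriteLine("{0}: {1:F2}", name, counter.NextValue());
                    }
                }
            }
        }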

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation. Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations: read/write databases where users are constantly updating data; databases that execute calculations around the clock; databases that frequently update and recalculate dense members; data loads that are poorly designed; databases that contain a significant number of Dynamic Calc and Store members; and databases that use an isolation level of uncommitted access with commit block set to zero.

    There are two types of data block fragmentation: free space tracking, which is measured using the Average Fragmentation Quotient statistic, and block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient. The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and is either appended at the end of the file or placed in another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have, and therefore the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio. The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load data and calculate data blocks, the sequence can start to fall out of order, because when a block is written it may not fit back into the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks are and the more performance is affected. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1, e.g. 0.01032828, means the data blocks are getting further out of order relative to the outline order.

    Eliminating Data Block Fragmentation. Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure: 1. Implicit restructures: an implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase creates new .PAG files, restructuring the data blocks in them, and regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures. 2. Explicit restructures: an explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation. The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts in the database properties. There are no statistics for empty blocks; the only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreases, the difference is the number of empty blocks that existed in the database.

    Read the article

  • Why is mesh baking causing huge performance spikes?

    - by jellyfication
    A couple of seconds into gameplay on my Android device, I see huge performance spikes caused by "Mesh.Bake Scaled Mesh PhysX CollisionData". In my game, a whole level is a parent object containing multiple rigidbodies with mesh colliders. Every FixedUpdate(), the parent object rotates around the player, and rotating the world causes mesh scaling. Here is the code that handles world rotation:

    private void Update()
    {
        input.update();
        Vector3 currentInput = input.GetDirection();

        worldParent.rotation = initialRotation;
        worldParent.DetachChildren();
        worldParent.position = transform.position;
        world.parent = worldParent;

        worldParent.Rotate(Vector3.right, currentInput.x * 50f);
        worldParent.Rotate(Vector3.forward, currentInput.z * 50f);
    }

    How can I get rid of the mesh scaling? Mesh.Bake PhysX seems to take effect only after some time. Is it possible to disable this behaviour? The profiler looks like this: the bottom-left panel shows data before the spikes, the right one after.

    Read the article

  • Oracle President Mark Hurd Highlights How Data-driven HR Decisions Help Maximize Business Performance

    - by Scott Ewart
    HR Intelligence Can Help Companies Win the Race for Talent Today during a keynote at Taleo World 2012, Oracle President Mark Hurd outlined the ways that executives can use HR intelligence to help them make better business decisions, shape the future of their organizations and improve the bottom line. He highlighted that talent management is one of the top three focus areas for CEOs, and explained how HR intelligence can help drive decisions to meet business objectives. Hurd urged HR leaders to use data to make fact-based decisions about hiring, talent management and succession to drive strategic growth. To win the race for talent, Hurd explained that organizations need powerful technology that provides the fact-based insight needed to proactively manage talent, drive strategic initiatives that promote innovation, and enhance business performance. To view the full story and press release, click here.

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspx

    SQL offers many different methods to produce the same results. There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set. Sometimes these disputes even come to blows… well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient. For the queries below, I downloaded the test database from SQLSkills: http://www.sqlskills.com/sql-server-resources/sql-server-demos/. There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554. Our result set contains 6,706 records.

    The following queries produce an identical result set; it contains aggregate payment information from the dbo.payment table for each member who has made more than one payment, plus the first and last name of the member from the dbo.member table.

    /*************/
    /* Sub Query */
    /*************/
    SELECT  a.[Member Number] ,
            m.lastname ,
            m.firstname ,
            a.[Number Of Payments] ,
            a.[Average Payment] ,
            a.[Total Paid]
    FROM    ( SELECT    member_no 'Member Number' ,
                        AVG(payment_amt) 'Average Payment' ,
                        SUM(payment_amt) 'Total Paid' ,
                        COUNT(Payment_No) 'Number Of Payments'
              FROM      dbo.payment
              GROUP BY  member_no
              HAVING    COUNT(Payment_No) > 1
            ) a
            JOIN dbo.member m ON a.[Member Number] = m.member_no

    /***************/
    /* Cross Apply */
    /***************/
    SELECT  ca.[Member Number] ,
            m.lastname ,
            m.firstname ,
            ca.[Number Of Payments] ,
            ca.[Average Payment] ,
            ca.[Total Paid]
    FROM    dbo.member m
            CROSS APPLY ( SELECT    member_no 'Member Number' ,
                                    AVG(payment_amt) 'Average Payment' ,
                                    SUM(payment_amt) 'Total Paid' ,
                                    COUNT(Payment_No) 'Number Of Payments'
                          FROM      dbo.payment
                          WHERE     member_no = m.member_no
                          GROUP BY  member_no
                          HAVING    COUNT(Payment_No) > 1
                        ) ca

    /********/
    /* CTEs */
    /********/
    ;WITH    Payments
              AS ( SELECT   member_no 'Member Number' ,
                            AVG(payment_amt) 'Average Payment' ,
                            SUM(payment_amt) 'Total Paid' ,
                            COUNT(Payment_No) 'Number Of Payments'
                   FROM     dbo.payment
                   GROUP BY member_no
                   HAVING   COUNT(Payment_No) > 1
                 ),
            MemberInfo
              AS ( SELECT   p.[Member Number] ,
                            m.lastname ,
                            m.firstname ,
                            p.[Number Of Payments] ,
                            p.[Average Payment] ,
                            p.[Total Paid]
                   FROM     dbo.member m
                            JOIN Payments p ON m.member_no = p.[Member Number]
                 )
    SELECT  *
    FROM    MemberInfo

    /*************************/
    /* SELECT with Grouping  */
    /*************************/
    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no ,
            m.lastname ,
            m.firstname
    HAVING  COUNT(Payment_No) > 1

    We can see what is going on in SQL’s brain by looking at the execution plan. The Execution Plan shows which steps SQL executes, in what order, and what percentage of batch time each query takes. So, if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them and how it renders the Execution Plan. We can settle this once and for all. Here is what SQL did with these queries:

    Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them. Everybody is right… I guess we can all finally go to lunch together! But wait a second, I may not be a fighter, but I AM an instigator. Let’s see how a table variable stacks up. Here is the code I executed:

    /********************/
    /*  Table Variable  */
    /********************/
    DECLARE @AggregateTable TABLE
        (
          member_no INT ,
          AveragePayment MONEY ,
          TotalPaid MONEY ,
          NumberOfPayments MONEY
        )

    INSERT  @AggregateTable
            SELECT  member_no 'Member Number' ,
                    AVG(payment_amt) 'Average Payment' ,
                    SUM(payment_amt) 'Total Paid' ,
                    COUNT(Payment_No) 'Number Of Payments'
            FROM    dbo.payment
            GROUP BY member_no
            HAVING  COUNT(Payment_No) > 1

    SELECT  at.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            at.NumberOfPayments 'Number Of Payments' ,
            at.AveragePayment 'Average Payment' ,
            at.TotalPaid 'Total Paid'
    FROM    @AggregateTable at
            JOIN dbo.member m ON m.member_no = at.member_no

    In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query. Here’s what I got:

    Since we first insert into the table variable and then read from it, the Execution Plan renders 2 steps. BUT the combination of the 2 steps is only 22% of the batch. It is actually faster than the other methods, even though it is treated as 2 separate queries in the Execution Plan. The argument I often hear against table variables is that SQL only estimates 1 row for the table size in the Execution Plan. While this is true, the estimate does not come into play until you read from the table variable. In this case, the table variable had 6,706 rows, but it still outperformed the other queries. People argue that table variables should only be used for hash or lookup tables. The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries.

    If anyone does volume testing on this theory, I would be interested in the results. My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is.

    Coding SQL is a matter of style. If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate. If you have a company standard, I strongly recommend you follow it. If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

    Read the article

  • Crystal Reports: 5 Tests for Top Performance

    Your masterpiece report is now complete. It doesn't just meet your customer’s expectations, it blows them out of the water. All they want is a beautifully summarized report that can be displayed in a myriad of ways. Then disaster strikes! You run the report for a full month against the live database, instead of the two days' worth of test data you used during development, and your report’s runtime goes from twenty seconds to two hours. Every Crystal Reports developer has experienced this situation, and it can be one of the most frustrating aspects of report design. Thankfully there are a variety of things that can be done to combat bad performance, any one of which can reap huge benefits...

    Read the article

  • Is there any performance comparison between Perl web frameworks?

    - by DVK
    I have seen mentions (which sounded like unsubstantiated opinions, and dated ones at that) that Embperl is the fastest Perl web framework. I was wondering if there's a consensus on the relative speed of the major stable Perl web frameworks, or ideally, some sort of fact-based performance comparisons between implementations of the same sample webapps, or individual functionalities (e.g. session handling or form data processing), etc...?

    Read the article

  • JBoss AS Performance Tuning by Francesco Marchioni, reviewed by Gomes Rodrigues Antonio

    Hello, you can find at http://java.developpez.com/livres/?p...L9781849514026 a review of the excellent book "JBoss AS Performance Tuning". [IMG]http://images-eu.amazon.com/images/P/184951402X.01.LZZZZZZZ.jpg[/IMG] Since it covers more than just JBoss tuning, I prefer to put this discussion here. About the book: it covers building a load test with JMeter, tuning JBoss, and profiling the application, the JVM, the OS, and more. It reads quite well and contains a good amount of information. If you have an opinion on this book, I would be interested to hear it...

    Read the article

  • Performance: recursion vs. iteration in Javascript

    - by mastazi
    I have recently read some articles (e.g. http://dailyjs.com/2012/09/14/functional-programming/) about the functional aspects of JavaScript and the relationship between Scheme and JavaScript (the latter was influenced by the former, which is a functional language, while its O-O aspects are inherited from Self, a prototype-based language). However, my question is more specific: I was wondering if there are metrics on the performance of recursion vs. iteration in JavaScript. I know that in some languages (where iteration performs better by design) the difference is minimal because the interpreter/compiler converts recursion into iteration; however, I guess this is probably not the case for JavaScript, since it is, at least partially, a functional language.

    Read the article

  • Book of the month: "Achieving Extreme Performance with Oracle Exadata"

    - by Lajos Sárecz
    I found the photo above and the news on Luis Moreno Campos's ocpdba oracle weblog: the first book about Oracle Exadata has been published! Judging by the table of contents alone, the book looks very promising. I think it is required reading for everyone who uses or will use the Oracle Exadata Database Machine, or who is simply interested in the Oracle technologies that give this era-defining database server its outstanding capabilities. The book is available from the following source: Achieving Extreme Performance with Oracle Exadata (Osborne ORACLE Press Series), Rick Greenwald. By the way, in just half an hour there will be an interesting webcast about how to safely run mixed workloads on an Exadata server. You can still register!

    Read the article

  • Supercharging the Performance of Your Front-Office Applications @ OOW'12

    - by Sanjeev Sharma
    [Re-posted from here.] You can increase customer satisfaction, brand equity, and ultimately top-line revenue by deploying  Oracle ATG Web Commerce, Oracle WebCenter Sites, Oracle Endeca applications, Oracle’s  Siebel applications, and other front-office applications on Oracle Exalogic, Oracle’s combination  of hardware and software for applications and middleware. Join me (Sanjeev Sharma) and my colleague, Kelly Goetsch, at the following conference session at Oracle Open World to find out how Customer Experience can be transformed with Oracle Exalogic: Session:  CON9421 - Supercharging the Performance of Your Front-Office Applications with Oracle ExalogicDate: Wednesday, 3 Oct, 2012Time: 10:15 am - 11:15 am (PST)Venue: Moscone South (309)

    Read the article

  • Is it possible to simulate load balancers, and how much performance gain do they provide?

    - by Lakhlani Prashant
    I have a website that runs on IIS (an ASP.NET application; some of the sites are also in DotNetNuke), and we are expecting higher traffic on some of the sites, so we are planning to add a load balancer. Before doing that, we want to know whether it is worth it. So: is it possible to simulate a load balancer, and how much performance gain would it provide?

    Read the article

  • No cache and Google AdSense performance

    - by Luca
    I'm developing a page where I need to prevent browsers from caching the JavaScript. I've added this header:

    <?php
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Pragma: no-cache');
        header('Expires: 0');
    ?>

    After this, browsers no longer cached the JavaScript, which sorted out the issue, but at the same time I noticed a drop in Google AdSense RPM. When I removed the added code, the AdSense RPM returned to a good value. So, how can I prevent JavaScript caching without interfering with AdSense performance?

    Read the article

  • What's the performance penalty of using isKindOfClass in Objective C?

    - by durcicko
    I'm considering introducing: if ([myInstance isKindOfClass:[SomeClass class]]) { do something...} into a piece of code that gets called pretty often. Will I introduce a significant performance penalty? In Objective-C, is there a quicker way of assessing whether a given object instance is of a certain class? For example, is the following quicker? (I realize the test is somewhat different) if (myInstance.class == [SomeClass class]) { do something else...}

    Read the article

  • Performance-Based Management Stinks

    - by andyleonard
    Introduction This post is the forty-eighth part of a ramble-rant about the software business. The current posts in this series can be found on the series landing page . This post is about Performance-Based Management (PBM). Almost… In Mere Christianity , C. S. Lewis refutes an argument with the following statement: It has every amiable quality except that of being useful. I feel the same way about PBM. I am a metrics person. I thrive – intellectually, emotionally, and economically – on business intelligence...(read more)

    Read the article

  • Google Analytics reverse transaction not working with sales performance

    - by prasad maganti
    We have a Google Analytics account and are trying to do a reverse transaction. We created a transaction on one date and a reverse transaction on another date. After a transaction, if we do a reverse transaction, it disappears from the transactions list. Is this the expected behavior or abnormal behavior? But if we check the same order data in the sales performance report, the reverse transaction is not reflected on the date we created the original transaction; it is reflected on the date we made the reverse transaction. It should not work like this: the reverse transaction should affect the same date as the original transaction.

    Read the article

  • Naming multi-instance performance counters in .NET

    - by Roger Lipscombe
    Most multi-instance performance counters in Windows seem to automatically(?) get a #n suffix if there's more than one instance with the same name. For example, if you look under the Process category in Perfmon, you'll see: ... dwm, explorer, explorer#1 ... I have two explorer.exe processes, so the second counter has #1 appended to its name. When I attempt to do this in a .NET application: I can create the category and register the instance (using the PerformanceCounterCategory.Create overload that takes a CounterCreationDataCollection), and I can open the counter for write and write to it. But when I open the counter a second time, it opens the same counter, which means I have two applications fighting over the counters. The documentation for PerformanceCounter.InstanceName states that # is not allowed in the name. So: how do I create multi-instance performance counters that are actually multiple instances, where the second (and subsequent) instances get #n appended to the name? I know that I can put the process ID (for example) in the instance name. That works, but has the unfortunate side effect that restarting the process results in a new PID, and Perfmon continues monitoring the old counter.
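    For reference, here is a minimal sketch of the PID-in-the-instance-name workaround mentioned above (the category and counter names such as "MyCompany Counters" are hypothetical): the category is created as multi-instance and InstanceLifetime is set to Process, so the instance is removed when the process exits instead of lingering in Perfmon. This avoids two processes fighting over one counter, although it still does not reproduce the automatic #n suffixing.

        using System;
        using System.Diagnostics;

        class MultiInstanceCounters
        {
            // Hypothetical names used for illustration only.
            const string CategoryName = "MyCompany Counters";
            const string CounterName = "Requests Processed";

            static PerformanceCounter OpenInstanceCounter()
            {
                // Creating a category requires administrative rights and is
                // usually done once, e.g. by an installer.
                if (!PerformanceCounterCategory.Exists(CategoryName))
                {
                    var counters = new CounterCreationDataCollection
                    {
                        new CounterCreationData(CounterName,
                                                "Number of requests processed by this process.",
                                                PerformanceCounterType.NumberOfItems64)
                    };
                    PerformanceCounterCategory.Create(CategoryName,
                                                      "Example multi-instance category.",
                                                      PerformanceCounterCategoryType.MultiInstance,
                                                      counters);
                }

                // One instance per process: put the PID in the instance name and
                // use Process lifetime so the instance is removed when the
                // process exits, rather than Perfmon tracking a stale instance.
                var process = Process.GetCurrentProcess();
                var counter = new PerformanceCounter
                {
                    CategoryName = CategoryName,
                    CounterName = CounterName,
                    InstanceName = string.Format("{0}_{1}", process.ProcessName, process.Id),
                    ReadOnly = false,
                    InstanceLifetime = PerformanceCounterInstanceLifetime.Process
                };
                counter.RawValue = 0;
                return counter;
            }
        }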

    Read the article

  • Does the choice of 3D modeling software used during asset creation affect performance at runtime?

    - by user134143
    Does the software used to create 3D assets (for game development, specifically) have an impact on the efficiency of the program? In other words, is it possible to reduce the operating footprint of an application merely by using alternative software during the production of 3D assets? If you use two different applications to create a three-dimensional image of a box, can one of them result in better performance if all aspects of the image are identical? Sorry if this question seems vague; I am attempting to get the information I need without causing unnecessary debate over specific software choices. Thank you.

    Read the article

  • PHP and performance

    - by Naif
    I always hear that PHP is for medium and small websites, whereas .NET and Java are for enterprise applications. My question is about PHP: why is PHP not considered a good option for enterprise web applications? Is it because, if the web application becomes bigger, PHP will be slower since it is an interpreted language? I know that the corporate world will choose .NET or J2EE because of the integration with their products, back-end services, and so on. However, if we just have PHP for building sites and web applications, how can we make it perform well on big sites? In short, is there a relationship between the performance of PHP and the size of the website? What are the factors that make PHP an inappropriate option for big sites?

    Read the article

  • KVM guest disk performance

    - by Alex
    My KVM guest does at most 200 MB/s, although the host easily does 700 MB/s (RAID 0 with 4 SSDs). Configuration: file-based storage (raw), cache mode none. The host has 24 cores, 96 GB RAM, Ubuntu 12.04.1 LTS and virt-manager. I suspect the CPU is the bottleneck (one core maxes out during hdparm). Has anyone experienced the same, or does anyone have an explanation? Edit: one more piece of info: the guest runs the same OS as the host (Ubuntu 12.04). The same poor disk performance was observed with Windows 2008 R2 and SUSE Enterprise Linux (9 or 10, I think). At most 1 guest was running.

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally, I guess, you'd use BlendState.AlphaBlend, because when you load your textures through the pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
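    For context, this is the sort of per-pixel premultiply pass described above for textures loaded at runtime (a minimal sketch assuming XNA 4.0 / MonoGame, where Texture2D.FromStream returns non-premultiplied data); skipping this loop and relying on a non-premultiplied blend state instead is exactly the trade-off being asked about.

        using System.IO;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class RuntimeTextureLoader
        {
            public static Texture2D LoadPremultiplied(GraphicsDevice device, string path)
            {
                Texture2D texture;
                using (var stream = File.OpenRead(path))
                {
                    texture = Texture2D.FromStream(device, stream);
                }

                // Pull the raw texels, premultiply RGB by alpha, and push them back.
                // This is the startup cost incurred per runtime-loaded texture.
                var pixels = new Color[texture.Width * texture.Height];
                texture.GetData(pixels);

                for (int i = 0; i < pixels.Length; i++)
                {
                    Color c = pixels[i];
                    pixels[i] = Color.FromNonPremultiplied(c.R, c.G, c.B, c.A);
                }

                texture.SetData(pixels);
                return texture;
            }
        }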

    Read the article
