Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • Load Testing Java Web Application - find TPS / Avg transaction response time

    - by Steve
    I would like to build my own load testing tool in Java, with the goal of being able to load test a web application I am building throughout the development cycle. The web application will be receiving server-to-server HTTP POST requests, and I would like to find its starting transactions-per-second (TPS) capacity along with the average response time. The POST request and response messages will be in XML (I don't think that's really applicable though :) ). I have written a very simple Java app to send transactions and count how many it was able to send in one second (1000 ms); however, I don't think this is the best way to load test. Really what I want is to send any number of transactions at exactly the same time - i.e. 10, 50, 100, etc. Any help would be appreciated! Oh, and here is my current test app code:

    Thread[] t = new Thread[1];
    for (int a = 0; a < t.length; a++) {
        t[a] = new Thread(new MessageLoop());
    }
    startTime = System.currentTimeMillis();
    System.out.println(startTime);
    for (int a = 0; a < t.length; a++) {
        t[a].start();
    }
    while ((System.currentTimeMillis() - startTime) < 1000) { }
    if ((System.currentTimeMillis() - startTime) > 1000) {
        for (int a = 0; a < t.length; a++) {
            t[a].interrupt();
        }
    }
    long endTime = System.currentTimeMillis();
    System.out.println(endTime);
    System.out.println("Total time: " + (endTime - startTime));
    System.out.println("Total transactions: " + count);

    private static class MessageLoop implements Runnable {
        public void run() {
            try {
                // Test number of transactions sent within one second
                while ((System.currentTimeMillis() - startTime) < 1000) {
                    // SEND TRANSACTION HERE
                    count++;
                }
            } catch (Exception e) { }
        }
    }
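    One way to fire a fixed number of requests at (almost) exactly the same instant is to park the worker threads on a CountDownLatch and release them together, then measure the elapsed time for the whole burst. The sketch below is only an illustration of that pattern, not the asker's code; sendTransaction() is a placeholder for the real XML-over-HTTP POST:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicLong;

    public class BurstLoadTest {
        public static void main(String[] args) throws InterruptedException {
            final int transactions = 50;                       // burst size to test (10, 50, 100, ...)
            ExecutorService pool = Executors.newFixedThreadPool(transactions);
            final CountDownLatch startGate = new CountDownLatch(1);
            final CountDownLatch endGate = new CountDownLatch(transactions);
            final AtomicLong totalLatencyMs = new AtomicLong();

            for (int i = 0; i < transactions; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            startGate.await();                 // every worker blocks here until released
                            long t0 = System.currentTimeMillis();
                            sendTransaction();                 // placeholder for the real HTTP POST
                            totalLatencyMs.addAndGet(System.currentTimeMillis() - t0);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            endGate.countDown();
                        }
                    }
                });
            }

            long burstStart = System.currentTimeMillis();
            startGate.countDown();                             // release the whole burst at once
            endGate.await();                                   // wait for every transaction to finish
            long elapsedMs = System.currentTimeMillis() - burstStart;

            System.out.println("Burst of " + transactions + " finished in " + elapsedMs + " ms");
            System.out.println("Approx. TPS: " + (transactions * 1000.0 / elapsedMs));
            System.out.println("Avg response time: " + (totalLatencyMs.get() / (double) transactions) + " ms");
            pool.shutdown();
        }

        private static void sendTransaction() {
            // stub: replace with the actual XML POST and response handling
        }
    }

    Running the same burst at increasing sizes (10, 50, 100, ...) until the average response time starts to degrade gives a rough idea of the starting TPS capacity.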

    Read the article

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer or server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports generated for CPU, memory, and IO. You can use DMVs, Extended Events, as well as Perfmon to trace the data. Keeping with the T-SQL Tuesday subject of reporting, this post is a visual tutorial for quickly configuring Data Collection and generating reports. From Books Online: The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace. Let us go over the visual tutorial on how quickly Data Collection can be configured. Expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the reports. Now let us see some additional screenshots of the reports. The reports are very self-explanatory but can be drilled down to get further details. Click on the image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
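    If you prefer T-SQL over the wizard, the system collection sets can also be inspected and started from msdb once the Management Data Warehouse is configured. A quick sketch (the collection set IDs can differ between instances, so check the SELECT before starting anything):

    -- List the system collection sets and whether they are currently running
    SELECT collection_set_id, name, is_running
    FROM msdb.dbo.syscollector_collection_sets;

    -- Start a collection set by ID (for example, Server Activity; verify the ID from the query above)
    EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 2;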

    Read the article

  • Modifying an HTML page to fix several "bugs" and add a next/previous function to an option dropdown

    - by Dennis Sylvian
    SOF, I've got a few problems plaguing me at the moment and am wondering if anyone could assist me with them. I'm trying to get Next Class | Previous Class to act as buttons so that when Next Class is clicked it will go to the next item in the dropdown list and for previous it would go to back one. There used to be a scroll bar that allowed me to scroll the main window left and right, it's missing because (I think it was to do with the scroll left and scroll right function) The footer at the bottom doesn't show correctly on mobile devices; for some reason it appears completely differently to as it does on a computer. The "bar" practically and the Scroll Left and Scroll buttons don't appear at all on mobile devices. The scroll left button is unable to be clicked for some reason, I'm unsure what I've done wrong. Refreshing the page resets the horizontal scroll position to far left (I'm pretty sure this relates to the scroll bar) I want to also find a way so that on mobile devices the the header will not show the placeholder image, however I can't work out what CSS media tag(s) I should be using. Latest: http://jsfiddle.net/pwv7u/ Smaller HTML <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>DATA DATA DATA DATA DATA DATA DATA DATA</title> <style type="text/css"> <!-- @import url("nstyle.css"); --> </style> <script src="jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready( function() { for (var i=0;i<($("table").children().length);i++){ if(readCookie(i)) $($($("table").children()[i]).children()[(readCookie(i))]).toggleClass('selected').siblings().removeClass('selected'); } $("tr").click(function(){ $(this).toggleClass('selected').siblings().removeClass('selected'); if(readCookie($(this).parent().index())){ if(readCookie($(this).parent().index())==$(this).index()) eraseCookie($(this).parent().index()); else{ eraseCookie($(this).parent().index()); createCookie($(this).parent().index(),$(this).index(),1); } } else createCookie($(this).parent().index(),$(this).index(),1); }); // gather CLASS info var selector = $('.class-selector').on('change', function(){ var id = this.value; if (id!==''){ scrollToAnchor(id); } }); $('a[id^="CLASS"]').each(function(){ var id = this.id, option = $('<option>',{ value: this.id, text:this.id }); selector.append(option); }); function scrollToAnchor(aid) { var aTag = $("a[id='" + aid + "']"); $('html,body').animate({ scrollTop: aTag.offset().top - 80 }, 1); } $("a.TOPJS").click(function () { scrollToAnchor('TOP'); }); $("a.KEYJS").click(function () { scrollToAnchor('KEY'); }); $("a.def").click(function () { $('#container').animate({ "scrollLeft": "-=204" }, 200); }); $("a.abc").click(function () { $("#container").animate({ "scrollLeft": "+=204" }, 200); }); function createCookie(name,value,days) { var expires; if (days) { var date = new Date(); date.setMilliseconds(0); date.setSeconds(0); date.setMinutes(0); date.setHours(0); date.setDate(date.getDate()+days); expires = "; expires="+date.toGMTString(); } else expires = ""; document.cookie = name+"="+value+expires+"; path=/"; } function readCookie(name) { var nameEQ = name + "="; var ca = document.cookie.split(';'); for(var i=0;i < ca.length;i++) { var c = ca[i]; while (c.charAt(0)==' ') c = c.substring(1,c.length); if (c.indexOf(nameEQ) === 0) return 
c.substring(nameEQ.length,c.length); } return null; } function eraseCookie(name) { createCookie(name,"",-1); } }); </script> </head> <body> <div id="header_container"> <div id="header"> <a href="http://site.x/" target="_blank"><img src="http://placehold.it/300x80"></a> <select class="class-selector"> <option value="">-select class-</option> </select> <div class="classcycler"> <a href="#TOP"><font color=#EFEFEF>Next Class</font></a> <font color=red>|</font> <a href="#TOP"><font color=#EFEFEF>Previous Class</font></a> </div> <div id="header1"> Semi-Transparent Image <a href="#TOP"><font color=#EFEFEF>Up to Top</font></a> | <a href="#KEY"><font color=#EFEFEF>Down to Key</font></a> </div> </div> </div> <a id="TOP"></a> <div id="container"> <table id="gradient-style"> <tbody> <thead> <tr> <th scope="col"><a id="CLASS1"></a>Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class<br>Test 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class Data 1</th> <th scope="col">Class 1<br>Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1<br>Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1</th> <th scope="col">Class 1 Class 1</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> (data text)</th> <th scope="col">title text</th> <th scope="col">text</th> <th scope="col">text</th> <th scope="col">title text</th> <th scope="col">title text</th> </tr> </thead> <tr class="ft3"><td>testing data</td><td>testing data</td><td>test</td><td>class b</td><td>test4</td><td><div align="left">data</div></td><td><div align="left"> </div></td><td><div align="left"></div></td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><tr> <tr class="f3"><td>test</td><td>test</td><td>test</td><td>class a</td><td>test2</td><td><div align="left"> </div></td><td><div align="left"></div></td><td><div align="left"></div></td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>test</td><tr> <thead> <tr> <th scope="col"><a id="CLASS2"></a>Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class<br>Test 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> 
<th scope="col">Class 2</th> <th scope="col">Class Data 2</th> <th scope="col">Class 2<br>Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2<br>Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2</th> <th scope="col">Class 2 Class 2</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> data text</th> <th scope="col">title text<br> (data text)</th> <th scope="col">title text</th> <th scope="col">text</th> <th scope="col">text</th> <th scope="col">title text</th> <th scope="col">title text</th> </tr> </thead> <tr class="ft3"><td>testing data</td><td>testing data</td><td>test</td><td>class f</td><td>test2</td><td><div align="left">data</div></td><td><div align="left"></div></td><td><div align="left">data</div></td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><tr> <tr><td>test</td><td>testing data</td><td>test</td><td>class f</td><td>test4</td><td><div align="left">data</div></td><td><div align="left"></div></td><td><div align="left"></div></td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><tr> <tr class="f3"><td>test</td><td>testing data</td><td>testing data</td><td>class d</td><td>test5</td><td><div align="left">data</div></td><td><div align="left"> </div></td><td><div align="left">data</div></td><td>test</td><td>test</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><tr> <tr><td>testing data</td><td>test</td><td>test</td><td>class f</td><td>test5</td><td><div align="left"></div></td><td><div align="left"></div></td><td><div align="left">data</div></td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing 
data</td><td>test</td><td>test</td><td>testing data</td><td>test</td><td>testing data</td><tr> <tr class="f2"><td>test</td><td>test</td><td>testing data</td><td>class a</td><td>test1</td><td><div align="left">data</div></td><td><div align="left"> </div></td><td><div align="left">data</div></td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>test</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>testing data</td><td>test</td><td>testing data</td><td>testing data</td><td>test</td><tr> </tbody> <tfoot> <tr> <th class="alt" colspan="34" scope="col"><a id="KEY"></a><img src="http://placehold.it/300x50"></th> </tr> <tr> <td colspan="34"><em><b>DATA DATA</b> - DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA </em></td> </tr> <tr> <td class="alt" colspan="34"><em><b>DAT DATA</b> - DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA DATA </em></td> </tr> </tfoot> </table> </div> <div id="footer_container"> <div id="footer"> <a href="http://site.x/" target="_blank"><img src="http://placehold.it/300x80"></a> <div class="footleft"> <a class="def" href="javascript: void(0);"><font color="#EFEFEF">Scroll Left</font></a> </div> <div id="footer1"> <font color="darkblue">Semi-Transparent Image</font> <i>Copyright &copy; 2013 <a href="http://site.x/" target="_blank" style="text-decoration: none"><font color=#ADD8E6>site</font></a>.</i> </div> <div id="footer2"> <i>All Rights Reserved.</i> </div> <div class="footright"> <a class="abc" href="javascript: void(0);"><font color="#EFEFEF">Scroll Right</font></a> </div> </div> </div> </body> </html> CSS gradient-style * { white-space: nowrap; } #header .class-selector { top: 10px; left: 20px; position: fixed; } #header .classcycler { top: 45px; left: 20px; position: fixed; font-size:20px; } body { line-height: 1.6em; background-color: #535353; overflow-x: scroll; } #gradient-style { font-family: "Lucida Sans Unicode", "Lucida Grande", Sans-Serif; font-size: 12px; margin: 0px; width: 100%; text-align: center; border-collapse: collapse; } #gradient-style th { font-size: 13px; font-weight: normal; line-height:250%; padding-left: 5px; padding-right: 5px; background: #535353 url('table-images/gradhead.png') repeat-x; border-top: 1px solid #fff; border-bottom: 1px solid #fff; color: #ffffff; } #gradient-style th.alt { font-family: "Times New Roman", Serif; text-align: left; padding: 10px; font-size: 26px; } #gradient-style td { padding-left: 5px; padding-right: 5px; border-bottom: 1px solid #fff; border-left: 1px solid #fff; border-right: 1px solid #fff; color: #00000; border-top: 1px solid #fff; background: #FFF url('table-images/gradback.png') repeat-x; } #gradient-style tr.ft3 td { color: #00000; background: #99cde7 url('table-images/gradoverallstudent.png') repeat-x; font-weight: bold; } #gradient-style tr.f1 td { color: #00000; background: #99cde7 url('table-images/gradbeststudent.png') repeat-x; } #gradient-style tr.f2 td { color: #00000; background: #b7e2b6 url('table-images/gradmostattentedstudent.png') repeat-x; } #gradient-style tr.f3 td { color: #00000; background: #a9cd6c 
url('table-images/gradleastlatestudtent.png') repeat-x; } #gradient-style tfoot tr td { background: #6FA275; font-size: 12px; color: #000; padding: 10; text-align: left; } #gradient-style tbody tr:hover td, #gradient-style tbody tr.selected td { background: #d0dafd url('table-images/gradhover.png') repeat-x; color: #339; } body { margin: 0; padding: 0; } #header_container { background: #000000 url('table-images/gradhead.png') repeat-x; border: 0px solid #666; height: 80px; left: 0; position: fixed; width: 100%; top: 0; } #header { position: relative; margin: 0 auto; width: 500px; height: 100%; text-align: center; color: #0c0aad; } #header1 { position: absolute; width: 125%; top: 50px; } #container { margin: 0 auto; overflow: auto; padding: 80px 0; width: 100%; } #content { } #footer_container { background: #000000 url('table-images/gradhead.png') repeat-x; border: 0px solid #666; bottom: 0; height: 95px; left: 0; position: fixed; width: 100%; } #footer { position: relative; margin: 0 auto; height: 100%; text-align: center; color: #FFF; } #footer1 { position: absolute; width: 103%; top: 50px; } #footer2 { position: absolute; width: 110%; top: 70px; } #footer .footleft { top: 45px; left: 2%; position: absolute; font-size:20px; } #footer .footright { top: 45px; right: 2%; position: absolute; font-size:20px; }
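    For the first problem (making "Next Class" | "Previous Class" step through the dropdown), one possible approach is to move the select to the adjacent option and reuse the existing change handler so the page scrolls to the chosen class. This is only a rough sketch against the markup above, wired up inside the existing $(document).ready block after the options have been appended to .class-selector:

    $('.classcycler a').first().click(function () {        // "Next Class" link
        var sel = $('.class-selector');
        var next = sel.find('option:selected').next('option');
        if (next.length) {
            sel.val(next.val()).change();                   // fires the existing scrollToAnchor handler
        }
        return false;
    });
    $('.classcycler a').last().click(function () {         // "Previous Class" link
        var sel = $('.class-selector');
        var prev = sel.find('option:selected').prev('option');
        if (prev.length && prev.val() !== '') {             // skip the "-select class-" placeholder
            sel.val(prev.val()).change();
        }
        return false;
    });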

    Read the article

  • Use jQuery load to load content in a div?

    - by Khalid Omar
    Simply, I'm doing a test. I have a div called test, and an MVC action in the client controller. The view and the controller: public string testout() { return DateTime.Now.ToString(); } And I'm using jQuery to update the div: $("#B1").live("click", function() { $("#test").load("/client/testout"); return false; }); The first time I click the button I see the date and time in the test div; the second time I click the button, nothing changes.
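    This is usually browser caching of the GET request that .load() issues (Internet Explorer in particular is aggressive about caching identical URLs). A minimal sketch of two common fixes - turning off jQuery's AJAX cache globally, or appending a unique query-string value per click (the nocache parameter name is just an example):

    // Option 1: disable caching for all jQuery AJAX/load requests
    $.ajaxSetup({ cache: false });

    // Option 2: bust the cache on this call only with a timestamp parameter
    $("#B1").live("click", function () {
        $("#test").load("/client/testout?nocache=" + new Date().getTime());
        return false;
    });

    On the server side, decorating the controller action with an OutputCache attribute that disables caching is another option.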

    Read the article

  • Practical Performance Monitoring and Tuning Event

    - by Andrew Kelly
    For any of you who may be interested, or who know someone in the market for a performance monitoring and tuning class, I have just the ticket for you. It's a 3-day event that will be held in Atlanta, GA on January 25th to the 27th, 2011. For those of you that know me or have been to my sessions, you realize I like to provide more than just classroom theory and like to teach real-world and, above all, practical methodology when it comes to performance in SQL Server. This class covers all the essentials...(read more)

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for its key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified and high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions (supporting current and emerging initiatives), Extreme Performance (even higher performance than ever before), and Fast Time-to-Value (higher IT productivity and simplified solutions). With the new capabilities in Oracle Data Integrator 12c, customers can benefit from: Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger. Increased performance when executing data integration processes due to improved parallelism. Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c. Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering. Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion. Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance. Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from: Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases. Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE 15.7, MySQL NDB Cluster 7.2, and MySQL 5.6, as well as integration with Oracle Coherence. Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover. Enhanced security for credentials and encryption keys using Oracle Wallet. Real-time replication for databases hosted on public cloud environments supported by third-party clouds.
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • SQL SERVER – Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2

    - by Pinal Dave
    This is the second part of the series Incremental Statistics. Here is the index of the complete series. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 In part 1 we have understood what is incremental statistics and now in this second part we will see a simple example of incremental statistics. This blog post is heavily inspired from my friend Balmukund’s must read blog post. If you have partitioned table and lots of data, this feature can be specifically very useful. Prerequisite Here are two things you must know before you start with the demonstrations. AdventureWorks – For the demonstration purpose I have installed AdventureWorks 2012 as an AdventureWorks 2014 in this demonstration. Partitions – You should know how partition works with databases. Setup Script Here is the setup script for creating Partition Function, Scheme, and the Table. We will populate the table based on the SalesOrderDetails table from AdventureWorks. -- Use Database USE AdventureWorks2014 GO -- Create Partition Function CREATE PARTITION FUNCTION IncrStatFn (INT) AS RANGE LEFT FOR VALUES (44000, 54000, 64000, 74000) GO -- Create Partition Scheme CREATE PARTITION SCHEME IncrStatSch AS PARTITION [IncrStatFn] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY]) GO -- Create Table Incremental_Statistics CREATE TABLE [IncrStatTab]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [ModifiedDate] [datetime] NOT NULL) ON IncrStatSch(SalesOrderID) GO -- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate]) SELECT     [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate] FROM       [Sales].[SalesOrderDetail] WHERE      SalesOrderID < 54000 GO Check Details Now we will check details in the partition table IncrStatSch. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO You will notice that only a few of the partition are filled up with data and remaining all the partitions are empty. Now we will create statistics on the Table on the column SalesOrderID. However, here we will keep adding one more keyword which is INCREMENTAL = ON. Please note this is the new keyword and feature added in SQL Server 2014. It did not exist in earlier versions. -- Create Statistics CREATE STATISTICS IncrStat ON [IncrStatTab] (SalesOrderID) WITH FULLSCAN, INCREMENTAL = ON GO Now we have successfully created statistics let us check the statistical histogram of the table. Now let us once again populate the table with more data. This time the data are entered into a different partition than earlier populated partition. 
-- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate]) SELECT [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate] FROM [Sales].[SalesOrderDetail] WHERE SalesOrderID > 54000 GO Let us check the status of the partitions once again with the following script. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO Statistics Update Now here is where the new feature comes into action. Previously, if we had to update the statistics, we would have to FULLSCAN the entire table irrespective of which partition got the data. However, in SQL Server 2014 we can specify exactly which partitions we want to update in terms of statistics. Here is the script for the same. -- Update Statistics Manually UPDATE STATISTICS IncrStatTab (IncrStat) WITH RESAMPLE ON PARTITIONS(3, 4) GO Now let us check the statistics once again. -- Show Statistics DBCC SHOW_STATISTICS('IncrStatTab', IncrStat) WITH HISTOGRAM GO Upon examining the statistics histogram, you will notice that the distribution has changed and there are far more rows in the histogram. Summary The new feature of Incremental Statistics is indeed a boon for the scenario where there are partitions and statistics need to be updated frequently on those partitions. In earlier versions, to update statistics one had to do a FULLSCAN on the entire table, which wasted too many resources. With the new feature in SQL Server 2014, now only those partitions which have significantly changed can be specified in the script to update statistics. Cleanup You can clean up the database by executing the following scripts. -- Clean up DROP TABLE [IncrStatTab] DROP PARTITION SCHEME [IncrStatSch] DROP PARTITION FUNCTION [IncrStatFn] GO Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • How to achieve best performance in DirectX 9.0 while rendering on multiple monitors

    - by Vibhore Tanwer
    I am new to DirectX and trying to learn best practice. Please suggest the best practices for rendering different things on multiple monitors at the same time. How can I boost the performance of the application? I have gone through this article: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263%28v=vs.85%29.aspx . I am making use of some pixel shaders to achieve some effects. At most 4 effects (4 shader effects) can be applied at the same time. What are the best practices to achieve the best performance with DirectX 9.0? I read somewhere that DirectX 11 provides support for parallel rendering, but I am not able to get any working sample for DirectX 11.0. Please help me with this; any help would be of great value. Thanks

    Read the article

  • Will JVisualVM degrade application performance?

    - by rocky
    I have doubts about the JVisualVM profiler tool related to performance. I have a requirement to implement a JVM monitoring tool for my enterprise Java application. I have gone through some profiling tools on the market, but all of them have some kind of agent file which we need to include in the server startup. I fear that these agents will degrade my application's performance even more. So I have decided on JVisualVM, because this profiler tool comes with the JDK itself. But before implementing JVisualVM, has anybody faced any issues with the JVisualVM profiler tool? Also, is it safe if I implement it in my application?

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
Currently you can download Visual Studio Async to get some idea about what is coming. Conclusion Parallel programming is very challenging, but good tools offered by Visual Studio and the .NET Framework make it way easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized feed importing and entry inserting, gaining a 2.3 times rise in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.

    Read the article

  • Programmer performance

    - by RSK
    I am a PHP programmer with 1 year of experience. As I am just starting my career, I am learning a lot of things now. I can say I am a little bit of a perfectionist. When I am assigned a problem I start off by Googling. Then, even when I find a solution, I keep trying for a better one until I find 2-3 options. Then I start learning it and choose the best performing solution. Even though I am learning a lot, this process gets me labeled as a low performer. My questions: As a novice, should I continue to use this learning process and not worry about my performance? Should I focus more on my performance and less on how the code performs?

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying the suggestions without proper testing can damage your system. Question: “Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.” When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community's knowledge. Today I need you to help me complete this list. I will start the list and you help me complete it. NTFS File System Performance Best Practices for SQL Server:
    - Disable indexing on disk volumes
    - Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    - Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    - Keep some space empty (let us say 15% for reference) on the drive if possible (only on the Filestream data storage volume)
    - Defragment the volume
    - Add your suggestions here…
    The one which often gets a pretty big debate is NTFS allocation unit size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. Here I would like to request that you share your experience related to NTFS allocation unit size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
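    A couple of related commands, offered only as a sketch (E: stands in for the data volume; check and test before changing anything). FSUTIL reports the current allocation unit size as “Bytes Per Cluster”, and a 64K allocation unit size can only be set when (re)formatting a volume:

    REM Show the current allocation unit size ("Bytes Per Cluster") of the E: volume
    FSUTIL FSINFO NTFSINFO E:

    REM Destructive - format a new/empty volume with a 64K allocation unit size (after proper testing)
    FORMAT E: /FS:NTFS /A:64K /Q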

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions have similar tips; generally covering: Use GL face culling (the OpenGL function, not the scene logic) Only send 1 matrix to the GPU (projectionModelView combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be). Use interleaved Vertices Minimize as many GL calls as possible, batch where appropriate And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am receiving around 40FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use? My CPU is idling around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment Cross posted at StackOverflow

    Read the article

  • Service and/or tool to monitor performance?

    - by chris
    I am seeing wildly different performance from a clients web site, and would like to set up some sort of monitoring. What I'm looking for is a service that will issue requests to a couple of URLs, and report on the time it took to process the page - TTFB and time to download the entire page - that means I need something that will process javascript & css. Are there services like this? I've seen a few that monitor uptime, but they don't seem to report on the overall page performance.

    Read the article

  • iOS game with many animated nodes, performance issues

    - by user31929
    I'm working on a large map upside-down game (not a tiled map); the map I use is a city. I have to insert many nodes to create the "life of the city", something like people that cross the streets, cars, etc... Some of these characters are involved in physics and game logic, but others are only graphic characters. From what I know, the only way I can achieve this result is to create each character node with or without a physics body and animate each character with a texture atlas. This way, I think that I'll have many performance problems (the characters will be something like 100/150), even if I apply all the performance tips that I know... My question is: with large numbers of characters, is there another programming pattern that I must follow? What is the approach of games like SimCity, The Simpsons: Tapped Out for iOS, etc., that have so many animations at the same time?

    Read the article

  • Buzzword for "performance-aware" software development

    - by errantlinguist
    There seems to be an overabundance of buzzwords for software development styles and methodologies: Agile development, extreme programming, test-driven development, etc... well, is there any sort of buzzword for "performance-aware" development? By "performance awareness", I don't necessarily mean low-latency or low-level programming, although the former would logically fall under the blanket term I'm looking for. I mean development in which resources are recognised to be finite and so there is a general emphasis on low computational complexity, good resource management, etc. If I was to be snarky, I would say "good programming", but that doesn't seem to get the message across so well...

    Read the article

  • Performance Tuning and Query Optimisation–SQLBits Training Day

    - by simonsabin
    I will be doing a training day at SQLBits in April on Performance Tuning and Query Optimisation. This is the outline for the day. It's going to be an intense day; I look forward to seeing you there. To register go to http://www.sqlbits.com/information/registration.aspx. Places are limited so make sure you register soon. Outline of the day: Most database performance issues are due to a combination of bad queries, bad database design or poor indexing. All of them are related to each other. In this...(read more)

    Read the article

  • load balancer question c# asp.net

    - by Migs
    The place where I work has 2 servers and a load balancer. The setup is horrible, since I have to manually make sure both servers have the same files. I know there are ways to automate this, but it has not been implemented - hopefully soon (I have no control over this). I wrote an application that collects a bunch of information from a user, then creates a folder named after the email of the user on one of the servers. The problem is that I can't control which server the folder gets created on. So let's say a user goes in, fills in his stuff, and his folder gets created on server 1; the user goes away for a while and comes back to the site, but this time the load balancer throws the user onto server 2. Now the user does something that needs to be saved into his folder, but since it wasn't created on this server, an error occurs. What can I do about this? Any suggestions? Thanks
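    One low-effort workaround, until proper file replication is in place, is to keep the per-user folders on storage both web servers can reach (a UNC share or NAS) and build the path from configuration instead of a local drive. A rough sketch - the share path and AppSettings key are made up for illustration:

    using System;
    using System.Configuration;
    using System.IO;

    public static class UserFolders
    {
        // e.g. <appSettings><add key="UserDataRoot" value="\\fileserver\UserData"/></appSettings>
        // (hypothetical share - point it at storage both load-balanced servers can reach)
        private static readonly string Root = ConfigurationManager.AppSettings["UserDataRoot"];

        public static string GetOrCreate(string email)
        {
            // Sanitize the email so it is a valid folder name on the share
            string safeName = email.Replace("@", "_at_").Replace(".", "_");
            string path = Path.Combine(Root, safeName);
            Directory.CreateDirectory(path);   // no-op if the folder already exists
            return path;
        }
    }

    The application pool identity on both servers needs write access to that share for this to work.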

    Read the article

  • Shrinking TCP Window Size to 0 on Cisco ASA

    - by Brent
    Having an issue with any large file transfer that crosses our Cisco ASA unit come to an eventual pause. Setup Test1: Server A, FileZilla Client <- 1GBPS - Cisco ASA <- 1 GBPS - Server B, FileZilla Server TCP Window size on large transfers will drop to 0 after around 30 seconds of a large file transfer. RDP session then becomes unresponsive for a minute or two and then is sporadic. After a minute or two, the FTP transfer resumes, but at 1-2 MB/s. When the FTP transfer is over, the responsiveness of the RDP session returns to normal. Test2: Server C in same network as Server B, FileZilla Client <- local network - Server B, FileZilla Server File will transfer at 30+ MB/s. Details ASA: 5520 running 8.3(1) with ASDM 6.3(1) Windows: Server 2003 R2 SP2 with latest patches Server: VMs running on HP C3000 blade chasis FileZilla: 3.3.5.1, latest stable build Transfer: 20 GB SQL .BAK file Protocol: Active FTP over tcp/20, tcp/21 Switches: Cisco Small Business 2048 Gigabit running latest 2.0.0.8 VMware: 4.1 HP: Flex-10 3.15, latest version Notes All servers are VMs. Thoughts Pretty sure the ASA is at fault since a transfer between VMs on the same network will not show a shrinking Window size. Our ASA is pretty vanilla. No major changes made to any of the settings. It has a bunch of NAT and ACLs. Wireshark Sample No. Time Source Destination Protocol Info 234905 73.916986 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131981791 Win=65535 Len=0 234906 73.917220 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234907 73.917224 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234908 73.917231 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131984551 Win=64155 Len=0 234909 73.917463 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234910 73.917467 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234911 73.917469 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234912 73.917476 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131988691 Win=60015 Len=0 234913 73.917706 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234914 73.917710 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234915 73.917715 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131991451 Win=57255 Len=0 234916 73.917949 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234917 73.917953 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234918 73.917958 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131994211 Win=54495 Len=0 234919 73.918193 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234920 73.918197 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234921 73.918202 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131996971 Win=51735 Len=0 234922 73.918435 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234923 73.918440 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234924 73.918445 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131999731 Win=48975 Len=0 234925 73.918679 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234926 73.918684 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234927 73.918689 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132002491 Win=46215 Len=0 234928 73.918922 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234929 73.918927 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234930 73.918932 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132005251 Win=43455 Len=0 234931 73.919165 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234932 73.919169 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234933 73.919174 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132008011 
Win=40695 Len=0 234934 73.919408 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234935 73.919413 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234936 73.919418 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132010771 Win=37935 Len=0 234937 73.919652 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234938 73.919656 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234939 73.919661 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132013531 Win=35175 Len=0 234940 73.919895 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234941 73.919899 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234942 73.919904 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132016291 Win=32415 Len=0 234943 73.920138 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234944 73.920142 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234945 73.920147 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132019051 Win=29655 Len=0 234946 73.920381 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234947 73.920386 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234948 73.920391 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132021811 Win=26895 Len=0 234949 73.920625 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234950 73.920629 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234951 73.920632 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234952 73.920638 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132025951 Win=22755 Len=0 234953 73.920868 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234954 73.920871 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234955 73.920876 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132028711 Win=19995 Len=0 234956 73.921111 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234957 73.921115 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234958 73.921120 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132031471 Win=17235 Len=0 234959 73.921356 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234960 73.921362 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234961 73.921370 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132034231 Win=14475 Len=0 234962 73.921598 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234963 73.921606 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234964 73.921613 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132036991 Win=11715 Len=0 234965 73.921841 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234966 73.921848 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234967 73.921855 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132039751 Win=8955 Len=0 234968 73.922085 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234969 73.922092 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234970 73.922099 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132042511 Win=6195 Len=0 234971 73.922328 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234972 73.922335 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234973 73.922342 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132045271 Win=3435 Len=0 234974 73.922571 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234975 73.922579 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234976 73.922586 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132048031 Win=675 Len=0 234981 75.866453 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 675 bytes 234985 76.020168 1.1.1.1 2.2.2.2 TCP [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0 234989 76.771633 2.2.2.2 1.1.1.1 TCP [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1 234990 76.771648 
1.1.1.1 2.2.2.2 TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0 234997 78.279701 2.2.2.2 1.1.1.1 TCP [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1 234998 78.279714 1.1.1.1 2.2.2.2 TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database, explaining why Shrinking Database is not good: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server, and SQL SERVER – What the Business Says Is Not What the Business Wants. I received many comments on why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:
    Create a test database
    Create two tables and populate with data
    Check the size of both the tables (size of database is very low)
    Check the fragmentation of one table (fragmentation will be very low)
    TRUNCATE another table
    Check the size of the table and the fragmentation of the one table (fragmentation will be very low)
    SHRINK the database
    Check the size of the table and the fragmentation of the one table (fragmentation will be very HIGH)
    REBUILD the index on one table
    Check the size of the table (size of database is very HIGH) and the fragmentation of the one table (fragmentation will be very low)

    Here is the script for the same.

    USE MASTER
    GO
    CREATE DATABASE ShrinkIsBed
    GO
    USE ShrinkIsBed
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Create FirstTable
    CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Create SecondTable
    CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO FirstTable (ID, FirstName, LastName, City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO SecondTable (ID, FirstName, LastName, City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
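
    The TRUNCATE step itself is not shown in this excerpt. A minimal sketch of that step, assuming FirstTable is the table being truncated (the fragmentation checks throughout run against SecondTable) and reusing the same size and fragmentation queries from the script above:

    -- Truncate the other table to free space inside the database
    TRUNCATE TABLE FirstTable
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
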
    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the Shrinking database operation, we were able to reduce the size of the database. If you notice the fragmentation, it is considerably high. The major problem with the Shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.

    -- Rebuild Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases way beyond the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which makes the database grow way beyond its original size (before shrinking). So, by shrinking, one usually does not gain what one was looking for. Rebuilding the index is not the best suggestion either, as it will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for the accurate explanation and the interesting conversation.

    Let us run the following script, where we shrink the database and REORGANIZE.

    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Reorganize Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I am in no way suggesting that REORGANIZE is the solution here; this is purely an observation using the demo. Read the blog post of Paul Randal. The following script will clean up the database.

    -- Clean up
    USE MASTER
    GO
    ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    DROP DATABASE ShrinkIsBed
    GO

    There are a few valid cases for shrinking a database as well, but that is not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild an index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it?

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • EF4 performance tips and tricks

    - by Will
    I've gotten to that point in one of my projects, and haven't found much information out there. So if you've got some pointers for improving performance in the new Entity Framework 4, please let us know!

    Read the article

  • VB.Net IO performance

    - by CFP
    Having read this page, I can't believe that VB.Net has such terrible performance when it comes to I/O. Is this still true today? How does the .NET Framework 2.0 perform in terms of I/O (that's the version I'm targeting)?

    Read the article

  • Set of Tools to optimize the performance in general of SQL Server

    - by Dave
    Hi, I know there are things out there to help optimize queries, etc., but is there anything else, something like a full package that can scan your database and highlight all the performance issues, naming conventions, tables not properly normalized, and so on? I know this is the job of a DBA, and if the DBA is good, he shouldn't need a tool like that, but sometimes you start a new job, you are put in charge of an existing database, and the DB is a mess, so you don't know where to start... Thanks to everyone. Dave
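
    Not a full scanning package, but SQL Server's built-in DMVs already surface a lot of this. As a starting point, here is a sketch (assuming SQL Server 2005 or later; the TOP 20 cutoff is arbitrary) that lists the indexes the optimizer has been asking for, ranked by a rough estimate of impact:

    -- Missing-index suggestions recorded by the optimizer since the last restart
    SELECT TOP 20
           mid.statement AS table_name,
           migs.avg_user_impact,                          -- estimated % improvement for the affected queries
           migs.user_seeks + migs.user_scans AS times_wanted,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns
    FROM sys.dm_db_missing_index_details mid
    JOIN sys.dm_db_missing_index_groups mig
      ON mid.index_handle = mig.index_handle
    JOIN sys.dm_db_missing_index_group_stats migs
      ON mig.index_group_handle = migs.group_handle
    ORDER BY migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

    Treat the output as hints rather than a to-do list; it says nothing about naming conventions or normalization, which still need a manual review or a third-party tool.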

    Read the article
