Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior has previously helped me personally to resolve an issue with Powershell installation on my computer, and he did an awesome job. He has now sent another wonderful article about performance counters for the readers of this blog. I really liked it, and I expect all of you Powershell geeks will like it as well.
    As a good DBA, you know that our social life is restricted to a few movies a year and, when possible, a pizza in a restaurant next to your company’s place, of course. So what we have to do is create methods that streamline our daily processes, so we can go home early and eventually have a nice time with our family (and not sleep on the couch). As a consultant or a full-time employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, the IDE is getting more complicated. To deal with this, I thought of a solution using Powershell. Yes, with a few lines of Powershell you can configure which counters to use, and with one more line you can start collecting data.
    Let’s see one scenario: you are a consultant with several clients, and you have just closed another project troubleshooting a SQL Server environment. You use Perfmon to collect data from the server, and you already have XML configuration files for the counters you will be using - one file for memory bottlenecks, one for CPU, and so on. With one Powershell command line for each XML file, you start collecting. The output of the collection is a TXT file that can be loaded into SQL Server. So with two command lines for each XML file, you have the whole data collection process.
    Creating an XML configuration file for memory counters:
        Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -NewFile
    Creating an XML configuration file for the Buffer Manager counters Page lookups/sec, Page reads/sec, Page writes/sec, and Page life expectancy:
        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile
    Then you start the collection:
        Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt
    For the Buffer Manager collection, you need one more counter, Buffer cache hit ratio. Just add the new counter to BufferManager.xml, omitting the -NewFile parameter:
        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml"
    And start the collection:
        Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt
    You do not know which counters are in the Buffer Manager category? Simple:
        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters
    The output file is ready to bulk insert into SQL Server. As you can see, Powershell makes this process incredibly easy and fast.
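    That last step (loading the TXT output into SQL Server) could look something like the following minimal T-SQL sketch. The staging table and the file's column layout are assumptions here, not part of Laerte's article; adjust both to match the actual collector output.
        -- Hypothetical staging table; the column layout is an assumption.
        CREATE TABLE dbo.PerfCounterLog
        (
            CollectionTime DATETIME,
            CounterPath    VARCHAR(500),
            CounterValue   FLOAT
        );
        GO

        -- Load the collector's output file from the example above.
        BULK INSERT dbo.PerfCounterLog
        FROM 'c:\temp\ConfigfileMemory.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);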
    Do you want to see more examples? Visit my blog at Shell Your Experience. You can find more about Laerte Junior here: www.laertejuniordba.spaces.live.com | www.simple-talk.com/author/laerte-junior | www.twitter.com/laertejuniordba | SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/ Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy.  I consider him one of the most influential contributors to computer science of all time.  Unfortunately, most of the time I hear his name, I cringe.  This is because it’s typically somebody quoting a small portion of one of his famous statements on optimization: “premature optimization is the root of all evil.”
    I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context.  Optimization is important.  It is a critical part of every software development effort, and should never be ignored.  A developer who ignores optimization is not a professional.  Every developer should understand optimization – know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one.
    I want to start by discussing my own, personal motivation here.  I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: “You’re an idiot.  Premature optimization is the root of all evil.  This doesn’t matter.”  It didn’t matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take “many hours to complete.”  Even so, multiple people instantly jumped to “it’s premature – it doesn’t matter.”
    This is a common thread I see.  For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth.  In fact, just about any question relating to a performance issue gets this quote thrown at it immediately – whether it deserves it or not.  That being said, I did receive some positive comments and emails as well.  Many people want to understand how to optimize their code, the approaches to take, the tools and techniques they can use, and any other advice they can discover.
    First, let’s get back to Knuth – I mentioned before that Knuth is being quoted out of context.  Let’s start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”
    Ironically, if you read Knuth’s original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization.  It was never a statement saying “don’t optimize”, but rather, “optimizing intelligently provides huge advantages.”  His approach had three benefits: “a) it doesn’t take long” … “b) the payoff is real”, c) you can “be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.”
    Looking at Knuth’s premise here, and reading that section of his paper, leads to a few observations:
    - Optimization is important: “he will be wise to look carefully at the critical code”
    - Normally, about 3% of your code – three lines out of every 100 you write – is “critical code” and will require some optimization: “we should not pass up our opportunities in that critical 3%”
    - Optimization, if done well, should not be time consuming: “it doesn’t take long”
    - Optimization, if done correctly, provides real benefits: “the payoff is real”
    None of this is new information.
People who care about optimization have been discussing this for years – for example, Rico Mariani’s Designing For Performance (a fantastic article) discusses many of the same issues very intelligently. That being said, many developers seem unable or unwilling to consider optimization.  Many others don’t seem to know where to start.  As such, I’m going to spend some time writing about optimization – what is it, how should we think about it, and what can we do to improve our own code.


  • SQLAuthority News – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning Training

    - by pinaldave
    Last 3 days to register for the courses. This is a one-time offer with a big discount. The deadline for course registration is 5th May, 2010. Two different courses are offered by Solid Quality Mentors:
    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave
    Date: May 12-14, 2010
    Price: Rs. 14,000/person for 3 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 11,000/person for 3 days
    2) SharePoint 2010 – Joy Rathnayake
    Date: May 10-11, 2010
    Price: Rs. 11,000/person for 2 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 8,000/person for 2 days
    Download the complete PDF brochure. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at pinal “at” SQLAuthority.com for any additional information and clarification.
    Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081.
    Additionally, there is a special program, SolidQ India Insider, which is available only to the first few registrants of the courses. Read more details about the course here. Read my TechEd India 2010 experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • SQLAuthority News – SQL Server Performance Series Hyderabad / Pune – Nov/Dec 2010

    - by pinaldave
    Just a quick note that the SQL Server Performance Tuning and Optimization Seminar series which I am offering at Hyderabad and Pune is almost sold out. Read the details of the earlier successful seminar conducted at Colombo, Sri Lanka over here.
    Hyderabad: Nov 27-28, 2010 (Last 3 Seats Left)
    Best Western Amrutha Castle, 5-9-16, Opp. Secretariat, Saifabad, Khairatabad, Hyderabad, Andhra Pradesh
    Pune: Dec 04-05, 2010 (Last 6 Seats Left)
    Location TBA, as we are looking for a larger-capacity room.
    I promise that this is going to be great fun, as these sessions are very different from any usual sessions you have ever attended. These sessions are absolutely interactive, and all the attendees will feel part of the event. As larger groups are not convenient, we have limited these seminars to a very small group of people. This way attendees can go to the instructor at any time and feel connected.
    This 2-day seminar will cover the best of the best concepts and practices from popular courses offered by Solid Quality Mentors. Instead of learning theory only, the seminar focuses on providing real-world experience by using demos and scenarios derived from customer engagements. The seminar is uniquely structured and well thought out. Sessions are discussion-based and are designed to be an interactive gateway between the instructor and the participants for an optimal learning experience. The seminar is intended to be immersion-based, where participants will have plenty of opportunities to get deeply involved in the concepts presented by the instructor.
    Agenda of the event
    To join the seminars, drop me an email. My email address is pinal “at” SQLAuthority.com and IndiaInfo “at” SolidQ.com. If you specify SQLAuthority.com in the title, you will get a special discount on the specified price. Yes, a sure 20%, I promise. Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Poor system performance on my machine running Ubuntu 12.04 (Beta 2, updated to the present moment)

    - by Mohammad Kamil Nadeem
    Why is it that my system dies when multitasking (it has been happening since 11.10) on Ubuntu 11.10 (Unity), Kubuntu 11.10 (KDE), and Deepin Linux, which is based on 11.10 (GNOME Shell)? The thing is that I thought with 12.04 I would get performance like I used to get on 11.04, on which everything ran fine without any lag or hiccups. The same lagging (the browser starts to stutter, and there is increased delay in launching the dash and applications) is happening on 12.04: http://i.imgur.com/YChKB.png and http://i.imgur.com/uyXLA.png. I believe that my system configuration is sufficient for running Ubuntu, as you can check here: http://paste.ubuntu.com/929734/. I had the Google voice and chat plugin installed on 12.04, so someone suggested that I remove it and see if the performance improves, but no such respite (I am having this on multiple operating systems based on Ubuntu 11.10, as I have mentioned above). On a friend's suggestion I ran a memory test through Partition Magic, and my system passed it fine. One more thing that I would like to know: why, when I have 2 GB RAM and 2.1 GB swap, does my system start to lag and run poorly when RAM consumption goes above 500 MB? If you require any more information, I will gladly provide it.


  • Polygons vs sprites rendering performance in Unity for Windows Phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, having 111 (no more, no less) sprites being updated each frame. I have a strong overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: I'm running with minimum quality settings, and I tried disabling and enabling V-Sync. Finally I managed to get 60 FPS, but I still have a large overhead. I believe I should have more than 60 FPS for such a small amount. Moreover, I still have to implement the game logic on top of this, so I'd like some headroom in my FPS to be able to work. I was wondering if it would be better in terms of performance to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Did someone try to check the performance differences between sprites and actual mesh renderers in Unity when it comes to phones? If so, what would be the best option in that case? FYI: I'm using the Windows Phone 8 emulator in Visual Studio. I have a compliant computer for that, so it should normally reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder what is most efficient on Windows Phone 8: sprites or mesh renderers?


  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected to encrypt my home folder during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve performance. I could not find much information about the overhead (CPU or drive) associated with home partition encryption. I ran the following, writing to my home partition as well as the mounted Windows partition:
        dd if=/dev/zero of=~/dummy bs=512 count=10240
        dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240
    The first returned 2.4 MB/s and the second returned 2.5 MB/s. Can I therefore deduce that there is very little overhead to home folder encryption? I'm not sure if the different filesystems will make any difference (/ and /home are ext3).
    Update 1: I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the dd as above are astounding:
        ~:    2.4 MB/s
        /tmp: 42.6 MB/s
    Comments please? The reason I am asking this is that disk access on the netbook is noticeably slow.
    Update 2: I timed each of the dd operations with time:
        ~:    real 0m2.217s  user 0m0.028s  sys 0m2.176s
        /tmp: real 0m0.152s  user 0m0.012s  sys 0m0.136s
    See also: discussion on UbuntuForums.org and bug report.
    Edit: Output of mount:
        /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER)
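    One caveat about the dd tests above: with bs=512 and only 5 MB written, they mostly measure per-call overhead and caching rather than sustained throughput. A fairer comparison might look like this minimal sketch (assuming GNU dd, which supports conv=fdatasync to flush to disk before reporting a rate):
        # Write ~100 MB and force it to disk, so the page cache does not skew either number.
        dd if=/dev/zero of=~/dummy bs=1M count=100 conv=fdatasync
        dd if=/dev/zero of=/tmp/dummy bs=1M count=100 conv=fdatasync
        rm ~/dummy /tmp/dummy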


  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s), and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice, and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate the new position (translation and rotation) of the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations, so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
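    For what it's worth, the rotation look-up table mentioned above might look something like this minimal C++ sketch (the 0.1-degree quantization and all names are illustrative, not from the original post):
        // Minimal sketch: precompute sin/cos so per-vertex rotation needs no trig calls.
        #include <array>
        #include <cmath>

        constexpr int kSteps = 3600;               // 0.1-degree resolution
        std::array<float, kSteps> gSin, gCos;

        void initTables() {
            for (int i = 0; i < kSteps; ++i) {
                float rad = i * (2.0f * 3.14159265f / kSteps);
                gSin[i] = std::sin(rad);
                gCos[i] = std::cos(rad);
            }
        }

        // Rotate a sprite-local vertex (x, y) by a quantized angle index.
        inline void rotate(float x, float y, int angleIdx, float& ox, float& oy) {
            float s = gSin[angleIdx], c = gCos[angleIdx];
            ox = x * c - y * s;
            oy = x * s + y * c;
        }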


  • How to check system performance?

    - by Woltan
    Hi all, I am a new Ubuntu user and really like the look and the features of the OS. However, I have a feeling that the performance could be better. By that I mean: Somehow the scrolling of sites within Firefox seems laggy. I do not know how I should measure it, but there is a difference. Not that it is unusable, but it is aggravating. Java programs run really slowly. As a comparison (I know it is not a fair one), I tried to run a game using Wine. The graphics settings under Windows were much higher (1600x1200) with a high level of detail, while on Ubuntu, 1024x768 with the lowest level of detail was the maximum. (My graphics card is a GeForce GTS 450 with 1 GB RAM.) Coming to my question: Is there a way to measure the performance of 3D acceleration, Java applets, Firefox scrolling, etc. with a tool and compare it with, let's say, a Windows OS or other users having almost the same hardware? Maybe it is a setup issue where some fundamental drivers are missing or something!? Any help, link, or suggestion is appreciated! Cheerio, Woltan


  • Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1)

    - by user702295
    Hello! There is a new document available: Demantra Performance Clustering Factor Out of Order Ratio TABLE_REORG CHECK_REORG (Doc ID 1594372.1).
    Table reorganization can be set up to run automatically in version 7.3.1.5. In version 12.2.2 we run the TABLE_REORG.CHECK_REORG function at every appserver restart. If the function recommends a reorg, then we strongly encourage you to reorg the database object. This is documented in the official docs.
    In versions 7.3.1.3 and 7.3.1.4, the TABLE_REORG module exists and can be used. It has two main functions that are documented in the Implementation Guide Supplement, Release 7.3, Part No. E26760-03, chapter 4. In short, if you are using version 7.3.1.3 or higher, you can check for the need to run a reorg with the following 2 steps:
    1. Run TABLE_REORG.CHECK_REORG('T');
    2. Check the table LOG_TABLE_REORG for recommendations
    If you are on a version before 7.3.1.3, you will need to follow the instructions below to determine if you need to do a manual reorg.
    How to determine if a table reorg is needed:
    1. It is strongly encouraged by DEV that you gather statistics on the required table. The preferred percentage for the gather is 100%.
    2. Run the following SQL to evaluate how a table reorg might affect Primary Key (PK) based access:
        SELECT ui.index_name,
               trunc((ut.num_rows/ui.clustering_factor)/(ut.num_rows/ut.blocks),2)
        FROM   user_indexes ui, user_tables ut, user_constraints uc
        WHERE  ui.table_name=ut.table_name
        AND    ut.table_name=uc.table_name
        AND    ui.index_name=uc.index_name
        AND    uc.constraint_type='P'
        AND    ut.table_name=upper('&enter_table_name');
    3. Based on the result:
        VALUE ABOVE 0.75            - DOES NOT REQUIRE REORG
        VALUE BETWEEN 0.5 AND 0.75  - REORG IS RECOMMENDED
        VALUE LOWER THAN 0.5        - IT IS HIGHLY RECOMMENDED TO REORG


  • Health Monitoring in ASP.NET 3.5

    - by kaleidoscope
    Health monitoring gives you the option of monitoring your application once you have developed and deployed it. The Health Monitoring system works by recording event information to a specified log source. Health monitoring can be enabled by adding a few configuration entries to the web.config file; a sample is sketched below. Health Monitoring is split into 5 sections: eventMappings, bufferModes, rules, providers, and profiles. Find the below links for details:
    http://www.dotnetbips.com/articles/63431cdd-07a2-434f-9681-7ef5c2cf0548.aspx
    http://msdn.microsoft.com/en-us/library/ms178703(VS.80).aspx
    Ranjit, M
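    As a rough sketch (not from the original post), a minimal web.config fragment that routes all error events to the Windows event log might look like the following; the eventName and provider names assume the default mappings registered in the machine-level web.config:
        <system.web>
          <healthMonitoring enabled="true">
            <rules>
              <!-- Route every error event to the event-log provider (defaults assumed). -->
              <add name="All Errors To Event Log"
                   eventName="All Errors"
                   provider="EventLogProvider"
                   profile="Default" />
            </rules>
          </healthMonitoring>
        </system.web>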


  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner, and I will be presenting two different sessions there. SQL Server Performance Tuning is a very challenging subject that requires expertise in both Database Administration and Database Development. I have always enjoyed talking about SQL Server performance tuning. Just like doctors, I like to call every attempt of mine to improve the performance of SQL Server queries and database servers a practice. I have been working with SQL Server for more than 8 years, and I believe that I have mastered many of the performance tuning concepts. However, performance tuning is not a simple subject. There are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see it as a learning opportunity.
    I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something I have spent years learning. This time, I am going to present about parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases when limiting CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working with various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.
    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
    Abstract: “More CPU More Performance” – A very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of CX_PACKET wait types, MAXDOP, the parallelism threshold, and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.
    Please submit your questions in the comments area, and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is your gift, which I commit to right now – the SQL Server Interview Questions and Answers Book. Reference: Pinal Dave (http://blog.sqlauthority.com)
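    For readers unfamiliar with the knobs mentioned in the session abstract above, here is a hedged T-SQL sketch of the most common ways to control parallelism. The values are illustrative only, not recommendations from the session, and Sales.SalesOrderDetail is just an AdventureWorks sample table:
        -- Server-wide default degree of parallelism (requires advanced options).
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;
        RECONFIGURE;

        -- Raise the cost threshold so only expensive queries go parallel.
        EXEC sp_configure 'cost threshold for parallelism', 25;
        RECONFIGURE;

        -- Per-query override via a hint.
        SELECT COUNT(*)
        FROM Sales.SalesOrderDetail
        OPTION (MAXDOP 1);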


  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10GB (10238k bytes) whereas the virtual memory returns significantly more - 8388607k bytes. That seems really wrong, but I'm at a bit of a loss on how to proceed. USE [master]; GO select cpu_count , hyperthread_ratio , physical_memory_in_bytes / 1048576 as 'mem_MB' , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB' , max_workers_count , os_error_mode , os_priority_class from sys.dm_os_sys_info


  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters:
    For bottlenecks:
        Processor utilization: System\Processor Queue Length
        Memory utilization:    Memory\Pages Input/Sec
        Disk utilization:      PhysicalDisk\Current Disk Queue Length\driveletter
        Network problems:      Network Interface\Output Queue Length\nic name
    For general activity:
        Processor utilization: Processor\% Processor Time_Total
        Memory utilization:    Process\Working Set_Total (or per specific process)
        Memory utilization:    Memory\Available MBytes
        Disk utilization:      PhysicalDisk\Bytes/sec_Total (or per process)
        Network utilization:   Network Interface\Bytes Total/Sec\nic name
    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters)
    This works really well. It allows me to look in one place and identify most common bottlenecks. So my question is, how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), which is a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters. Maybe that is because Linux and AIX do not allow monitoring these exact same measures. If so, which ones should I choose, and why? If nmon is not the tool to use for this, then what do you recommend?
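    As a rough, hedged mapping to stock Linux tools (iostat, sar, and pidstat come from the sysstat package), the counters above correspond approximately to the following; this is a sketch of near-equivalents, not a claim that the metrics match exactly:
        vmstat 5        # 'r' ~ processor queue length; 'si'/'so' ~ pages swapped in/out
        iostat -dx 5    # 'avgqu-sz' ~ current disk queue length, per device
        sar -n DEV 5    # rxkB/s and txkB/s ~ network bytes/sec per interface
        free -m         # ~ Memory\Available MBytes
        pidstat 5       # per-process CPU usage, similar to per-process counters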


  • Zenoss: Getting SNMP stats over SSH

    - by normalocity
    I have the SSH connection working, and I have it successfully modeling the device (Ubuntu Server, in this case). What I can't get to work is the SNMP portion. It sounds like I have to custom-add the snmpwalk command when doing monitoring over SSH: in other words, have Zenoss connect via SSH, then run an arbitrary command against the client (in this case, an snmpwalk), and then parse the results. What I need help doing is:
    1. Adding the snmpwalk command to the SSH monitoring
    2. Parsing the output and getting the data back into the charts
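    For reference, the arbitrary command in question would be a plain snmpwalk invocation along these lines; this is a sketch assuming the net-snmp tools are installed on the client, and the community string and OID are placeholders:
        # Walk the standard 'system' subtree on the local SNMP agent.
        snmpwalk -v 2c -c public localhost 1.3.6.1.2.1.1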


  • .NET Reflector 7.2 Early Access Build 2 Released: Performance Critical

    - by Bart Read
    I've just posted a write-up of some of the performance tuning I've done to improve .NET Reflector 7.2's start-up time here: http://www.reflector.net/2011/05/net-reflector-7-start-up-time-running-out-of-gas-or-pedal-to-the-metal/ You can get the new build from the .NET Reflector homepage at http://www.reflector.net/. Please remember to give us your feedback in the forum, at http://forums.reflector.net/, using the tags #7.2 and #eap. Technorati Tags: reflector,early access,7.2


  • High Performance Storage Systems for SQL Server

    Rod Colledge turns his pessimistic mindset to storage systems, and describes the best way to configure the storage systems of SQL Servers for both performance and reliability. Even Rod gets a glint in his eye when he then goes on to describe the dazzling speed of solid-state storage, though he is quick to identify the risks.


  • Compute Scalars, Expressions and Execution Plan Performance

    - by Paul White
    The humble Compute Scalar is one of the least well-understood of the execution plan operators, and usually the last place people look for query performance problems. It often appears in execution plans with a very low (or even zero) cost, which goes some way to explaining why people ignore it. Some readers will already know that a Compute Scalar can contain a call to a user-defined function, and that any T-SQL function with a BEGIN…END block in its definition can have truly disastrous consequences...(read more)
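    To make that concrete, here is a hedged sketch (the table and function names are invented for illustration, not taken from the article) of the kind of BEGIN…END scalar function whose per-row cost can hide behind a low-cost Compute Scalar:
        -- A scalar UDF with a BEGIN...END body.
        CREATE FUNCTION dbo.fn_TaxedPrice (@price MONEY)
        RETURNS MONEY
        AS
        BEGIN
            RETURN @price * 1.2;
        END;
        GO

        -- The plan shows a near-zero-cost Compute Scalar,
        -- yet the function executes once for every row returned.
        SELECT OrderID, dbo.fn_TaxedPrice(Price) AS TaxedPrice
        FROM dbo.Orders;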


  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?
    "Frequent" binding example:
        for each object
            for each material in the object
                render the part of the object using that material
    "Low count" binding example:
        for each material
            for each object using the material
                render the part of the object using that material
    I'm planning to use an octree later, and with the "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
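    A third option worth noting, offered as a minimal C++ sketch with all names illustrative: keep a flat draw list and sort it by material each frame, so binds still happen only once per material without restructuring the scene for the octree:
        // Sort draw items by material so state changes happen once per material.
        #include <algorithm>
        #include <vector>

        struct DrawItem { int materialId; int meshId; };

        void renderSorted(std::vector<DrawItem>& items) {
            std::sort(items.begin(), items.end(),
                      [](const DrawItem& a, const DrawItem& b) {
                          return a.materialId < b.materialId;
                      });
            int current = -1;
            for (const auto& it : items) {
                if (it.materialId != current) {
                    current = it.materialId;
                    // bindMaterial(current);  // texture/shader bind happens here only
                }
                // drawMesh(it.meshId);
            }
        }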


  • Let the RAM improve performance

    - by user1717079
    I have a low-profile machine but with a lot of fast RAM (4 GB), which is really an amount of memory that I will probably never use, not even half, since I just use this machine for coding and browsing the web. The HDD is really slow, so overall performance is bad when booting, caching, or starting new programs. I'm wondering if Ubuntu provides some setting or utility to address this and let my system rely more on RAM.
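    One commonly suggested setting of the kind being asked about, offered as a sketch rather than a guaranteed fix: lower vm.swappiness so the kernel prefers keeping applications in RAM (Linux already uses spare RAM for disk caching automatically):
        # Apply immediately (resets on reboot).
        sudo sysctl vm.swappiness=10

        # Persist across reboots by adding this line to /etc/sysctl.conf:
        #   vm.swappiness=10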


  • Brendan Gregg's "Systems Performance: Enterprise and the Cloud"

    - by user12608550
    Long ago, the prerequisite UNIX performance book was Adrian Cockcroft's 1994 classic, Sun Performance and Tuning: Sparc & Solaris, later updated in 1998 as Java and the Internet. As Solaris evolved to include the invaluable DTrace observability features, new essential performance references have been published, such as Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris (2006)  by McDougal, Mauro, and Gregg, and DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD (2011), also by Mauro and Gregg. Much has occurred in Solaris Land since those books appeared, notably Oracle's acquisition of Sun Microsystems in 2010 and the demise of the OpenSolaris community. But operating system technologies have continued to improve markedly in recent years, driven by stunning advances in multicore processor architecture, virtualization, and the massive scalability requirements of cloud computing. A new performance reference was needed, and I eagerly waited for something that thoroughly covered modern, distributed computing performance issues from the ground up. Well, there's a new classic now, authored yet again by Brendan Gregg, former Solaris kernel engineer at Sun and now Lead Performance Engineer at Joyent. Systems Performance: Enterprise and the Cloud is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour.  It provides thorough definitions of terms, explains performance diagnostic Best Practices and "Worst Practices" (called "anti-methods"), and covers key observability tools including DTrace, SystemTap, and all the traditional UNIX utilities like vmstat, ps, iostat, and many others. The book focuses on operating system performance principles and expands on these with respect to Linux (Ubuntu, Fedora, and CentOS are cited), and to Solaris and its derivatives [1]; it is not directed at any one OS so it is extremely useful as a broad performance reference. The author goes beyond the intricacies of performance analysis and shows how to interpret and visualize statistical information gathered from the observability tools.  It's often difficult to extract understanding from voluminous rows of text output, and techniques are provided to assist with summarizing, visualizing, and interpreting the performance data. Gregg includes myriad useful references from the system performance literature, including a "Who's Who" of contributors to this great body of diagnostic tools and methods. This outstanding book should be required reading for UNIX and Linux system administrators as well as anyone charged with diagnosing OS performance issues.  Moreover, the book can easily serve as a textbook for a graduate level course in operating systems [2]. [1] Solaris 11, of course, and Joyent's SmartOS (developed from OpenSolaris) [2] Gregg has taught system performance seminars for many years; I have also taught such courses...this book would be perfect for the OS component of an advanced CS curriculum.


  • Increase Performance of VS 2010 by using an SSD

    - by System.Data
    After searching on the internet for performance improvements when using Visual Studio 2010 with a solid state hard drive, I heard a lot of different opinions. A lot of people said that there isn't really a benefit when using a SSD, but in contrast others said the exact opposite. I am a bit confused with the contrasting opinions and I cannot really make a decision whether buying a SSD would make a difference. What are your experiences with this issue and which SSD did you use?


  • SCOM, 90 Days In, II. Noise.

    - by merrillaldrich
    Once you get past the basic architecture of a SCOM implementation, and build the servers, and so on, the first real problem is … well, noise. Suddenly (depending on how you deploy) the system will reach out, like marching army ants or some very clever cybernetic spider, and find, and then proceed to yell at you about, every single problem on every server you didn’t know you had. That, of course, is the point. Still, a tool like this is not useful if it doesn’t surface the real problems from the...(read more)

