Search Results

Search found 773 results on 31 pages for 'percentage'.

Page 7 of 31

  • Working the Chart Percentages

    - by Tim Dexter
    Charting in BIP is such fun, well, sometimes it is. Not so much today, at least not for Ron in San Diego. He needed a horizontal bar chart showing values plotted for various test areas, with value labels at the end of the bars. Simple enough, right? The wrinkle: they were percentage values, so he needed to see '56%', not '56'. It should still be simple, but the percentage formatting requires your values to be in decimal form, i.e. 0.56 rather than 56.0; 56.0 gets formatted as 5600%. OK, so either pull the values out as decimals, or use the div function to divide the values in the chart by 100, e.g.

        <xsl:value-of select="myval div 100" />

    Now I can use the following in the chart XML to format the percentages as I need them:

        <Graph ... >
          ...
          <MarkerText visible="true">
            <Y1ViewFormat>
              <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0" numberTypeUsed="true" leadingZeroUsed="true" decimalDigitUsed="true"/>
            </Y1ViewFormat>
          </MarkerText>
          ...
        </Graph>

    That gets me the values shown the way I want, but the automatic axis formatting runs from 0 to 1, so I now need to go in and add the formatting for the axis too:

        <Graph ...>
          ...
          <Y1Axis axisMinAutoScaled="false" axisMinValue="0.0" axisMaxAutoScaled="false" axisMaxValue="1.0" majorTickStepAutomatic="true">
            <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0" scaleFactor="SCALEFACTOR_NONE" numberTypeUsed="true" leadingZeroUsed="true" decimalDigitUsed="true" scaleFactorUsed="true"/>
          </Y1Axis>
        </Graph>

    Now I have a chart that shows the percentage values and formats the axis scale correctly too. You can of course tweak the attributes above to get more decimal places on your labels, etc. Happy Charting!

    Read the article

  • Trouble calculating correct decimal digits.

    - by Crath
    I am trying to create a program that does some simple calculations, but it is not doing the math correctly, or not placing the decimal correctly, or something; some other people I asked cannot figure it out either. Here is the code: http://pastie.org/887352

    When you enter the following data:

        Weekly Wage: 500
        Raise: 3
        Years Employed: 8

    it outputs the following:

        Year   Annual Salary
        1      $26000.00
        2      $26780.00
        3      $27560.00
        4      $28340.00
        5      $29120.00
        6      $29900.00
        7      $30680.00
        8      $31460.00

    And it should be outputting:

        Year   Annual Salary
        1      $26000.00
        2      $26780.00
        3      $27583.40
        4      $28410.90
        5      $29263.23
        6      $30141.13
        7      $31045.36
        8      $31976.72

    Here is the full description of the task:

    8.17 (Pay Raise Calculator Application) Develop an application that computes the amount of money an employee makes each year over a user-specified number of years. Assume the employee receives a pay raise once every year. The user specifies in the application the initial weekly salary, the amount of the raise (in percent per year) and the number of years for which the amounts earned will be calculated. The application should run as shown in Fig. 8.22 in your text (Fig. 8.22 is the expected output I posted above).

    Opening the template source code file. Open the PayRaise.cpp file in your text editor or IDE.

    Defining variables and prompting the user for input. To store the raise percentage and years of employment that the user inputs, define int variables rate and years in main after line 12. Also define double variable wage to store the user's annual wage. Then insert statements that prompt the user for the raise percentage, years of employment and starting weekly wage. Store the values typed at the keyboard in the rate, years and wage variables, respectively. To find the annual wage, multiply the new wage by 52 (the number of weeks per year) and store the result in wage.

    Displaying a table header and formatting output. Use the left and setw stream manipulators to display a table header as shown in Fig. 8.22 in your text. The first column should be six characters wide. Then use the fixed and setprecision stream manipulators to format floating-point values with two positions to the right of the decimal point.

    Writing a for statement header. Insert a for statement. Before the first semicolon in the for statement header, define and initialize the variable counter to 1. Before the second semicolon, enter a loop-continuation condition that will cause the for statement to loop until counter has reached the number of years entered. After the second semicolon, enter the increment of counter so that the for statement executes once for each year.

    Calculating the pay raise. In the body of the for statement, display the value of counter in the first column and the value of wage in the second column. Then calculate the new weekly wage for the following year, and store the resulting value in the wage variable. To do this, add 1 to the percentage increase (be sure to divide the percentage by 100.0) and multiply the result by the current value in wage.

    Save, compile and run the application. Input a raise percentage and a number of years for the wage increase. View the results to ensure that the correct years are displayed and that the future wage results are correct. Close the Command Prompt window.

    We cannot figure it out! Any help would be greatly appreciated, thanks!
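
    A tell-tale sign in the actual output is that the salary climbs by a constant 780 every year, which is 3% of the original 26000: the raise is being applied to the starting wage rather than compounded onto the updated wage (dividing an int rate by 100 instead of 100.0 is the other classic pitfall here). Without seeing the pastie, here is a minimal sketch of the loop the exercise describes; variable names follow the task description rather than the original PayRaise.cpp:

        #include <iostream>
        #include <iomanip>

        int main()
        {
            int rate;      // raise in percent per year
            int years;     // number of years to calculate
            double wage;   // weekly wage, then annual wage

            std::cout << "Weekly wage: ";
            std::cin >> wage;
            std::cout << "Raise (%): ";
            std::cin >> rate;
            std::cout << "Years employed: ";
            std::cin >> years;

            wage *= 52;    // convert the weekly wage to an annual wage

            std::cout << std::left << std::setw(6) << "Year" << "Annual Salary\n"
                      << std::fixed << std::setprecision(2);

            for (int counter = 1; counter <= years; ++counter)
            {
                std::cout << std::left << std::setw(6) << counter << '$' << wage << '\n';
                // Compound the raise onto the wage just displayed; 100.0 (not 100)
                // keeps the division in floating point.
                wage *= 1 + rate / 100.0;
            }
        }

    With the inputs 500 / 3 / 8 this prints 26000.00, 26780.00, 27583.40 and so on, matching the expected output.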

    Read the article

  • How to develop a "document plagiarism checker" website in ASP.NET?

    - by user1637402
    I know of a website, WriteCheck, whose functionality is this: you upload a file (PDF, DOC) and it checks the percentage of redundancy between the uploaded file and a large body of websites, books and research papers. After the user uploads a file, the result shows the redundancy percentage and highlights the copied paragraphs, i.e. the paragraphs that were repeated in the referenced websites. When the user hovers over these highlights, the source or reference appears, so the user can see where the text was copied from. That is a simple explanation of the website's functionality. Can anyone help me with the analysis for an ASP.NET website that has the same functionality, and with how to check an uploaded file against archived files?

    Read the article

  • Time series in R

    - by Christian Stade-Schuldt
    Hi, I am tracking my body weight in a spreadsheet, but I want to improve the experience by using R. I tried to find some information about time series analysis in R but was not successful. The data I have is in the following format:

        date - weight - body-fat-percentage - water-percentage
        e.g. 10/08/09 - 84.30 - 18.20 - 55.3

    What I want to do is plot weight and an exponential moving average of it against time. How can I achieve that?
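
    The exponential moving average itself is just a one-line recurrence, ema[i] = alpha*x[i] + (1-alpha)*ema[i-1]. The sketch below shows the arithmetic in C++ purely as an illustration (the question is about R, and alpha = 0.3 is an arbitrary choice), so it is clear what would be plotted against the dates alongside the raw weights:

        #include <cstdio>
        #include <vector>

        // Exponentially weighted moving average: ema[i] = alpha*x[i] + (1-alpha)*ema[i-1].
        std::vector<double> ema(const std::vector<double>& x, double alpha)
        {
            std::vector<double> out;
            out.reserve(x.size());
            for (double v : x)
                out.push_back(out.empty() ? v : alpha * v + (1.0 - alpha) * out.back());
            return out;
        }

        int main()
        {
            std::vector<double> weight = {84.3, 84.1, 84.4, 83.9, 83.7};
            for (double v : ema(weight, 0.3)) std::printf("%.2f ", v);
            std::printf("\n");
        }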

    Read the article

  • Monitoring DOM Changes in JQuery

    - by user363866
    Is there a way to detect when the disabled attribute of an input changes in JQuery. I want to toggle the style based on the value. I can copy/paste the same enable/disable code for each change event (as I did below) but I was looking for a more generic approach. Can I create a custom event that will monitor the disabled attribute of specified inputs? Example: <style type="text/css">.disabled{ background-color:#dcdcdc; }</style> <fieldset> <legend>Option 1</legend> <input type="radio" name="Group1" id="Radio1" value="Yes" />Yes <input type="radio" name="Group1" id="Radio2" value="No" checked="checked" />No <div id="Group1Fields" style="margin-left: 20px;"> Percentage 1: <input type="text" id="Percentage1" disabled="disabled" /><br /> Percentage 2: <input type="text" id="Percentage2" disabled="disabled" /><br /> </div> </fieldset> <fieldset> <legend>Option 2</legend> <input type="radio" name="Group2" id="Radio3" value="Yes" checked="checked" />Yes <input type="radio" name="Group2" id="Radio4" value="No" />No <div id="Group2Fields" style="margin-left: 20px;"> Percentage 1: <input type="text" id="Text1" /><br /> Percentage 2: <input type="text" id="Text2" /><br /> </div> </fieldset> <script type="text/javascript"> $(document).ready(function () { //apply disabled style to all disabled controls $("input:disabled").addClass("disabled"); $("input[name='Group1']").change(function () { var disabled = ($(this).val() == "No") ? "disabled" : ""; $("#Group1Fields input").attr("disabled", disabled); //apply disabled style to all disabled controls $("input:disabled").addClass("disabled"); //remove disabled style to all enabled controls $("input:not(:disabled)").removeClass("disabled"); }); $("input[name='Group2']").change(function () { var disabled = ($(this).val() == "No") ? "disabled" : ""; $("#Group2Fields input").attr("disabled", disabled); //apply disabled style to all disabled controls $("input:disabled").addClass("disabled"); //remove disabled style to all enabled controls $("input:not(:disabled)").removeClass("disabled"); }); }); </script>

    Read the article

  • Qt widget for battery status on Windows XP

    - by Surjya Narayana Padhi
    Hi Geeks, I need to create a widget that shows the battery status (as a percentage) inside my Qt application. Can anybody suggest how to use the Windows XP API to find the battery status? Once the API returns the percentage, I will display it on my widget.
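
    On Windows XP the usual call for this is GetSystemPowerStatus from the Win32 API, which fills a SYSTEM_POWER_STATUS structure whose BatteryLifePercent field is 0-100 (or 255 when unknown). A rough sketch that polls it into a QLabel; the Qt side is illustrative (Qt 5 connect syntax shown) rather than taken from the question:

        #include <windows.h>   // GetSystemPowerStatus, SYSTEM_POWER_STATUS
        #include <QApplication>
        #include <QLabel>
        #include <QTimer>

        // Returns the battery charge as 0-100, or -1 if Windows reports it as unknown (255).
        static int batteryPercentage()
        {
            SYSTEM_POWER_STATUS status;
            if (!GetSystemPowerStatus(&status) || status.BatteryLifePercent == 255)
                return -1;
            return status.BatteryLifePercent;
        }

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);

            QLabel label;
            QTimer timer;
            auto refresh = [&label] {
                int pct = batteryPercentage();
                label.setText(pct < 0 ? QString("Battery: n/a")
                                      : QString("Battery: %1%").arg(pct));
            };
            QObject::connect(&timer, &QTimer::timeout, refresh);  // poll every 10 seconds
            timer.start(10000);
            refresh();
            label.show();

            return app.exec();
        }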

    Read the article

  • Rails - before_filter that includes updated object

    - by Sam
    I have a before_filter that calculates a percentage that needs to include the object being updated. Is there a one-liner in Rails that takes care of this? For example (this is totally made up): Object.find(:all, :include => :updated_object) Currently I'm sending the object that is getting updated to the definition that calculates the percentage, and that works, but it's making things messy.

    Read the article

  • computed column with aggregate function

    - by Kindson
    I have these columns in my table: hours, weight, status, total_hours, total_weight and percentage, where:

        total_weight = weight where status = 'X'
        total_hours = hours where status = 'X'
        percentage = total_hours / sum(weight)

    sum(weight) is an aggregate function. I would like to specify formulas to generate the three computed columns. What do I do?

    Read the article

  • My update query executes but doesn't update

    - by Kindson
    I have this update query:

        UPDATE production_shr_01
        SET total_hours = hours,
            total_weight = weight,
            percentage = total_hours / 7893.3
        WHERE (status = 'X')

    The query executes without errors, but it does not update the percentage field as expected. What might be the problem?

    Read the article

  • Rails - before_save that includes updated object

    - by Sam
    I have a before_save that calculates a percentage that needs to include the object that is being updated. Is there a one-liner in Rails that takes care of this? for example and this is totally made up: Object.find(:all, :include => :updated_object) Currently I'm sending the object that is getting updated to the definition that calculates the percentage and that works but it's making things messy.

    Read the article

  • How to understand memory usage and load average on a Linux server

    - by Tim
    Hi, I am using a linux server which has 128GB of memory and 24 cores. I use top to see how much it is used. Its output is pasted at the end of the post. Here are two questions: (1) I see that each of the running processes occupies a very small percentage of memory (%MEM no more than 0.2%, and most just 0.0%), but how the total memory is almost used as in the fourth line of output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? The sum of used percentage of memory over all processes seems unlikely to achieve almost 100%, doesn't it? (2) how to understand the load average on the first line ("load average: 14.04, 14.02, 14.00")? Thanks and regards! Edit: Thanks! I also really like to hear some rough numbers based on used percentage of memory to determine if a server is heavily loaded, since I once became the one who cramed the server without understanding the current load. Is swap regarded as almost the same as memory? For example, when memory and swap are almost of same size, if the memory is almost running out but the swap is still largely free, may I just view it as if the used percentage of memory + swap is still not high and run other new processes? How would you consider together CPU or memory (or memory + swap) usage? Do you become worried if either of them reaches too high or both? Output of top: $ top top - 12:45:33 up 19 days, 23:11, 18 users, load average: 14.04, 14.02, 14.00 Tasks: 484 total, 12 running, 472 sleeping, 0 stopped, 0 zombie Cpu(s): 36.7%us, 19.7%sy, 0.0%ni, 43.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers Swap: 63111312k total, 500556k used, 62610756k free, 124437752k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6529 sanchez 18 -2 1075m 219m 13m S 100 0.2 13760:23 MATLAB 13210 timothy 18 -2 48336 37m 1216 R 100 0.0 3:56.75 absurdity 13888 timothy 18 -2 48336 37m 1204 R 100 0.0 2:04.89 absurdity 14542 timothy 18 -2 48336 37m 1196 R 100 0.0 1:08.34 absurdity 14544 timothy 18 -2 2888 2076 400 R 100 0.0 1:06.14 gatherData 6183 sanchez 18 -2 1133m 195m 13m S 100 0.2 13676:04 MATLAB 6795 sanchez 18 -2 1079m 210m 13m S 100 0.2 13734:26 MATLAB 10178 timothy 18 -2 48336 37m 1204 R 100 0.0 11:33.93 absurdity 12438 timothy 18 -2 48336 37m 1216 R 100 0.0 5:38.17 absurdity 13661 timothy 18 -2 48336 37m 1216 R 100 0.0 2:44.13 absurdity 14098 timothy 18 -2 48336 37m 1204 R 100 0.0 1:58.31 absurdity 14335 timothy 18 -2 48336 37m 1196 R 100 0.0 1:08.93 absurdity 14765 timothy 18 -2 48336 37m 1196 R 99 0.0 0:32.57 absurdity 13445 timothy 18 -2 48336 37m 1216 R 99 0.0 3:01.37 absurdity 28990 root 20 0 0 0 0 S 2 0.0 65:50.21 pdflush 12141 tim 18 -2 19380 1660 1024 R 1 0.0 0:04.04 top 1240 root 15 -5 0 0 0 S 0 0.0 16:07.11 kjournald 9019 root 20 0 296m 4460 2616 S 0 0.0 82:19.51 kdm_greet 1 root 20 0 4028 728 592 S 0 0.0 0:03.11 init 2 root 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT -5 0 0 0 S 0 0.0 0:01.01 migration/0 4 root 15 -5 0 0 0 S 0 0.0 0:08.13 ksoftirqd/0 5 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT -5 0 0 0 S 0 0.0 17:27.31 migration/1 7 root 15 -5 0 0 0 S 0 0.0 0:01.21 ksoftirqd/1 8 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root RT -5 0 0 0 S 0 0.0 10:02.56 migration/2 10 root 15 -5 0 0 0 S 0 0.0 0:00.34 ksoftirqd/2 11 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/2 12 root RT -5 0 0 0 S 0 0.0 4:29.53 migration/3 13 root 15 -5 0 0 0 S 0 0.0 0:00.34 ksoftirqd/3
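
    On the first question, the catch with this style of top output is that "used" includes the kernel's buffers and page cache (the 919300k buffers figure on the Mem line and the 124437752k cached figure on the Swap line), both of which are reclaimed on demand when applications need memory. A rough back-of-the-envelope sketch with the numbers above (the exact accounting varies by kernel version):

        #include <cstdio>

        int main()
        {
            // Figures (in KB) taken from the Mem/Swap lines of the top output above.
            long long total   = 130766620LL;
            long long used    = 130161072LL;
            long long buffers = 919300LL;
            long long cached  = 124437752LL;

            long long appUsed = used - buffers - cached;   // memory actually held by processes
            std::printf("apps use roughly %.1f GB of %.1f GB (%.1f%%)\n",
                        appUsed / 1048576.0, total / 1048576.0,
                        100.0 * appUsed / total);           // ~4.6 GB of ~124.7 GB, ~3.7%
        }

    For the second question: the load average is the average number of runnable (or uninterruptibly waiting) tasks, so roughly 14 cores' worth of work on a 24-core box.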

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy: Cluster: "7:Cluster,1,4:Name,s7:granger," SqlServer: "9:SqlServer,1,4:Name,s0:," Database: "8:Database,1,4:Name,s14:SqlMonitorData," Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following: "7:Cluster," – This identifies the level in the hierarchy. "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name). "4:Name,s14:SqlMonitorData," – This represents the Name property, and its corresponding value, SqlMonitorData. It’s split up like this: "4:Name," – Indicates the name of the key property. "s" – Indicates the type of the key property, in this case, it’s a string. "14:SqlMonitorData," – Indicates the value of the property. At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well an encoding scheme is used, which consists of the following: "7" – This is the length of the string "Cluster" ":" – This is a delimiter between the length of the string and the actual string’s contents. "Cluster" – This is the string itself. 7 characters. "," – This is a final terminating character that indicates the end of the encoded string. You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows: "I" – Denotes a bigint value. For example, "I65432,". "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,". "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repostory in a future post. I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in: SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;  AlertIdAlertTypeNameTargetObjectReadSubType 16576940Custom metric7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. 
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
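
    As an aside, the length-prefixed "length:value," encoding described above is easy to walk in code. This is a rough, illustrative sketch (not SQL Monitor's actual implementation) that pulls the level name and its key properties out of one segment of a TargetObject value:

        #include <iostream>
        #include <string>

        // Reads one "<length>:<chars>," token starting at pos and advances pos past it.
        std::string readToken(const std::string& s, size_t& pos)
        {
            size_t colon = s.find(':', pos);
            size_t len = std::stoul(s.substr(pos, colon - pos));
            std::string value = s.substr(colon + 1, len);
            pos = colon + 1 + len + 1;   // skip the value and its trailing comma
            return value;
        }

        int main()
        {
            std::string target = "7:Cluster,1,4:Name,s7:granger,";
            size_t pos = 0;
            std::cout << "Level: " << readToken(target, pos) << '\n';   // "Cluster"

            // Next comes the key-property count, e.g. "1,".
            size_t comma = target.find(',', pos);
            int keyCount = std::stoi(target.substr(pos, comma - pos));
            pos = comma + 1;

            for (int i = 0; i < keyCount; ++i)
            {
                std::string key = readToken(target, pos);    // "Name"
                char type = target[pos++];                   // 's' for string, 'I', 'g', 'd', ...
                std::string value = readToken(target, pos);  // "granger"
                std::cout << key << " (" << type << ") = " << value << '\n';
            }
        }

    Running it against the Cluster segment above prints Cluster and Name (s) = granger; the same loop repeats for the SqlServer, Database and CustomMetric segments of a full TargetObject value.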

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
    Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs you can monitor and manage incidents, or you can fine tune alert monitoring rules to specific file systems. This article will show you how to use Ops Center 12c to Track file system utilization Adjust file system monitoring rules Disable file system rules Create custom monitoring rules If you're interested in this topic, please join us for a WebEx presentation! Date: Thursday, November 8, 2012 Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00) Meeting Number: 598 796 842 Meeting Password: oracle123 To join the online meeting ------------------------------------------------------- 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123 4. Click "Join". To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D   Monitoring File Systems for OS Assets The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab under the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired: In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart. File Systems and Incident  Reporting Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a bunch of monitoring rules, where each rule describes An attribute to monitor The conditions which represent an issue The level or levels of severity for the issue When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems: File System Reachability: Triggers an incident if a file system is not reachable NAS Library Status: Triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system File System Used Space Percentage: Triggers an incident when file system utilization grows beyond defined thresholds You can view these rules in the Monitoring tab for an OS: Of course, the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in this screen shot: Depending on the level of control you'd like, there are a number of ways to fine tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset. 
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets. Solution #1: Modify the Reporting Thresholds In some cases, you may want to modify the basic conditions for incident reporting in your file system. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File Systems Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define actual threshold values. By default, Ops Center opens a Warning level incident is utilization rises above 80%, and a Critical level incident for utilization above 95% Solution #2: Disable Incident Reporting for File System If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption. Solution #3: Create New Monitoring Rules for Specific File Systems If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" (the icon with the green plus sign) opens a wizard which allows you to define a new rule.  This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system - the /nfs-guest directory. To do this, we specify the following attribute FileSystemUsages.name=/nfs-guest.usedSpacePercentage The value of name in the attribute allows us to define a specific NFS shared directory or file system... in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace freeSpacePercentage totalSpace usedSpace usedSpacePercentage The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule: If historical data is available, Ops Center will display it in the screen. Clicking the Apply button will create the new monitoring rule and active it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory. Stay Connected: Twitter |  Facebook |  YouTube |  Linkedin |  Newsletter

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes as the objects in question are tanks) within the explosion radius. So this boils down to circle to rectangle collision and how far away the circle's radius is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the distance calculated. Note : All tank objects have an anchor point of (0,0) so position is according to bottom left corner of bounding box. Explosion point is the center point of the circular explosion. TankObject * tank = (TankObject*) gameSprite; float distanceFromExplosionCenter; // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor if (explosionPoint.x < tank.position.x) { // Explosion to WEST of tank if (explosionPoint.y <= tank.position.y) { //Explosion SOUTHWEST distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position); } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) { // Explosion NORTHWEST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height)); } else { // Exp center's y is between bottom and top corner of rect distanceFromExplosionCenter = tank.position.x - explosionPoint.x; } // end if } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) { // Explosion to EAST of tank if (explosionPoint.y <= tank.position.y) { //Explosion SOUTHEAST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y)); } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) { // Explosion NORTHEAST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height)); } else { // Exp center's y is between bottom and top corner of rect distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width); } // end if } else { // Tank is either north or south and is inbetween left and right corner of rect if (explosionPoint.y < tank.position.y) { // Explosion is South distanceFromExplosionCenter = tank.position.y - explosionPoint.y; } else { // Explosion is North distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height); } // end if } // end outer if if (distanceFromExplosionCenter < explosionRadius) { /* Collision :: Smaller distance larger the damage */ int damageToApply; if (self.directHit) { damageToApply = self.explosionMaxDamage + self.directHitBonusDamage; [tank takeDamageAndAdjustHealthBar:damageToApply]; CCLOG(@"Explsoion-> DIRECT HIT with total damage %d", damageToApply); } else { // TODO adjust this... turning out negative for some reason... damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage); [tank takeDamageAndAdjustHealthBar:damageToApply]; CCLOG(@"Explosion-> Non direct hit collision with tank"); CCLOG(@"Damage to apply is %d", damageToApply); } // end if } else { CCLOG(@"Explosion-> Explosion distance is larger than explosion radius"); } // end if } // end if Questions: 1) Can this circle to rect collision algorithm be done better? Do I have too many checks? 2) How to calculate the percentage based damage? 
My current method generates negative numbers occasionally and I don't understand why (Maybe I need more sleep!). But, in my if statement, I ask if distance < explosion radius. When control goes through, distance/radius must be < 1 right? So 1 - that intermediate calculation should not be negative. Appreciate any help/advice!
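
    On question 2, the negative numbers come from operator precedence: in (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage) the multiplication by explosionMaxDamage binds to the ratio only, so the whole expression goes negative as soon as ratio * maxDamage exceeds 1. A short sketch of the intended linear falloff, with illustrative names:

        #include <cassert>
        #include <cstdio>

        // Linear falloff: full damage at the centre, zero damage at the edge of the radius.
        int explosionDamage(float distance, float radius, int maxDamage)
        {
            assert(distance < radius);   // the caller already checked the collision
            return static_cast<int>((1.0f - distance / radius) * maxDamage);
        }

        int main()
        {
            // e.g. half-way out of a blast worth 100 damage -> 50
            std::printf("%d\n", explosionDamage(50.0f, 100.0f, 100));
        }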

    Read the article

  • Calculate total batch upload transfer percent with limited information

    - by GONeale
    Hi there, I have a system which uploads to a server file by file and displays a progress bar on file upload progress, then underneath a second progress bar which I want to indicate percentage of batch complete across all files queued to upload. Information and algorithms I can work out are: Bytes Sent / Total Bytes To Send = First progress bar (eg. 512KB of 1024KB (50%)) That works fine. However supposing I have two other files left to upload, but both file sizes are unknown (as this is only known once the file is about to commence upload, at which point it is compressed and file size is determined) how would I go about making my third progress bar? I didn't think this would be possible as I would need "Total Bytes Sent" / "Total Bytes To Send", to replicate the logic of my first progress bar on a larger scale, however I did get a version working: "Current file number we are on" / "total number of files to send" returning the percentage through the batch, however obviously will not incrementally update and it's pretty crude. So on further thinking I thought if I could incorporate the current file % with this algorithm I could perhaps get the correct progress percentage of my batch's current point. I tried this algorithm, but alas to no such avail (sorry to any math heads, it's probably quite apparent why it won't work) ("Current file number we are on" / "total number of files to send") * ("Bytes Sent" / "Total Bytes To Send") For example I thought I was on the right track when testing with this example: 2/3 (2nd of 3rd file) = 66% (this is right so far) but then when I added * 0.20 (for indicating only 20% of 2nd file has uploaded) we went back to 13%. What I need is only a little over 33%! I did try the inverse at 0.80 and a (2/3 * (2/3 * 0.2)) Can this be done without knowing entire bytes in batch to upload? Please help! Thank you!
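
    One way to read the problem: completed files contribute whole units of progress, and the file currently uploading contributes only its own byte fraction, so no total byte count for the batch is needed. A small sketch of that arithmetic, with illustrative names:

        #include <cstdio>

        // Overall batch progress: completed files count as whole units, the current
        // file contributes only the fraction of its own bytes sent so far.
        double batchPercent(int filesCompleted, int totalFiles,
                            long long bytesSent, long long bytesTotal)
        {
            double currentFraction = bytesTotal > 0 ? double(bytesSent) / bytesTotal : 0.0;
            return 100.0 * (filesCompleted + currentFraction) / totalFiles;
        }

        int main()
        {
            // 2nd of 3 files, 20% of the way through it: 1 finished file + 0.2 in flight.
            std::printf("%.1f%%\n", batchPercent(1, 3, 20, 100));   // prints 40.0%
        }

    For the example in the question (2nd of 3 files, 20% of it sent) this gives (1 + 0.2) / 3 = 40%: the finished first file accounts for 33.3% and the in-flight fraction adds another 6.7%.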

    Read the article

  • javascript setTimeout function out of scope.

    - by Keyo
    I am trying to call showUpload(); from within two setTimeouts. Neither works. It seems to be out of scope and I'm not sure why. I tried this.showUpload() which didn't work either. $(document).ready(function(){ var progress_key = $('#progress_key').val(); // this sets up the progress bar $('#uploadform').submit(function() { setTimeout("showUpload()",1500); $("#progressbar").progressbar({ value:0}).fadeIn(); }); // uses ajax to poll the uploadprogress.php page with the id // deserializes the json string, and computes the percentage (integer) // update the jQuery progress bar // sets a timer for the next poll in 750ms function showUpload() { $.get("/myid/videos/uploadprogress/" + progress_key, function(data) { if (!data) return; var response; eval ("response = " + data); if (!response) return; var percentage = Math.floor(100 * parseInt(response['bytes_uploaded']) / parseInt(response['bytes_total'])); $("#progressbar").progressbar({ value:percentage}) }); setTimeout("showUpload()", 750); } }); Thank you for your time.

    Read the article

  • NSTimer to smooth out playback position

    - by Michael
    I have an audio player and I want to show the current time of the the playback. I'm using a custom play class. The app downloads the mp3 to a file then plays from the file when 5% has been downloaded. I have a progress view update as the file plays and update a label on each call to the progress view. However, this is jerky... sometimes even going backward a digit or two. I was considering using an NSTimer to smooth things out. I would be fired every second to a method and pass the percentage played figure to the method then update the label. First, does this seem reasonable? Second, how do I pass the percentage (a float) over to the target of the timer. Right now I am putting the percent played into a dictionary but this seems less than optimal. This is what is called update the progress bar: -(void)updateAudioProgress:(Percentage)percent { audio = percent; if (!seekChanging) slider.value = percent; NSMutableDictionary *myDictionary = [[NSMutableDictionary alloc] init]; [myDictionary setValue:[NSNumber numberWithFloat:percent] forKey:@"myPercent"]; [NSTimer scheduledTimerWithTimeInterval:5 target:self selector:@selector(myTimerMethod:) userInfo:myDictionary repeats:YES]; [myDictionary release]; } This is called first after 5 seconds but then updates each time the method is called. As always, comments and pointers appreciated.

    Read the article

  • simplify a preload image function in jQuery

    - by robertdd
    after i spent 2 day searching a preload function in jQueryi with no succes, i manage to do this: after i upload the image on server, in a list i get this: <li id="upimagesDYGONI"> <div class="percentage">100%</div> <div class="uploadifyProgress"> <div align=" center" class="uploadifyProgressBar" id="upimagesDYGONIProgressBar" style="width: 100%;"> <!--Progress Bar--> </div> </div> </li> with this function, i preload the image: $.getlastimage = function(id) { $.getJSON('operations.php', {'operation':'getli', 'id':id,}, function(lastimg){ $("#upimages" + id + " .percentage").text('processing'); $("#upimages" + id).append('<a href=""><img id="' + id + '" src="" alt="" /></a>') .parent().attr({"href":"uploads/"+ lastimg +'?'+ (new Date()).getTime()}); $("#"+id).hide().attr({"src":"uploads/"+ lastimg +'?'+ (new Date()).getTime(), "alt":lastimg}) .load(function() { $(this).show(); $("#upimages" + id + " .percentage").remove(); $("#upimages" + id + " .uploadifyProgress").remove(); }); }); } })(jQuery) and i get this: <li id="upimagesLRBHYN" style="" class=""> <a href="uploads/0002.jpg?1271901177027"> <img alt="0002.jpg" src="uploads/0002.jpg?1271901177028" id="LRBHYN" style="display: block;"> </a> </li> how i can simplify the function? i want to use the full power of jQuery!! any idea?

    Read the article

  • Have you considered doing revenue sharing to fund development of a mobile app? How would you do it?

    - by Brennan
    I am looking to build multiple mobile apps which leverage existing content and resources by enabling these mobile apps with web services. I will duplication much of the same features which are also in place and add more features that are possible on a mobile device like address book, maps and calendar integration to make the service much more useful. To fund these projects I see that I have 2 options. First I could simply quote them for the project based on my hourly rate and the estimate in hours that I will take the to complete the job. That may be a high number. The second option would be to do shared revenue with ads placed in the app. I could then take a percentage of any revenue that is generated from the app. There is also a hybrid where I might charge for a percentage of the estimated quote and then take a percentage of the revenue sharing. So my question is how much should I propose for the revenue sharing? Should it be 30%? Or maybe I should make it 70% up to a point that a certain dollar amount is reached? And should the revenue sharing agreement be for 12 months, 24 months or more? Should I include in the proposal an agreement that they will help promote this app with their content and resources? Ultimately this system will benefit both sides because it extends their reach into the mobile space instead of where they are currently with just print and web. I have tried to find some examples with a few Google searches but I keep hitting content about the Google and Apple revenue sharing models. I would like to get some solid examples that are working to compare against so that my proposal do build these apps is not completely off base.

    Read the article

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimizer works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load-balancing routers; if two routes are the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance.

    At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality to each of the other entries, and moves it by a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will be r^N bigger (r is the arbitrary fixed percentage). There is also the problem that by multiplying by the percentage, you can end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code.

    The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
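
    One standard approach that avoids both the N^2 comparisons and the r^N blow-up is to sort the costs once and then nudge each value just past its predecessor, so every entry moves by at most a few tolerances and never lands on top of another value. A rough sketch (whether the tolerance should be relative, as here, or absolute is an assumption about your data):

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        // Ensure no two costs are within a relative tolerance of each other,
        // touching each value as little as possible: sort once, then walk in order.
        void separateCosts(std::vector<double>& costs, double tol)
        {
            std::vector<size_t> order(costs.size());
            for (size_t i = 0; i < order.size(); ++i) order[i] = i;
            std::sort(order.begin(), order.end(),
                      [&](size_t a, size_t b) { return costs[a] < costs[b]; });

            for (size_t k = 1; k < order.size(); ++k)
            {
                double& prev = costs[order[k - 1]];
                double& cur  = costs[order[k]];
                if (cur <= prev * (1.0 + tol))      // too close (or equal): push it just past
                    cur = prev * (1.0 + tol);
            }
        }

        int main()
        {
            std::vector<double> costs = {1, 1, 2, 3, 4, 5};
            separateCosts(costs, 0.05);
            for (double c : costs) std::printf("%g ", c);   // 1 1.05 2 3 4 5
            std::printf("\n");
        }

    On the example from the question, {1,1,2,3,4,5} with tol = 0.05 comes out as {1, 1.05, 2, 3, 4, 5}.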

    Read the article

  • Classes and Objects in C++

    - by anurag18294
    class anurag
    {
    private:
        int rollno;
        char name[50];
        int marks;
        float percen;
        void percentage(int num)
        {
            percen = (num / 500) * 100;
        }
    public:
        void getdata(void)
        {
            cout << "\n\nEnter the name of the student:";
            gets(name);
            cout << "\n\nEnter the roll no: and the marks:";
            cin >> rollno >> marks;
            percentage(marks);
        }
        void display(void)
        {
            cout << "\n\nThe name of the student is:";
            cout.write(name, 50);
            cout << "\n\nThe roll no: of the student is:";
            cout << rollno;
            cout << "\n\n The marks obtained is:" << marks;
            cout << "\n\nThe percentage is:" << percen;
        }
    };

    void main()
    {
        clrscr();
        anurag F;
        F.getdata();
        F.display();
        getch();
    }

    Why is the above code not giving the desired output?
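
    The likely culprit is the percentage calculation: num and 500 are both integers, so num/500 is integer division and truncates to 0 for any mark below 500, which makes percen come out as 0. A minimal sketch of the fix, forcing the division into floating point:

        #include <iostream>

        // Illustrative fix for the truncation: divide in floating point, not in int.
        float percentage(int marks, int maxMarks = 500)
        {
            return 100.0f * marks / maxMarks;   // 100.0f promotes the whole expression to float
        }

        int main()
        {
            std::cout << percentage(452) << "%\n";   // prints 90.4, whereas (452/500)*100 would be 0
        }

    (int main and safer input than gets would also be more standard than void main, but they are not what is zeroing the percentage.)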

    Read the article

  • How to implement a multi-threaded asynchronous operation?

    - by drowneath
    Here's what my current approach looks like:

        // Somewhere in a UI class

        // Called when a button called "Start" is clicked
        MyWindow::OnStartClicked(Event &sender)
        {
            _thread = new boost::thread(boost::bind(&MyWindow::WorkToDo, this));
        }

        MyWindow::WorkToDo()
        {
            for(int i = 1; i < 10000000; i++)
            {
                int percentage = (int)((float)i / 100000000.f);
                _progressBar->SetValue(percentage);
                _statusText->SetText("Working... %d%%", percentage);
                printf("Pretend to do something useful...\n");
            }
        }

        // Called on every frame
        MyWindow::OnUpdate()
        {
            if(_thread != 0 && _thread->timed_join(boost::posix_time::seconds(0)))
            {
                _progressBar->SetValue(100);
                _statusText->SetText("Completed!");
                delete _thread;
                _thread = 0;
            }
        }

    But I'm afraid this is far from safe, since I keep getting an unhandled exception at the end of the program's execution. I basically want to run a heavy task on another thread without blocking the GUI.
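
    The unhandled exception at shutdown is commonly a symptom of the worker thread touching GUI objects directly (the SetValue/SetText calls inside WorkToDo) or of the thread outliving the window. One common pattern is to have the worker publish its progress through an atomic and let only the UI thread touch widgets; a rough sketch with std::thread and stand-in widget types (the same shape works with boost::thread):

        #include <atomic>
        #include <chrono>
        #include <cstdio>
        #include <thread>

        // Hypothetical stand-in for the real progress-bar widget.
        struct ProgressBar { void SetValue(int v) { std::printf("progress %d%%\n", v); } };

        class MyWindow
        {
        public:
            void OnStartClicked()
            {
                _done = false;
                _worker = std::thread([this] { WorkToDo(); });
            }

            // Worker thread: only writes the atomics, never touches widgets.
            void WorkToDo()
            {
                for (int i = 1; i <= 10000000; ++i)
                    _percentage = static_cast<int>(100.0f * i / 10000000.0f);
                _done = true;
            }

            // Called on every frame from the UI thread: the only place widgets are updated.
            void OnUpdate()
            {
                _progressBar.SetValue(_percentage);
                if (_done && _worker.joinable())
                    _worker.join();
            }

            bool Running() const { return _worker.joinable(); }

        private:
            std::thread _worker;
            std::atomic<int> _percentage{0};
            std::atomic<bool> _done{false};
            ProgressBar _progressBar;
        };

        int main()
        {
            MyWindow w;
            w.OnStartClicked();
            while (w.Running())                 // stand-in for the UI frame loop
            {
                w.OnUpdate();
                std::this_thread::sleep_for(std::chrono::milliseconds(16));
            }
        }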

    Read the article

  • High Tq values for HAProxy

    - by Will
    I just took over administration of a new environment. A known issue is that the environment is known for high response times (20+ seconds), so I figured I'd turn on haproxy logging and see what is going on. I figured I'd see slow load times in the app servers, but I'm actually seeing high Tq values in HAProxy. The HAProxy is on EC2 and is NOT behind ELB. Sep 5 14:22:00 haproxy-apps01 haproxy[24695]: 76.14.153.221:3371 [05/Sep/2012:14:21:49.780] http-in default_apps/fe04-c 10936/0/0/55/10991 200 488 - - ---- 111/111/0/1/0 0/0 "GET /event_times/next?callback=jQuery170189312373075111_1346854917562&_=1346854918453 HTTP/1.1" As you can see, this one has a Tq of about 10 seconds. Not all the Tq's are high (1+ seconds), but a good percentage of them are (approx 35%). Normally when I see this behavior, I'd expect there to be network issues, but this is an incredibly high percentage of visitors to be having an issue like this, so I'm wondering if anybody has seen this or have any hints on diagnosing if the issue could possibly be on this box?

    Read the article

  • How to calculate CPU % based on raw CPU ticks in SNMP

    - by bjeanes
    According to http://net-snmp.sourceforge.net/docs/mibs/ucdavis.html#scalar_notcurrent ssCpuUser, ssCpuSystem, ssCpuIdle, etc are deprecated in favor of the raw variants (ssCpuRawUser, etc). The former values (which don't cover things like nice, wait, kernel, interrupt, etc) returned a percentage value: The percentage of CPU time spent processing user-level code, calculated over the last minute. This object has been deprecated in favour of 'ssCpuRawUser(50)', which can be used to calculate the same metric, but over any desired time period. The raw values return the "raw" number of ticks the CPU spent: The number of 'ticks' (typically 1/100s) spent processing user-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors). My question is: how do you turn the number of ticks into percentage? That is, how do you know how many ticks per second (it's typically — which implies not always — 1/100s, which either means 1 every 100 seconds or that a tick represents 1/100th of a second). I imagine you also need to know how many CPUs there are or you need to fetch all the CPU values to add them all together. I can't seem to find a MIB that gives you an integer value for # of CPUs which makes the former route awkward. The latter route seems unreliable because some of the numbers overlap (sometimes). For example, ssCpuRawWait has the following warning: This object will not be implemented on hosts where the underlying operating system does not measure this particular CPU metric. This time may also be included within the 'ssCpuRawSystem(52)' counter. Some help would be appreciated. Everywhere seems to just say that % is deprecated because it can be derived, but I haven't found anywhere that shows the official standard way to perform this derivation. The second component is that these "ticks" seem to be cumulative instead of over some time period. How do I sample values over some time period? The ultimate information I want is: % of user, system, idle, nice (and ideally steal, though there doesn't seem to be a standard MIB for this) "currently" (over the last 1-60s would probably be sufficient, with a preference for smaller time spans).
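
    Because the ssCpuRaw* counters are cumulative tick counts, a percentage over any window is just one counter's delta divided by the sum of all the counters' deltas over that window; the tick rate and the CPU count cancel out, so neither needs to be known (on a multi-processor box the result is the average across all CPUs). A minimal sketch of the arithmetic, with illustrative field names, no counter wrap-around handling, and remembering the MIB's caveat that wait time may already be folded into system on some platforms:

        #include <cstdio>

        struct CpuRaw { long long user, nice, system, idle, wait, kernel, interrupt; };

        long long total(const CpuRaw& c)
        {
            return c.user + c.nice + c.system + c.idle + c.wait + c.kernel + c.interrupt;
        }

        // Percentage of time spent in user code between two SNMP samples taken
        // some interval apart (e.g. 10 seconds).
        double userPercent(const CpuRaw& before, const CpuRaw& after)
        {
            long long dUser  = after.user  - before.user;
            long long dTotal = total(after) - total(before);
            return dTotal > 0 ? 100.0 * dUser / dTotal : 0.0;
        }

        int main()
        {
            CpuRaw t0 = {1000, 0, 500, 8000, 100, 0, 0};
            CpuRaw t1 = {1600, 0, 700, 9000, 200, 0, 0};   // a later sample
            std::printf("user: %.1f%%\n", userPercent(t0, t1));   // 600 / 1900 = 31.6%
        }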

    Read the article
