Goal: Store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.
Assumptions: A database is already available, you already know how to gather the information you want, and you are capable of putting it in the database however you like.
Some Ideal Attributes of a Solution
Causes no noticeable performance hit on the server being monitored
Has a very high precision of measurement
Does not store useless or redundant information
Is easy to query (lends itself to gathering/displaying useful information)
Lends itself to being graphed easily
Is accurate
Is elegant
Primary Questions
1) What is a good design/method/scheme for triggering the storing of statistics?
2) What is a good database design for how to actually store the data?
Example answers... that are sort of vague and lame...
1) I could, once per [fixed time interval], store a row with all the performance measurements I care about, one measurement per column, in one big flat table indexed by timestamp and/or server (see the first sketch after this list).
2) I could have a daemon monitor the performance stuff I care about and add a row to the same flat table as in #1 whenever something changes, instead of at fixed time intervals (see the second sketch below).
3) I could trigger as in #1 or #2, but store each aspect of performance I'm measuring in its own table, opening up the possibility of adding tons of rows for often-changing items and few rows for seldom-changing items (see the third sketch below).
Etc.
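To make #1 concrete, here is a minimal sketch of the fixed-interval, flat-table scheme. Everything specific in it is an assumption for illustration: SQLite as the database, a 60-second interval, a server called `web01`, and placeholder collector functions standing in for however you actually gather the numbers.

```python
import sqlite3
import time

# Hypothetical collectors -- stand-ins for however you gather your numbers.
def count_logged_in_customers():
    return 42  # placeholder

def count_widgets_in_flight():
    return 7   # placeholder

conn = sqlite3.connect("perf_stats.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS server_stats (
        sampled_at INTEGER NOT NULL,   -- Unix timestamp
        server     TEXT    NOT NULL,
        customers  INTEGER,
        widgets    INTEGER,
        PRIMARY KEY (server, sampled_at)
    )
""")

SAMPLE_INTERVAL = 60  # seconds; the [fixed time interval] from option #1

while True:
    # One wide row per sample, one measurement per column.
    conn.execute(
        "INSERT INTO server_stats VALUES (?, ?, ?, ?)",
        (int(time.time()), "web01",
         count_logged_in_customers(), count_widgets_in_flight()),
    )
    conn.commit()
    time.sleep(SAMPLE_INTERVAL)
```

Queries and graphs are trivially easy against this shape, but quiet periods still generate identical rows, which costs the "no redundant information" attribute above.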
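Option #2 only changes the trigger: poll frequently, but write a row only when a value changes, so steady-state periods cost no storage. Same assumptions as above (SQLite, `web01`, and a hypothetical `sample_metrics()` collector):

```python
import sqlite3
import time

def sample_metrics():
    # Hypothetical collector; returns a dict of {metric: value}.
    return {"customers": 42, "widgets": 7}

conn = sqlite3.connect("perf_stats.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS server_stats (
        sampled_at INTEGER NOT NULL,
        server     TEXT    NOT NULL,
        customers  INTEGER,
        widgets    INTEGER,
        PRIMARY KEY (server, sampled_at)
    )
""")

POLL_INTERVAL = 1  # poll often...
last = None
while True:
    current = sample_metrics()
    if current != last:  # ...but only store a row when something changed
        conn.execute(
            "INSERT INTO server_stats VALUES (?, ?, ?, ?)",
            (int(time.time()), "web01",
             current["customers"], current["widgets"]),
        )
        conn.commit()
        last = dict(current)
    time.sleep(POLL_INTERVAL)
```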
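Option #3 splits storage per metric so each aspect accumulates rows at its own rate. The sketch below uses one narrow table per metric with per-metric change detection; note that metric names are interpolated into table names, so they must be valid SQL identifiers. Again, the collectors and names are hypothetical:

```python
import sqlite3
import time

METRICS = {  # hypothetical collectors, one per measured aspect
    "customers": lambda: 42,
    "widgets":   lambda: 7,
}

conn = sqlite3.connect("perf_stats.db")
for name in METRICS:
    # One narrow table per aspect, as in option #3. Keys of METRICS are
    # hardcoded identifiers, so the f-string interpolation is safe here.
    conn.execute(f"""
        CREATE TABLE IF NOT EXISTS stat_{name} (
            sampled_at INTEGER NOT NULL,
            server     TEXT    NOT NULL,
            value      REAL    NOT NULL,
            PRIMARY KEY (server, sampled_at)
        )
    """)

last = {}
while True:
    now = int(time.time())
    for name, collect in METRICS.items():
        value = collect()
        if last.get(name) != value:  # per-metric change detection:
            # busy metrics get many rows, quiet metrics get few
            conn.execute(f"INSERT INTO stat_{name} VALUES (?, ?, ?)",
                         (now, "web01", value))
            last[name] = value
    conn.commit()
    time.sleep(1)
```

One query-side consequence to weigh: graphing several metrics together now takes a join or union across tables, which bears on the "easy to query" attribute above.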
In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!