It’s the classic question faced by everyone in Information Services. I know how to do this and I could build software to do it, but I’m a lazy b_____d so I’d rather just pick up a piece of software that does it for me. I love working for large companies because I can usually get them to purchase stuff so that I can loll around stuffing my face all day instead of doing actual work. Unfortunately, not everyone can afford to pick up Microsoft’s Operations Manager or Idera’s Diagnostic Manager. But you still need to monitor your servers. With buy eliminated, that leaves build.
Which is where this excellent blog post by Laerte Junior comes in. He lays out how to build a wrapper around calls to get Performance Counter information using PowerShell. It’s pretty slick and worth a read. Because the thing is, when you need to build your monitoring products, you want to use a language that you know. Since everyone is learning PowerShell (right?), this provides a good foundation for beginning your monitoring project.
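If you haven’t cracked open the cmdlets yet, here’s a minimal sketch of the kind of call a wrapper like Laerte’s builds on. This is my own quick illustration, not his code; the counter paths, sample interval, sample count, and output path are all just placeholders.

    # Minimal sketch: sample a few counters locally and dump them to CSV.
    # The counter paths, interval, sample count, and output file are placeholders.
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes',
                '\PhysicalDisk(_Total)\Avg. Disk Queue Length'

    Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
        ForEach-Object { $_.CounterSamples } |
        Select-Object TimeStamp, Path, CookedValue |
        Export-Csv -Path 'C:\PerfLogs\quick-sample.csv' -NoTypeInformation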
Great find, Grant. I will be playing with that come Monday morning. Even though we use Spotlight and Foglight, I always find myself coming back to other, faster, more customizable solutions.
Grant,
Thanks for the link to Laerte’s solution.
After reading it, I had some further thoughts on “buy versus build versus reuse”.
While I admire the inventiveness of his PowerShell solution, the results appear very similar to the PerfMon collection files that can be created with either the PerfMon GUI tool (interactively creating and scheduling a batch collection to a CSV or TSV text file) or the LogMan command-line utility with parameter files to create and manage batch PerfMon collections.
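For reference, a LogMan-driven collection along those lines only takes a couple of commands. The collector name, counters, and paths below are examples of the general shape, not my actual configuration:

    # Create a data collector that samples once a minute and writes CSV output
    # (collector name, counters, and output path are examples only).
    logman create counter DailyPerf -f csv -si 00:01:00 -o C:\PerfLogs\DailyPerf -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes"

    # Start it; 'logman stop DailyPerf' ends the collection.
    logman start DailyPerf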
I have successfully used these existing tools for creating and managing PerfMon collection files, as they require little effort and no development (to just collect the data into flat files). After having collected PerfMon data on several systems this way, I had first planned to load the data into a SQL DB (as Laerte had mentioned) for subsequent analysis, reporting, charting, etc., but the wide variation of column types, meanings, etc. from system to system and across different collection files drove me towards a different solution.
I ended up creating a solution based on Microsoft’s LogParser to generate performance charts directly from the PerfMon collection flat files (actually, post-processed flat files that resolve minor issues with collected PerfMon data – processing that might also be needed for PowerShell collected PerfMon data). That’s correct – directly process flat files and no SQL DB needed (for the type of collections and analysis I have needed so far)! The resulting solution has greater flexibility than I first thought possible, using LogParser’s SQL syntax against the flat file data without being encumbered by first loading the data in a DB (treating the PerfMon collection flat file like a very wide single table – hundreds of columns, if needed).
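To give a flavor of the approach, a call along the following lines charts one counter column straight from the flat file. The column names are invented stand-ins for the real PerfMon headers (cleaning those up is part of what my post-processing handles), and the CHART output format requires the Office Web Components to be installed:

    # Rough sketch: chart one counter straight from a post-processed PerfMon TSV file.
    # "SampleTime" and "CpuPct" are invented column names standing in for the real headers.
    LogParser.exe -i:TSV -o:CHART -chartType:Line -groupSize:800x600 `
        "SELECT SampleTime, CpuPct INTO cpu.gif FROM perf.tsv"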
As an example, a PerfMon collection tab-separated (TSV) text file for a single day of a single system with one minute sampling across a large number of counters (close to 1,000) is only 10-20 MB.
So while I have a “build” solution (versus “buy”), I have also “reused” the capabilities of PerfMon / LogMan rather than reinventing them. The experience of developing this solution taught me a lot about PerfMon data in general, the collection process, the available tools, preferred operational processes for managing PerfMon collection files, and desired forms of output and results. I consider my solution “good enough” for now, but have more ideas on how I would like to improve it over time.
On the other hand, I later found a project on CodePlex called PAL – Performance Analysis of Logs (Link: http://pal.codeplex.com). It also uses LogParser to process PerfMon log files, but in ways different from my solution. I may eventually look more towards using a solution like PAL.
After I read about PAL, I first thought, “Why did I spend the time and effort to create my custom solution?” Instead of treating it as a throw-away effort, though, I now view it as a great experience builder (learning PerfMon, performance analysis, etc.) and as a solution that provides some forms of results similar to PAL’s and some that differ.
We all come from different backgrounds and experience levels in specific subjects – whether the subjects are performance analysis, DBs, application development, etc. I think that Laerte learned more about PerfMon and performance analysis for his efforts in his solution (just as I did from my efforts), and others will learn as well from his post.
I have also learned that the value of a solution continues to increase as it evolves from simple scripts to more powerful tools to an application.
I hope to present on my solution some day (a presentation and demonstration for a local user group meeting is in the works, to start), with similar goals in mind.
I would be interested in hearing others’ experiences with “buy versus build versus reuse”.
Thanks,
Scott R.
Holy Cow! That’s a great blog post. You should copy it back out of my comments and put it up on your blog.
That said, I agree with all your points. I think it all comes down to the level of customization you want to incorporate into the work you do when you decide not to buy. I’m frequently surprised at just how much work people will put in when a free, built-in utility already exists and does most of what you need. I still like Laerte’s example just because it shows that there is more than one way to get things done and you don’t need to run out and drop X amount of cash to get what you want.
First, sorry for replying here on your blog, Grant, but while I agree with some of what Scott says, I believe he has overlooked other things.
I built these module functions to make my life easier. Sure, it looks like PerfMon, but I believe the command line is much more flexible than the GUI.
One of the things that frustrated me is that, even using the PerfMon GUI, collecting data from several remote servers at the same time was painful. Running PerfMon locally against perhaps 3 or 4 remote servers is acceptable, but when we start talking about tens of servers, the PerfMon GUI is not feasible. It is heavy and hangs all the time. With the PowerShell solution, I can, for example, start a collection for 20 servers in the background with one command line, and with one more line upload that data to SQL Server.
It’s all about flexibility and scalability. Of course, when we are talking about hundreds of servers, other solutions may be more feasible. But I have friends who used to run Performance Monitor locally and generate CSV files on each server, and then they had a relatively large and complex stored procedure to upload those CSV files into SQL Server. Today, with one line from their desktop, they do the same thing. I believe that every environment has its own style, and we have to find the most convenient way to monitor it, whether with PowerShell, PerfMon, or third-party software.
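Just to show the general shape of that, here is a generic sketch, not the actual module code; the server list, counter, schedule, and SQL Server names are placeholders:

    # Generic sketch, not the module itself: collect from many servers in a
    # background job; the samples can then be pushed into SQL Server.
    $servers = Get-Content .\servers.txt   # e.g. 20 server names, one per line

    Start-Job -Name PerfCollect -ScriptBlock {
        Get-Counter -ComputerName $using:servers `
                    -Counter '\Processor(_Total)\% Processor Time' `
                    -SampleInterval 60 -MaxSamples 60 |
            ForEach-Object { $_.CounterSamples } |
            Select-Object TimeStamp, Path, CookedValue |
            Export-Csv -Path C:\PerfLogs\servers.csv -NoTypeInformation
    }

    # The second line could then load the CSV with something like Write-SqlTableData
    # from the SqlServer module (instance, database, and table names are placeholders):
    # Write-SqlTableData -ServerInstance MYSQL01 -DatabaseName PerfDB -TableName CounterData -InputData (Import-Csv C:\PerfLogs\servers.csv) -Force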
Hello, just wanted to give you a brief heads-up and let you know a few of the pictures aren’t loading properly. I’m not sure why, but I think it’s a linking issue. I’ve tried it in two different web browsers and both show the same outcome.
Thanks, yes, it’s because I moved the blog. I’m still recovering from it.