PowerShell Problem Solver: Process Performance Reporting
We’re back once again with a problem scenario about getting average processor time, as well as the top five processes that consume the most CPU time. If this is your first time in this series, you’ll definitely want to go back and review the earlier PowerShell Problem Solver articles. The original question I came across asked how to combine processor and process information into a single report, presumably for a group of remote computers. At least that’s the approach I take: if I can do something for one server, I should be able to do it for 10, 100, or 1000 servers.
PowerShell Processor Article Series:
- PowerShell Problem Solver: Processor Loads
- PowerShell Problem Solver: More Processor Performance
- PowerShell Problem Solver: Process CPU Utilization
- PowerShell Problem Solver: Process Performance Counters
- PowerShell Problem Solver: Process Performance Reporting
- PowerShell Problem Solver: Process Performance For All
After everything we looked at over the last several articles, it seems to me that the best way to get the most accurate information is with performance counters. The added benefit is that we can query multiple counters at the same time. Let’s test with a single remote computer.
From the previous articles, I know we will need these counters.
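The original listing doesn’t survive here, but based on the series so far, the two counters would look something like this (the exact paths are my assumption):

```powershell
# Counter paths for overall CPU time and per-process CPU time
# (paths assumed from the earlier articles in this series)
$counters = "\Processor(_Total)\% Processor Time",
            "\Process(*)\% Processor Time"
```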
At this point, I just need enough data to help me build the PowerShell commands.
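A sketch of the kind of command that gathers a sample set from one remote computer; `$counters`, `$data`, and the computer name SRV1 are placeholders of my own:

```powershell
# Query both counters in a single call from one remote computer
$data = Get-Counter -Counter $counters -ComputerName SRV1
```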
Because I am collecting process data for all processes, this will include instances like Idle, System, and _Total that I don't want. So I'll filter those out with a regular expression and group the results using Group-Object.
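One way to express that filter-and-group step, assuming `$data` holds the `Get-Counter` results from earlier (the regex and group names are my own sketch):

```powershell
# Drop the Idle, System, and _Total process instances, then split the samples
# into two groups: the overall Processor counter and the per-process counters
$grouped = $data.CounterSamples |
    Where-Object { $_.Path -notmatch '\\process\((idle|system|_total)\)' } |
    Group-Object -Property { if ($_.Path -match '\\process\(') { 'Process' } else { 'Processor' } }
```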
So far this is what we have. To assemble a final object, I'll need to define some properties. First, let's take the Processor's % Processor Time counter and calculate the average value.
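Averaging the overall processor samples might look like this, assuming the grouped samples described above are in a variable like `$grouped`:

```powershell
# Average the CookedValue of the overall Processor counter samples
$avg = ($grouped | Where-Object { $_.Name -eq 'Processor' }).Group |
    Measure-Object -Property CookedValue -Average |
    Select-Object -ExpandProperty Average
```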
Now let's get the process counters' Group property, which will be the collection of all the counter samples, group that collection on the InstanceName property, and then calculate the average CPU time. I'm essentially slinging objects through the pipeline to get my desired result.
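That object-slinging could be sketched like so, again assuming the `$grouped` variable from the filtering step:

```powershell
# Expand the process samples, regroup by instance name, and compute
# each process's average CPU time from its CookedValue
$processAvg = ($grouped | Where-Object { $_.Name -eq 'Process' }).Group |
    Group-Object -Property InstanceName |
    ForEach-Object {
        [pscustomobject]@{
            Name    = $_.Name
            Average = ($_.Group | Measure-Object -Property CookedValue -Average).Average
        }
    }
```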
I can now select my top five processes by average CPU time.
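Selecting the top five is a straightforward sort and select, assuming the per-process averages are in `$processAvg`:

```powershell
# Keep only the five processes with the highest average CPU time
$top = $processAvg | Sort-Object -Property Average -Descending |
    Select-Object -First 5
```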
Even though I know what the computer name is, let's extract it from the data.
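Every counter sample's Path begins with `\\computername\`, so one way to pull the name out (assuming `$data` from earlier):

```powershell
# Splitting on the backslash yields '', '', 'computername', ... ; take index 2
$computername = ($data.CounterSamples[0].Path -split '\\')[2]
```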
That should be all the raw data I need. All that remains is to assemble the pieces into an object that we can write to the pipeline. At this point there are several possibilities. Let's create an ordered hashtable from the variables so that the property names will be in the order that I define them.
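A hedged sketch of that ordered hashtable, using the variables assumed in the earlier steps (the property names are my own choices):

```powershell
# [ordered] preserves the property names in the order they are defined
$hash = [ordered]@{
    Computername     = $computername.ToUpper()
    AvgProcessorTime = [math]::Round($avg, 4)
    Processes        = $top
}
```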
Now I can create a new object using the hashtable for the properties.
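Creating the object from the hashtable could look like this:

```powershell
# Build the final object from the ordered hashtable
$obj = New-Object -TypeName PSObject -Property $hash
$obj

# Or, equivalently, cast the hashtable directly:
# [pscustomobject]$hash
```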
The Processes property is a collection of nested objects. This last step really depends on how you plan to consume the results. For example, perhaps you want to construct the object like this:
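One alternative construction, sketched under the same assumptions, gives each of the top five its own numbered property instead of a single Processes collection:

```powershell
# Promote each top process to its own property: Process1, Process2, ...
$hash = [ordered]@{
    Computername     = $computername.ToUpper()
    AvgProcessorTime = [math]::Round($avg, 4)
}
$i = 0
foreach ($process in $top) {
    $i++
    $hash.Add("Process$i", $process)
}
[pscustomobject]$hash
```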
In this example, I create a custom property for each process. This could come in handy if I wanted to get the number one process. Although again, the property is a nested object. Or I could make each process into a distinct property.
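Making each process a distinct property might look like this: the process name itself becomes the property name and the rounded average becomes a simple value rather than a nested object (again, my own sketch):

```powershell
# Use each process's name as a property, with its average CPU time as the value
$hash = [ordered]@{
    Computername     = $computername.ToUpper()
    AvgProcessorTime = [math]::Round($avg, 4)
}
foreach ($process in $top) {
    $hash.Add($process.Name, [math]::Round($process.Average, 4))
}
[pscustomobject]$hash
```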
Personally, I think this approach works fine if you are only querying a single computer. But if you query multiple servers, PowerShell won't know how to present the results, because each object would have a different set of properties. When you are creating PowerShell tools, you want to write a single type of object to the pipeline. Technically, each of these objects is the same type, but when it comes time to scale out, per-computer property names won't work. This brings us to the logical conclusion: scaling the process out. But to explain that properly, I'll need a separate article. Plus, your head might be buzzing a bit from the code samples. I encourage you to try them out. Even if you don't need the performance counter data, I'm hoping some of the techniques I've demonstrated will prove useful.