PowerShell Problem Solver: Process Performance Counters
Over the course of several articles, we’ve been exploring a variety of techniques and tools for getting processor and process utilization values. Last time we explored ways to find out how much of the CPU a process is consuming. I left off promising a discussion of relevant performance counters, and that’s what we are going to cover today. We’ll be using the Get-Counter cmdlet, so take a moment to read its help and examples.
PowerShell Processor Article Series:
- PowerShell Problem Solver: Processor Loads
- PowerShell Problem Solver: More Processor Performance
- PowerShell Problem Solver: Process CPU Utilization
- PowerShell Problem Solver: Process Performance Counters
- PowerShell Problem Solver: Process Performance Reporting
- PowerShell Problem Solver: Process Performance For All
The first step is to identify the available counters.
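A quick way to do that is to ask Get-Counter for the Process counter list set. Something along these lines (note that counter set names are localized, so "Process" assumes an English system):

```powershell
# List the Process counter set and its description
Get-Counter -ListSet Process | Select-Object -Property CounterSetName, Description
```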
We can drill down to the Paths to see specific counters.
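Expanding the Paths property of the list set shows every counter path the set contains, for example:

```powershell
# Show every counter path in the Process set
(Get-Counter -ListSet Process).Paths
```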
The asterisk (*) can be replaced with specific process names, which I can verify by looking at paths with instances. For the task at hand, I think the first counter in the list is what we want. Let's try it out locally.
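A sketch of both steps: PathsWithInstances breaks the paths out per running process, and the % Processor Time counter can then be sampled for every instance.

```powershell
# Paths broken out per process instance
(Get-Counter -ListSet Process).PathsWithInstances

# Sample % Processor Time once for every process instance
Get-Counter -Counter '\Process(*)\% Processor Time'
```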
Some system processes are protected, so you may see an error. This counter gets all processes, but you can narrow it down to a specific instance.
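To target one instance, substitute the process name for the asterisk. Here powershell is just an illustrative instance name:

```powershell
# Sample a single process instance
Get-Counter -Counter '\Process(powershell)\% Processor Time'
```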
For our purposes, we want all processes. Well, almost all. We don't need to see values for Idle, System, and _Total, so those will need to be filtered out. And we want to get the cooked value separately.
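One way to sketch that filtering is to expand the CounterSamples property and keep only the instance name and cooked value:

```powershell
(Get-Counter -Counter '\Process(*)\% Processor Time').CounterSamples |
    Where-Object { $_.InstanceName -notmatch '^(idle|system|_total)$' } |
    Select-Object -Property InstanceName, CookedValue
```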
Because I'm running PowerShell 4.0, I can take advantage of the new Where() method, which performs much faster than piping to Where-Object. Here's a streamlined version of the previous command.
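On PowerShell 4.0 and later, the same filter can be written with the Where() collection method, along these lines:

```powershell
(Get-Counter -Counter '\Process(*)\% Processor Time').CounterSamples.Where({
    $_.InstanceName -notmatch '^(idle|system|_total)$'
}) | Select-Object -Property InstanceName, CookedValue
```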
The command I have been using gets a one-time counter, and I want to get an average of processor time. I can run Get-Counter and collect a number of samples.
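A sampling run along those lines might look like this:

```powershell
# 12 samples, one every 5 seconds: roughly a one-minute window
$data = Get-Counter -Counter '\Process(*)\% Processor Time' -SampleInterval 5 -MaxSamples 12
```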
This will get 12 samples, one every five seconds, which comes out to about a one-minute sampling of all processes. Once I have the results, I will need to filter out what I don't want. Because I ultimately want averages per process, I'll go ahead and group the results on the instance name.
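A sketch of that filtering and grouping, assuming the samples were saved to a variable such as $data:

```powershell
$grouped = $data.CounterSamples |
    Where-Object { $_.InstanceName -notmatch '^(idle|system|_total)$' } |
    Group-Object -Property InstanceName
```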
You'll notice that in some cases I have more than 12 instances. That is because there are multiple instances of some processes, and I can't find any way with Get-Counter to correlate the results with a process ID. So all of the svchost processes are lumped together. The next step is to process the grouped data so that I can get the computername, the process name, and an average value.
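One way to shape the grouped data, assuming $grouped holds the output of the grouping step, is to average the cooked values with Measure-Object:

```powershell
$grouped | ForEach-Object {
    [pscustomobject]@{
        Computername = $env:COMPUTERNAME
        Process      = $_.Name
        AvgCPUTime   = ($_.Group | Measure-Object -Property CookedValue -Average).Average
    }
}
```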
If you wanted to use a single one-liner, you could try something like this:
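Here is a sketch of such a pipeline, which samples, filters, groups, and averages in one pass (one of several ways to write it):

```powershell
(Get-Counter '\Process(*)\% Processor Time' -SampleInterval 5 -MaxSamples 12).CounterSamples |
    Where-Object { $_.InstanceName -notmatch '^(idle|system|_total)$' } |
    Group-Object -Property InstanceName |
    ForEach-Object {
        [pscustomobject]@{
            Computername = $env:COMPUTERNAME
            Process      = $_.Name
            AvgCPUTime   = ($_.Group | Measure-Object -Property CookedValue -Average).Average
        }
    } | Sort-Object -Property AvgCPUTime -Descending
```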
Now that we have this working locally, let's query a bunch of remote servers.
In this example, I am getting 60 samples total, one every five seconds. I will eventually need to break the results down by server so I'm going to group all of the counter data by the computername, which I'm extracting from the Path property using a scriptblock with Group-Object. Now for the tricky part. I have to process the counter samples for each computer in order to get the top five by average percent processor time.
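A sketch of the remote sampling and grouping. The server names are placeholders, and the computername is extracted from the third element of each sample's Path, which has the form \\server\process(name)\% processor time:

```powershell
# Hypothetical server names -- substitute your own
$computers = 'server1', 'server2', 'server3'

# 60 samples, one every 5 seconds, across all servers
$data = Get-Counter -Counter '\Process(*)\% Processor Time' -ComputerName $computers -SampleInterval 5 -MaxSamples 60

# Group the samples by the computername embedded in each Path
$grouped = $data.CounterSamples |
    Where-Object { $_.InstanceName -notmatch '^(idle|system|_total)$' } |
    Group-Object -Property { $_.Path.Split('\')[2] }
```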
Because the command uses a number of pipelined expressions, the value of $_ changes. That is why in the beginning of the ForEach-Object I am defining a computername variable so that I can use it later in the output. The end result is something like this: I could export this to a CSV file or create an HTML report. If you want something a bit easier to read, you can use Format-Table. I'll also add some formatting for the AvgCPUTime value to make it easier to read.
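Putting those pieces together, assuming $grouped holds the per-computer groups from the previous step, a sketch that captures the computername before $_ changes, takes the top five processes per server, and formats the average:

```powershell
$grouped | ForEach-Object {
    $computername = $_.Name   # capture now, before $_ changes in the nested pipeline
    $_.Group | Group-Object -Property InstanceName | ForEach-Object {
        [pscustomobject]@{
            Computername = $computername
            Process      = $_.Name
            AvgCPUTime   = ($_.Group | Measure-Object -Property CookedValue -Average).Average
        }
    } | Sort-Object -Property AvgCPUTime -Descending | Select-Object -First 5
} | Format-Table -GroupBy Computername -Property Process,
    @{Name = 'AvgCPUTime'; Expression = { '{0:N4}' -f $_.AvgCPUTime }}
```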
If you recall the original problem at the beginning of the series, there is one final step, and that is to put everything together. We'll tackle that next time.