Adjusting the ConfigMgr Script Execution Timeout (pre 1810)


In ConfigMgr, scripts default to an execution timeout of 60 seconds.  Normally this timeout is fine, but you may run into situations where scripts run long and clients start receiving the following error in DcmWmiProvider.log:

In-line script execution time-out…
Failed to process CScriptProvider::PutInstanceAsync.
The script execution has timed out. (Error: 87D00321; Source: CCM)

ConfigMgr 1810 introduced the option to set the script timeout in client settings.  If you are not on 1810 yet and need to adjust the timeout, you are in the right place.  User raphael posted a blog on how to adjust the timeout; his script works well for a single-site infrastructure, but the client setting does not flow down from a CAS to primary sites.  For this reason I adapted his original script to handle a multi-level site hierarchy.  Update the variables at the top of the script as required and run it to set the script execution timeout.

$SiteCode = "CM1"
$SiteServer = ""
$ScriptTimeout = 120

$CCMAgents = Get-WmiObject -Namespace "root\sms\site_$SiteCode" -Class SMS_SCI_ClientComp -ComputerName $SiteServer |
    Where-Object {$_.ClientComponentName -eq 'Configuration Management Agent'}

foreach ($CCMAgent in $CCMAgents) {
    $props = $CCMAgent.Props
    for ($i = 0; $i -lt $props.Count; $i++) {
        if ($props[$i].PropertyName -eq "ScriptExecutionTimeout") {
            $props[$i].Value = $ScriptTimeout
        }
    }
    $CCMAgent.Props = $props
    $CCMAgent.Put() # write the updated property list back to the site
}
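To sanity-check the change, you can read the value back from the same WMI class. This is a sketch, assuming the `$SiteCode` and `$SiteServer` variables from the script above are still set:

```powershell
# Read back the ScriptExecutionTimeout property to confirm the new value took
$agent = Get-WmiObject -Namespace "root\sms\site_$SiteCode" -Class SMS_SCI_ClientComp -ComputerName $SiteServer |
    Where-Object {$_.ClientComponentName -eq 'Configuration Management Agent'} |
    Select-Object -First 1
($agent.Props | Where-Object {$_.PropertyName -eq 'ScriptExecutionTimeout'}).Value
```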


Performance Counter Script


This particular counter script started out as a project tasking.   The request was simple, if vague: find out why our computers are so slow.  Rather than remoting into a bunch of random computers and taking screenshots of Task Manager, I came up with the following script.

I’ll walk through some basic examples of what the parameters do, and then delve into how they do it.

Example 1:

.\LocalCounterScript.ps1 -TargetComputer TEST-BOX-1 -MonitorDuration 5 -SamplingRate 5

This example runs the counter script against a remote system for five minutes, taking one sample every five seconds.  We're not looking for any specific processes here, so it captures from the master list embedded in the script itself.   When it finishes running, the script will not only dump a CSV report containing the highest (or lowest, where relevant) values, but it will also dump the raw data it collected.

Example 2:

.\LocalCounterScript.ps1 -TargetComputer TEST-BOX-2 -TargetProcess TaniumClient -logPath C:\Temp

This time we're still targeting a remote system, but we're specifically looking for the TaniumClient process.  While we are only capturing that one application process, the script always captures basic system performance counters such as hard drive activity, memory usage, and CPU usage.  Also, we've redirected the log file to a new location.  Because we didn't specify a monitor duration or sampling rate, the script falls back to its defaults of a one-minute duration with counters captured every two seconds.

Alternatively, you can run this script against a list of systems via a simple loop. Combined with the start-job cmdlet, you can quickly collect performance data from a wide range of systems.
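As a sketch of that approach (the script path and the computers.txt list are hypothetical; adjust to wherever you keep the script):

```powershell
# Read a list of target computers (one name per line) and start a
# background job per computer so the captures run in parallel.
$computers = Get-Content .\computers.txt
foreach ($computer in $computers) {
    Start-Job -Name "Counters-$computer" -ScriptBlock {
        param($target)
        & "C:\Scripts\LocalCounterScript.ps1" -TargetComputer $target -MonitorDuration 5 -SamplingRate 5
    } -ArgumentList $computer
}
# Wait for all capture jobs to finish and review their console output
Get-Job -Name "Counters-*" | Wait-Job | Receive-Job
```

Each job writes its own CSV and BLG to the log path, so nothing collides even with many targets running at once.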


But How Does It Work?

Starting with the parameter block, here are the basic items I’ve decided we would want to change:

param (
        [string]$targetComputer = $env:COMPUTERNAME,
        [double]$monitorDuration = 1,
        [int]$samplingRate = 2,
        [string]$logPath = "\\testServer\uploads\synack\counterscript\",
        [string]$targetProcess = "BLANK"  # set this if you only want to track a specific process (e.g. collecting Tanium report data, so you only want to monitor the TaniumClient process). The generic system counters are collected either way.
)

The first thing we might want to change is the target computer. Since I was frequently running this script against a computer I was already logged into, the default value is the local computer.  You can run it with “hostname” or even nothing at all, but because the script generates a report at the end, having the actual computer name matters.

Monitor duration and sampling rate are pretty much what they sound like: how long do you want to monitor the computer and how frequently are you checking your counters.

Log path is similarly self-explanatory.  It is a root folder rather than a specific file name, since if you're doing multiple computers at a time I didn't want a single fixed log file name.

Target process is where you specify what process you're trying to monitor. If you don't specify anything, it will monitor everything on the master list I've hard-coded into the script.  This was based on a list of all the processes we thought would significantly impact system performance.  If you do specify a process, it monitors that process as well as the basic system counters like hard drive, memory, and CPU usage.

# Variable Declaration
$masterCountersList = @() #this is the array that will hold the GenericCounters combined with the process specific counters so I can run them all at once
$startTime = (Get-Date -Format "yyyy-MM-dd_HHmmss") # 24-hour timestamp so file names are unambiguous and sort correctly
$GenericCounters =
"\\$TargetComputer\PhysicalDisk(*)\% Idle Time",
"\\$TargetComputer\Memory\% committed bytes in use",
"\\$TargetComputer\Memory\Available MBytes",
"\\$TargetComputer\Memory\Free System Page Table Entries",
"\\$TargetComputer\Memory\Pool Paged Bytes",
"\\$TargetComputer\Memory\Pool Nonpaged Bytes",
"\\$TargetComputer\Processor(_total)\% processor time",
"\\$TargetComputer\Processor(_total)\% user time" # these are all the counters that aren't process specific

This section is all of the basic counters I want to track whether I’m after a specific process or not.  Also, this sets our $startTime variable, which we’re using for logging purposes.

After the generic counters comes the massive list of counters we were tracking on a generic system scan.  I’m not going to paste them here, but they include everything from antivirus to Tanium client and SCCM.

# This turns the process names into full counter paths so we don't have to enter three of them per each process we add later down the line
function Return-CounterArray ($processName) {
      $counters = @()
      $counters += "\\$TargetComputer\Process($processName)\Handle Count"
      $counters += "\\$TargetComputer\Process($processName)\Thread Count"
      $counters += "\\$TargetComputer\Process($processName)\Private Bytes"
      $counters += "\\$TargetComputer\Process($processName)\% Processor Time"
      return $counters
}

# This takes the generic counters and adds them to the master list along with all the process-based ones. I feel like this could probably be rolled up into Return-CounterArray, but it looks pretty like this
function Generate-Counters () {
    $allCounters = @()
    $allCounters += $GenericCounters
    $ProcessList | % {$allCounters += (Return-CounterArray $_)}
    return $allCounters
}

This section actually takes all of those processes in the list and generates the four different counters we want to track for each one. If I don’t use a function like this, we just end up with a colossal list in the script itself, and nobody wants that.

# Actually start the script now
$ReportArray = @() # this holds all the generate report objects for later export
$TempArray = @() # this holds the objects while we figure out the highest value
Write-host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Starting report on $TargetComputer. Please be patient as the process begins."
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Generating master list based on $($GenericCounters.Count) System counters and $($ProcessList.count) Processes."
$masterCountersList = (Generate-Counters)
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Master list created with $($masterCountersList.count) items."
$maxSamples = [Math]::Round(($monitorDuration*60/$samplingRate), 0)	 #multiplies your monitor duration minutes by 60 and divides by your sampling interval. Rounds to 0 decimal places because Integers
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Will take $maxSamples samples over the course of $monitorDuration minutes."
$rawCounterDump = @()

# This actually goes and gets the counter information. Woot. 
$rawCounterDump = Get-Counter -Counter $masterCountersList -SampleInterval $samplingRate -MaxSamples $maxSamples -ComputerName $TargetComputer -ErrorAction SilentlyContinue

# This will export everything to BLG files so you can review them in Perfmon later if you'd like (gives a pretty line graph!) 
if ($logPath[-1] -ne "\") {$logPath += "\"}
$endTime = (Get-Date -Format "yyyy-MM-dd_HHmmss")
$blgDump = $logPath+"$TargetComputer-$endTime-RawData.blg"
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Dumping raw Perfmon data to $blgDump."

Now we’re getting into what actually does the work.  We’re using our get-date cmdlet to track time along the way, just in case something hangs. We’re also declaring our different arrays for holding information.  I also have it give me a count of the counters and processes, mostly for diagnostic purposes. If something looks off, it probably is.  This is also where we do a bit of math to tell the Get-Counter cmdlet how many samples we’re taking. Since it only handles integers, we have to round it to 0 decimal places.  After the math and setup is complete, we get the counters from our target computer, log when we finish, and dump the raw data as a BLG file that you can open later in PerfMon.

# Export the raw data to the BLG, then turn it into an object array for further processing.
$rawCounterDump | Export-Counter -Path $blgDump
$rawCounterDump.CounterSamples | % {
    $path = $_.Path
    $obj = New-Object psobject -Property @{
        ComputerName = $TargetComputer
        Counter = $path.Replace("\\$($TargetComputer.ToLower())","")
        Item = $_.InstanceName
        Value = [Math]::Round($_.CookedValue, 2)
        DateTime = (Get-Date -Format "yyyy-MM-dd HH:mm:ss")
    }
    $TempArray += $obj
}
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - $($TempArray.count) total samples collected."

Here, we are taking that raw data and converting it to an object array that is easier to search later.   Again, this outputs some diagnostic information just in case something looks off during the conversion.

# This bit takes all the entries in TempArray, gets the unique counter names, finds all entries for that counter name, looks for the highest (or lowest where it matters) value, and then adds only the matching entry to the "highest value" report
$UniqueCounters = ($TempArray | select -Property Counter -Unique).counter
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - $($UniqueCounters.count) unique counters discovered"
foreach ($c in $UniqueCounters) {
    $targetEntries = $TempArray | ? {$_.Counter -eq $c}
    # For these counters a LOWER value is the interesting one; -like lets the PhysicalDisk pattern match the expanded per-disk instance names
    if ($c -like "\PhysicalDisk(*)\% Idle Time" -or $c -eq "\Memory\Available MBytes" -or $c -eq "\Memory\Pool Nonpaged Bytes") {$highValue = ($targetEntries | Measure-Object -Property Value -Minimum).Minimum}
    else {$highValue = ($targetEntries | Measure-Object -Property Value -Maximum).Maximum}
    $selectedEntry = $TempArray | ? {$_.Counter -eq $c -and $_.Value -eq $highValue}
    if ($selectedEntry.count -gt 1) {$selectedEntry = $selectedEntry[0]}
    $ReportArray += $selectedEntry
}

In this specific case, we wanted the most “significant” value for each counter over the measured time period. For available memory, that would be the lowest number; for a process CPU usage counter, the highest.   We find each uniquely named counter, gather everything that has that name, and then find the most significant value for that name.  Once we have it, we save it to our reporting array.
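To illustrate the min-versus-max selection with Measure-Object (the sample values below are made up, not from a real capture):

```powershell
# Fake samples for two counters to show how the "most significant" value is picked
$samples = @(
    [pscustomobject]@{Counter = "\Memory\Available MBytes";       Value = 812},
    [pscustomobject]@{Counter = "\Memory\Available MBytes";       Value = 455},
    [pscustomobject]@{Counter = "\Process(foo)\% Processor Time"; Value = 12.5},
    [pscustomobject]@{Counter = "\Process(foo)\% Processor Time"; Value = 87.3}
)
# The LOWEST available memory reading is the interesting one...
($samples | ? {$_.Counter -eq "\Memory\Available MBytes"} | Measure-Object -Property Value -Minimum).Minimum   # 455
# ...while for a process CPU counter we want the HIGHEST reading
($samples | ? {$_.Counter -like "\Process(*)*"} | Measure-Object -Property Value -Maximum).Maximum             # 87.3
```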

# Generates a file name based on what you asked the script to do, and dumps it to a CSV for manager-ization later. 
if ($targetProcess -eq "BLANK") {$outLog = $logPath+"$TargetComputer-$startTime-to-$endTime-Results.csv"}
else {$outLog = $logPath+"$TargetComputer-$TargetProcess-$startTime-to-$endTime-Results.csv"}
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Writing report to $outLog."
$ReportArray | Export-Csv -Path $outLog -NoClobber -NoTypeInformation -Force
Write-Host -ForegroundColor Green "$(get-date -format hh:mm:ss) - Complete.`n"

This generates the log file based on the parameters you provided earlier. If you were monitoring a specific process, this will name the log file based on that.  Otherwise, it just uses the computer name. The date and time are stamped on as well for future reference, and a spreadsheet is generated for later processing by managers who like spreadsheets.   If your IT department is like ours, you probably already know which processes are killing your CPU cycles, but this will let your manager know that you’ve done due diligence.  I hope this helps, and it sure beats taking screen captures of Perfmon for hours on end.


Below is an example of what we found on a test system. From the screenshot we determined that Tanium was no bueno.

Counter Script



Keeping the SCCM Cache Clean with DCM


SCCM Cache Clean

In environments with frequent software distributions, the SCCM cache folder can quickly take up large amounts of disk space.  This really becomes problematic on older systems or virtual machines with limited disk space.  Our support team found themselves constantly having to track down systems with low disk space and clean the cache.  I came up with this DCM configuration item to automatically detect and clean up content in the cache which is older than a given number of days.  The detection and remediation scripts both write to their own application event log, so you can see a history of cleanup activities.

Creating the CI

Create a new configuration item and select Windows Desktops and Servers as the type of configuration item.

Choose the appropriate operating systems.  In my case I selected all operating systems as I wanted the cache to be cleaned across the board.

Create a new setting, set its type to Script, and set its data type to String.

Add a discovery script for detecting old items in the cache folder.  Customize the number of days as you see fit.  You can also modify the event log source if desired.

$MinDays = 30

New-EventLog -LogName SCCM_Cleanup -Source "DCM" -ErrorAction SilentlyContinue
Write-EventLog -LogName SCCM_Cleanup -Source "DCM" -EntryType Information -EventId 1000 -Message "Detection starting for Cleanup CCMCACHE" -ErrorAction SilentlyContinue

$UIResourceMgr = New-Object -ComObject UIResource.UIResourceMgr
$Cache = $UIResourceMgr.GetCacheInfo()
$count = ($Cache.GetCacheElements() | Where-Object {[datetime]$_.LastReferenceTime -lt (Get-Date).AddDays(-$MinDays)} | Measure-Object).Count

Write-EventLog -LogName SCCM_Cleanup -Source "DCM" -EntryType Information -EventId 1003 -Message "Total obsolete items found: $count" -ErrorAction SilentlyContinue
Write-EventLog -LogName SCCM_Cleanup -Source "DCM" -EntryType Information -EventId 1001 -Message "Detection ending for Cleanup CCMCACHE" -ErrorAction SilentlyContinue

$count # output the count so the compliance rule can compare it against 0


Next, define a remediation script.  Once again you can customize the number of days and event log name.  The number of days should match your detection script.

$MinDays = 30

New-EventLog -LogName SCCM_Cleanup -Source "DCM" -ErrorAction SilentlyContinue
Write-EventLog -LogName SCCM_Cleanup -Source "DCM" -EntryType Information -EventId 1010 -Message "Remediation starting for Cleanup CCMCACHE" -ErrorAction SilentlyContinue

$UIResourceMgr = New-Object -ComObject UIResource.UIResourceMgr
$Cache = $UIResourceMgr.GetCacheInfo()
$Cache.GetCacheElements() | where-object {[datetime]$_.LastReferenceTime -lt (get-date).adddays(-$mindays)} | foreach { $Cache.DeleteCacheElement($_.CacheElementID) }

Write-EventLog -LogName SCCM_Cleanup -Source "DCM" -EntryType Information -EventId 1011 -Message "Remediation ending for Cleanup CCMCACHE" -ErrorAction SilentlyContinue

The final step is to create your compliance rule.  Set the value to check against to 0 and check the run remediation script checkbox.

After you test the new CI, assign it to the appropriate baseline(s) for your environment.  Now you can forget about having to manually clean up the SCCM cache folder ever again.
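Since both scripts write to the custom SCCM_Cleanup event log, you can review the cleanup history on any client. A minimal sketch (run in Windows PowerShell on the client itself):

```powershell
# List the most recent entries the detection and remediation scripts
# wrote to the custom SCCM_Cleanup event log
Get-EventLog -LogName SCCM_Cleanup -Newest 20 |
    Select-Object TimeGenerated, EventID, Message |
    Format-Table -AutoSize
```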

