Post OSD Scheduled Task


Every organization handles OSD differently. At our organization, some apps have been 'baked into the task sequence' as 'Install Application' steps for a very long time and are needed on every single imaged machine. These work perfectly, install consistently, and generally there are no exceptions to a PC having the software.

This isn't the case with all of our widely distributed applications, and rather than build the logic into the task sequence we let the collections do the work after imaging. After a period of time, when the appropriate collections are updated on their various schedules, these new machines happily receive their software and baselines and go on their merry way, as we know all healthy SCCM clients do! In the day-to-day SCCM world this works perfectly fine. Machines are added to collections through whatever method your organization uses (Direct Membership, AD group/OU queries, name-based queries, hardware inventory queries), and there is a general understanding that machines will pop into the collection and receive their applications/updates/CIs in good time.

When our techs image a machine, it can be helpful for it to temporarily have expedited policy refresh rates to speed up those after-the-fact deployments. We had tried a few collection queries to catch these 'new machines' so that we could deploy some aggressive client settings to them, but there is never a perfect query. You usually catch not just new machines but also those risen from the dead pit of staleness, machines rejoined to the domain, reinstalled clients, or some form of in-place upgrade. As an alternative to the collection query, I wrote up a PowerShell script.

What Does It Do?

It creates a scheduled task! 

The script can be run from a 'Run Command Line' or 'Run PowerShell Script' step during OSD (typically near or at the end) with various parameter options. You'll have to toss it into a package to serve up to the task sequence, of course. It will create a scheduled task that runs specified SCCM client policy requests at whatever interval you want, for as long as you want. The task also deletes itself shortly after the duration has passed. I didn't include all of the schedule types because it is a very long list, but most of the key ones are there and any others can be easily added.

How Do I Use It?

New-ClientActionScheduledTask.ps1 -Schedule MachinePol -Interval 5

This will create a scheduled task that runs every 5 minutes for 24 hours. I did say aggressive at least once up there, right? The task will receive a generated name based on the requested schedules. The above would produce a task named
SCCM Action Scheduler – [MachinePol]
which calls a file (Start-CMClientAction.ps1) that is generated and stored in C:\Windows\Temp.
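For reference, here is a minimal, hedged sketch of what a generated Start-CMClientAction.ps1 might boil down to; it is an approximation, not the actual generated file (grab the real script from the GitHub repo). The GUIDs are the well-known ConfigMgr client trigger schedule IDs:

```powershell
# Well-known ConfigMgr client trigger schedule IDs
$Schedules = @{
    MachinePol  = '{00000000-0000-0000-0000-000000000021}' # Machine policy retrieval & evaluation
    HardwareInv = '{00000000-0000-0000-0000-000000000001}' # Hardware inventory
    AppEval     = '{00000000-0000-0000-0000-000000000121}' # Application deployment evaluation
    UpdateScan  = '{00000000-0000-0000-0000-000000000113}' # Software updates scan
    UpdateEval  = '{00000000-0000-0000-0000-000000000108}' # Software updates deployment evaluation
}

foreach ($Action in @('MachinePol')) {
    # Fire the trigger only where the CIM cmdlets (and an SCCM client) exist
    if (Get-Command -Name Invoke-CimMethod -ErrorAction SilentlyContinue) {
        Invoke-CimMethod -Namespace 'root\ccm' -ClassName SMS_Client -MethodName TriggerSchedule `
            -Arguments @{ sScheduleID = $Schedules[$Action] } -ErrorAction SilentlyContinue
    }
}
```

The same pattern extends to any other action: add its trigger GUID to the table and list it in the loop.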

Both the task name and the file name can be specified as parameters to the script aptly named… -FileName and -TaskName

New-ClientActionScheduledTask.ps1 -Schedule AppEval, HardwareInv, UpdateEval, UpdateScan -Interval 30 -Duration 12
Task Sequence Example

This execution produces similar results: a task is created, but it runs every 30 minutes for a 12-hour period. The task will appear in Task Scheduler with the title
SCCM Action Scheduler – [AppEval,HardwareInv,UpdateEval,UpdateScan]
which is based on the actions provided.

Where Do I Git It?

GitHub! I intend to continue adding to this GitHub repository.

Neat Stuff:

I've also used a couple of bits of code to create the scheduled task that you might find interesting. Maybe you've seen them, maybe you haven't.

Accessing a function through PowerShell's function: drive (${function:Name}) will write out the contents of many functions (note: function, not cmdlet). I am leveraging this to generate a .ps1 file that can be easily invoked by the task sequence.

Neat, right? I wrote that function though... what is more interesting is that you can do this with some built-in functions! Try ${function:Clear-Host} or ${function:Get-Verb} and you can see some of the magic behind at least some commands you've used. Many are compiled cmdlets and simply aren't expandable like this, but they can be dug into in other ways.
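A minimal demonstration of the trick, assuming nothing beyond built-in PowerShell (Get-Twice is a throwaway example function):

```powershell
# Define a trivial function, then read its body back through the function: drive
function Get-Twice { param([int]$n) $n * 2 }

# Write the expanded body into a .ps1 that redefines the function when dot-sourced
$Path = Join-Path ([IO.Path]::GetTempPath()) 'Get-Twice.ps1'
Set-Content -Path $Path -Value "function Get-Twice {$(${function:Get-Twice})}"

. $Path          # reload the function from the generated file
Get-Twice 21     # → 42
```

That round trip (expand a function, write it to a file, invoke the file) is exactly the pattern the script uses to produce Start-CMClientAction.ps1.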

$TaskDefinition.Settings.DeleteExpiredTaskAfter = "PT0S"

While this can be a bit odd to work with, the above piece of code allows our scheduled task to 'delete itself' after it expires; specifically, zero seconds after it expires. The T is just a delimiter between the date and the time portions (i.e., days vs. hours/minutes/seconds). You will need to specify an 'EndBoundary' for this to function, which is what our 'Duration' provides.
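Putting that together, here is a hedged sketch of the relevant pieces of a COM-based task definition (the Schedule.Service interface keeps it Windows 7 compatible; the 5-minute/24-hour values, like the rest, are illustrative rather than the script's actual code):

```powershell
# Windows-only COM interface; skip elsewhere
if ($env:OS -eq 'Windows_NT') {
    $Service = New-Object -ComObject 'Schedule.Service'
    $Service.Connect()

    $TaskDefinition = $Service.NewTask(0)
    $Trigger = $TaskDefinition.Triggers.Create(1)            # 1 = TASK_TRIGGER_TIME
    $Trigger.StartBoundary = (Get-Date).ToString('yyyy-MM-ddTHH:mm:ss')
    $Trigger.EndBoundary   = (Get-Date).AddHours(24).ToString('yyyy-MM-ddTHH:mm:ss')
    $Trigger.Repetition.Interval = 'PT5M'                    # repeat every 5 minutes
    $Trigger.Repetition.Duration = 'P1D'                     # for 24 hours

    # Without an EndBoundary on the trigger, DeleteExpiredTaskAfter is ignored
    $TaskDefinition.Settings.DeleteExpiredTaskAfter = 'PT0S' # delete immediately on expiry
}
```

Registering the definition (actions, RegisterTaskDefinition, etc.) is left out here; the full version lives in the GitHub repo.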

function New-ScheduledTaskTimeString {
    param(
        [Parameter(Mandatory = $false)]
        [int]$Hours = 0,
        [Parameter(Mandatory = $false)]
        [int]$Minutes = 0
    )
    $TimeSpan = New-TimeSpan -Hours $Hours -Minutes $Minutes
    $TimeSpanDays = $TimeSpan.Days
    $TimeSpanHours = $TimeSpan.Hours
    $TimeSpanMinutes = $TimeSpan.Minutes

    # Default every segment to an empty string so unused parts drop out cleanly
    $OutputDays = $OutputHours = $OutputMinutes = $Delimiter = ''

    if ($TimeSpanDays -gt 0) {
        $OutputDays = [string]::Format("{0}D", $TimeSpanDays)
    }
    if ($TimeSpanHours -gt 0 -or $TimeSpanMinutes -gt 0) {
        $Delimiter = 'T'
        if ($TimeSpanHours -gt 0) {
            $OutputHours = [string]::Format("{0}H", $TimeSpanHours)
        }
        if ($TimeSpanMinutes -gt 0) {
            $OutputMinutes = [string]::Format("{0}M", $TimeSpanMinutes)
        }
    }
    [string]::Format("P{0}{1}{2}{3}", $OutputDays, $Delimiter, $OutputHours, $OutputMinutes)
}

# Example: New-ScheduledTaskTimeString -Hours 26 -Minutes 30   # returns P1DT2H30M


You give me hours and minutes, I give you a PnDTnHnM duration string to use for a scheduled task. It's just a quick function I wrote for dumping out usable time-interval strings that Task Scheduler understands.

I opted not to use the *-ScheduledTask* cmdlets available post-Windows 7. If you don't mind being incompatible with Windows 7, you could simplify the task-creation code a bit by using them.

Is this the right way to do it? Who knows! I’m sure with some very careful inspection and categorization of our collections and their refresh schedules we could help the situation in other ways. Still a neat bit of code.


N-2 WIM and Task Sequence Management


After my experience at MMSDE and talking with not only other attendees, but speakers that I knew of and those that were new to me, I felt like I could add something that could help others out there as I have been helped along the way.

Managing multiple WIM files and task sequences can be genuinely time consuming, especially if you have been doing it manually. That is why I've been automating as much as possible over the last year: extracting WIM files, making them patch-current every month, and building and capturing WIM files with various versions of Office. Most of this I have been able to script through trial and error as I got the chance.

What I want to share is my script for updating our operating system and upgrade packages using the N-2 method. It is my latest script, but it can help you set up your WIM files in the same way ahead of the other processes I will post in the near future. I do not currently have a lab environment for screenshots yet, but that will also change as I figure out which way to go.

You can run this script on any machine that has the admin console installed.

These are all examples of how the script is currently set up, to give you an idea of the naming conventions for static packages; you can customize the script for your own environment.





The upgrade packages are similar; however, they are also split by language, using EN-US as the example.





NEW would be your newly updated WIM files and _PROD your current WIM files; 1_PREV is the previous month, and 2_PREV is two versions older.
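Since I don't have console screenshots yet, here is an illustrative sketch of the naming convention with a hypothetical $OSVersion of 1703 (names are examples only; adjust to your own conventions):

```
NEW_1703_WIM        (newly built image, about to become production)
_PROD_1703_WIM      (current production image)
1_PREV_1703_WIM     (previous month)
2_PREV_1703_WIM     (two versions back)
RETIRED_1703_WIM    (rotated out)

NEW_1703_en-us      (upgrade packages follow the same pattern, per language)
_PROD_1703_en-us
1_PREV_1703_en-us
2_PREV_1703_en-us
RETIRED_1703_en-us
```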

Here are the steps to acquire your OSD WIM package IDs:

#=+=+=+=+=+=+=+= OSD WIMS =+=+=+=+=+=+=+=
$NEWWIM = (Get-CMOperatingSystemImage -Name "NEW_*$OSVersion*WIM")
$NEWWIMID = ($NEWWIM.PackageID)

$PRODWIM = (Get-CMOperatingSystemImage -Name "_PROD_*$OSVersion*WIM")
$PRODWIMID = ($PRODWIM.PackageID)

$PREV1WIM = (Get-CMOperatingSystemImage -Name "1_PREV_*$OSVersion*WIM")
$PREV1WIMID = ($PREV1WIM.PackageID)

$PREV2WIM = (Get-CMOperatingSystemImage -Name "2_PREV_*$OSVersion*WIM")
$PREV2WIMID = ($PREV2WIM.PackageID)

Similarly, the following commands will find your upgrade packages:

#=+=+=+=+=+=+=+= Upgrade Packages =+=+=+=+=+=+=+=
# Using the EN-US version as an example
$NEW_ENUS = (Get-CMOperatingSystemInstaller -Name "NEW_*$OSVersion*en-us")
$NEW_ENUSID = ($NEW_ENUS.PackageID)

$PROD_ENUS = (Get-CMOperatingSystemInstaller -Name "_PROD_*$OSVersion*en-us")
$PROD_ENUSID = ($PROD_ENUS.PackageID)

$PREV1_ENUS = (Get-CMOperatingSystemInstaller -Name "1_PREV_*$OSVersion*en-us")
$PREV1_ENUSID = ($PREV1_ENUS.PackageID)

$PREV2_ENUS = (Get-CMOperatingSystemInstaller -Name "2_PREV_*$OSVersion*en-us")
$PREV2_ENUSID = ($PREV2_ENUS.PackageID)

Using the values obtained above, we first change the names of the OSD and Upgrade packages.

#=+= OSD =+=
Set-CMOperatingSystemImage -Id $PREV2WIMID -NewName "RETIRED_$($OSVersion)_WIM" -Description "RETIRED"
Set-CMOperatingSystemImage -Id $PREV1WIMID -NewName "2_PREV_$($OSVersion)_WIM" -Description "2-Previous Month"
Set-CMOperatingSystemImage -Id $PRODWIMID -NewName "1_PREV_$($OSVersion)_WIM" -Description "1-Previous Month"
Set-CMOperatingSystemImage -Id $NEWWIMID -NewName "_PROD_$($OSVersion)_WIM" -Description "Current Month"

#=+= Upgrade =+=
Set-CMOperatingSystemInstaller -Id $PREV2_ENUSID -NewName "RETIRED_$($OSVersion)_en-us" -Description "RETIRED"
Set-CMOperatingSystemInstaller -Id $PREV1_ENUSID -NewName "2_PREV_$($OSVersion)_en-us" -Description "2-Previous Month"
Set-CMOperatingSystemInstaller -Id $PROD_ENUSID -NewName "1_PREV_$($OSVersion)_en-us" -Description "1-Previous Month"
Set-CMOperatingSystemInstaller -Id $NEW_ENUSID -NewName "_PROD_$($OSVersion)_en-us" -Description "Current Month"

This cycles the WIM packages: NEW becomes _PROD, and the others trickle down until 2_PREV becomes RETIRED.

(Note: one of my other processes, which I will post, will automatically take the RETIRED packages and rename them NEW when building the new WIM files.)

Now, for this next step you will have to customize the script for your environment. This will update your task sequences with the _PROD packages.

#=+=+=+=+=+=+=+= Update Task Sequences =+=+=+=+=+=+=+=
# If managing multiple versions, you can add an if ($OSVersion -eq "XXX") check and wrap the following steps in {}.
#=+= OSD =+=
$PRODWIM = (Get-CMOperatingSystemImage -Name "_PROD_*$OSVersion*WIM")

Set-CMTSStepApplyOperatingSystem -TaskSequenceId XXXXXXXX -ImagePackage $PRODWIM -ImagePackageIndex 1 # Change XXXXXXXX to your task sequence ID.

#=+= Upgrade =+=
$PROD_ENUS = (Get-CMOperatingSystemInstaller -Name "_PROD_*$OSVersion*en-us")

Set-CMTSStepUpgradeOperatingSystem -TaskSequenceId XXXXXXXX -UpgradePackage $PROD_ENUS # Change XXXXXXXX to your task sequence ID.
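As a hypothetical refinement, you could resolve the task sequence by name instead of hardcoding its ID; the task sequence name below is an assumption for illustration:

```powershell
# Hypothetical: look the task sequence up by name (name is an assumption)
$TSName = "OSD - Windows 10 $OSVersion"
if (Get-Command -Name Get-CMTaskSequence -ErrorAction SilentlyContinue) {
    $TS = Get-CMTaskSequence -Name $TSName
    Set-CMTSStepApplyOperatingSystem -TaskSequenceId $TS.PackageID `
        -ImagePackage $PRODWIM -ImagePackageIndex 1
}
```

This keeps the script portable between sites, at the cost of depending on consistent task sequence naming.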

And that’s it.

I have uploaded the script to GitHub so you can grab it instead of copying and pasting from above.

Thank you for reading this far.

My name is Scott Graves; I have been doing IT since 1995 and am currently heavy into OSD, task sequences, and upgrades. I have set up a Twitter account (@shotgn22) if you have any questions. Thank you to Adam Gross and Chris Buck for taking the time to answer my questions over the past few days.



How To – In Place Upgrade


Oh, this has become a recent favorite topic of mine. There has been some great logic given to the community by the Wells Fargo dream team of Mike Terrill and Gary Blok on their websites. If you are active on Twitter, follow these four accounts: @AdamGrosstx, @MikeTerrill, @GWBlok, and @SCCMF12TWICE; the four of us are frequently consulted on many things related to OSD, IPU, and pre-caching. Also, make sure you tell Mike to write his WaaS stuff already... that way I can read it and sound really smart when I am talking to customers! Anyway, let's start the topic. I'm currently helping a customer out in my spare time with their Windows 10 IPU scenario. With their short time constraints, what I am working to deliver is a documented rinse-and-repeat solution for their future upgrades... until Autopilot happens! The customer previously had their MSP do the work, but, uh, let's not go there.

This blog post will give a high-level overview of how the entire IPU will occur. In future blog posts I will go into more detail on the specifics and link you to where the original logic came from (if not my own). A bunch of what I am delivering in this customer's environment is done with my friend and super-talented IT architect Chad Arvay, who is legit AF on the Intune/Autopilot/co-management side of things. So look forward to him and me bringing some great content to the community in the near future.

At this customer's site I am running with this approach.


Pre-Ground Work: Identify the current OS level of the environment and the OS we are going to "in-place upgrade" or "migrate" to. In this customer's case we are moving from 1511 to 1703, and then potentially to 1806 after that.

How to Select Potentially Ready Systems: In the customer's environment we are deploying the compat scan package to all 1511 systems as required. The people over at SystemCenterDudes have a great how-to document for creating this package. This will be configured to run from the DP. Users should not experience any problems while the scan is running. Once the systems run the scan and report back data, we can better visualize the success/fail rate of systems to be upgraded in the migration candidate dashboard, which we will show shortly. I have deciphered a few more of the compat scan results than what I see out there in the community and will post the SQL for that in a future post as well.
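For reference, a compat-scan-only package generally wraps a Windows Setup invocation along these lines (flags per the Windows Setup command-line options; confirm against the SystemCenterDudes write-up for the exact package they build):

```
setup.exe /Auto Upgrade /Quiet /NoReboot /DynamicUpdate Disable /Compat ScanOnly
```

The /Compat ScanOnly switch runs the compatibility assessment and exits with a result code instead of performing the upgrade.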

From the SCCM admin's perspective this action takes place from the 1511 collection, and if successful the system will move forward to the compat scan pass collection like below. I have all my collections numbered so you can easily see the order in which things happen and where systems will go. We will get into the collection design/queries in a future blog post so you can make the magic happen too!


  1. The PM/scheduling team will run the dashboard and select systems that are approved for deployment. Some environments will do this based on location/team/availability; just use whatever suits your environment best.
    • Identify systems that are ready/not ready for In Place Upgrade based on criteria provided in the "Windows Migration – Candidates" report.
      • Use criteria provided in Migration Candidate Dashboard
        • Approved model for In Place Upgrade
        • Enough Space in Nomad Cache (15%)
        • Enough space on the system (30GB)
        • Upgrade Readiness compat scan results
        • Client Health
        • HW/WMI returned in 14 days
        • Reboot pending
        • Client Health evaluation

In Place Upgrade


  1. The Deployment Team will receive the list of systems for deployment and add it to the "3 – Migration Tracking List" collection. This collection is what we will use after the TS is finished as part of our "Windows Migration – Candidates" tracking. Anything in this collection will also populate the "3.x – PreFlight Tier 1" collection, where members start to get updated client settings. This is where the client setting forces more frequent check-ins, so policy gets to the systems and I get data quicker! The customer should not notice much change (they can still install apps, etc.). This is also the collection where I have compliance items and run some scripts against the system; it is the groundwork for making sure enough space is cleared out to start the TS. I have 3 compliance items against this collection: they 1. clear the CCM cache of old content, 2. clear out space on the system (temp locations, etc.), and 3. change Nomad polling settings. The script I have deployed here clears the Nomad cache of all content older than 30 days. Please note that when I clear content from the Nomad cache, this removes the same package IDs from the CCM cache, just in case you did not know... now you do!
  2. Systems will also populate the "4 – Begin Nomad Cache Content" collection, as well as the "4.x xxxxx" model-specific collections to get additional driver packages. Once all the content is pre-cached, the system will be automatically added to another collection. In total, the systems require 19 specific package IDs in order to start the TS later in this overview.
    1. NOTE: if a system does not show up here, it is b/c of the "safety" I have built in with the compat pass collection. This means the system either did not run the compat scan or failed it. This should not happen in production if the criteria are followed.
      NOTE: I will go into more detail on the Nomad jobs, the client setting to track Nomad jobs, the compliance item for Nomad polling, the SQL to track that logic, and the collection (this is custom SQL that you don't have a "dropdown" to choose from for collections) in future posts.
  3. Once the systems finish getting all 19 required pieces of content in this environment, then and only then will the system move to a "5.x model specific" collection. These different approved models, fully cached, make up "5 – Win 10 1703 UPGRADE DEPLOYMENT". It is in this collection where the shortcut icon is placed to start the TS, a scheduling tool is called, and the TS is made available with a deadline date. NOTE: I tried to make all models work within one collection, but I just couldn't make the query update efficiently, so that is why it is currently broken out this way.
  4. Once systems are in this collection, the users will have the desktop shortcut icon to do the In Place Upgrade.
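For the curious, the CCM cache portion of that pre-flight cleanup could be sketched like this (a hedged sketch using the standard ConfigMgr client SDK COM object; the 30-day threshold matches the Nomad cleanup described above):

```powershell
# Anything cached before this cutoff is considered stale
$Cutoff = (Get-Date).AddDays(-30)

try {
    # ConfigMgr client SDK COM object; only exists where the client is installed
    $UIResourceMgr = New-Object -ComObject 'UIResource.UIResourceMgr'
    $Cache = $UIResourceMgr.GetCacheInfo()
    $Cache.GetCacheElements() |
        Where-Object { $_.LastReferenceTime -lt $Cutoff } |
        ForEach-Object { $Cache.DeleteCacheElement($_.CacheElementId) }
}
catch {
    # No ConfigMgr client present; nothing to clean
}
```

A compliance item can wrap this as its remediation script, with the detection side simply checking whether any elements are older than the cutoff.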


I can do a future post on how to use shortcut icons to kick off a TS and how to clean that stuff up when I get a minute. I do not necessarily like the shortcut icon, but it is convenient for users to be able to start the TS by clicking on it. Just be aware that users can easily miss the shortcut or accidentally start the TS this way. Also, when systems are in this collection there is another deployment which calls the "schedule tool". This displays on top of every item on the customer's screen, prompting them to perform the upgrade or to schedule it. You can customize your own messages and configurations. This tool allows the user to schedule, down to the second, when they want to upgrade. Customers can also use it to communicate a location where users can read more on the process, or to go ahead and start the In Place Upgrade now. The guys over at OneVinn came up with this tool. It is great because of its "nag factor": it displays the message on a scheduled basis above everything on your desktop. This tool should nag enough people to start the upgrade sooner rather than later!

  5. A few notes about the TS... the TS will first verify there is at least 30 GB of free space available, and if there is not enough the TS will work to clean up more space. The TS will uninstall/upgrade any non-compatible security app and then perform the upgrade. The In Place Upgrade will then install the latest driver package and a few key apps. After this is complete, the TS will clean up space on the system before finishing. This is all done in a high-performance power setting to speed up the process. I had a bunch of things in this TS that I saw Gary do wayyyy more efficiently, so I swapped some of my logic out for Gary's method. There is also the console extension from OneVinn for the collection-move stuff, so make sure you check out their stuff on TechNet too! I won't go much in depth in a future post on how my TS is designed, but here is a quick screenshot for the curious.
    NOTE: if you are curious about how to track OSD via SQL, you can read about it in my blogpost here
  6. Once the system runs the TS, the logic will tattoo the system with more data specific to the TS so it can be more accurately tracked and aid in future migration plans. Data we now get back includes how many times the system attempted IPU, how long the IPU took, etc. The registry location will be HKLM\Software\WaaS\XXXX. I won't provide a screenshot here b/c it is way more limited than the way Gary uses it in his environment (we have different approaches), so just check out his website for this. NOTE: information captured in this registry location will be added to a "Migration Status Dashboard". This is currently being drafted, but once it's done I will update this blog post and possibly release the core SQL code for the community's use. That will require you to extend client settings, so just a heads up on that.
  7. At completion of the TS, systems will be moved to a "Completed" or "Failure b/c" collection depending on exit code. The majority of failures, for now, will be recommended for bare-metal deployment. In most remote cases where all the content is pre-cached, we expect the In Place Upgrade process to take 45 minutes or less. I believe it is best to communicate an estimated 2 hours to the end customer.
  8. When the system finishes the TS, all the data is reported back and will be available in the "Migration Tracking Dashboard". Among other things, this dashboard will have a chart tracking systems upgraded in the last 2 weeks, a pie chart identifying bare metal vs. In Place Upgrade to the latest approved OS, etc. It will also include more logic such as how long each system took on average, how many attempts were made per system, etc. I'll add the final screenshots later when the SQL/SSRS is ready, but for now here is a sneak peek... this is an amazing report; trust me, I know plenty of reports, but this one is AMAZING! Obviously I have redacted stuff from some of these screenshots, but you get the idea.
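The tattoo step above could be sketched roughly like this (the post only gives the HKLM\Software\WaaS location; the XXXX key suffix is kept as a placeholder and the value names below are assumptions for illustration):

```powershell
# Hedged sketch: tattoo IPU metadata under HKLM (requires admin / SYSTEM,
# which a task sequence step has; value names are hypothetical)
$Key = 'HKLM:\SOFTWARE\WaaS\XXXX'
if ($env:OS -eq 'Windows_NT') {
    try {
        New-Item -Path $Key -Force | Out-Null
        New-ItemProperty -Path $Key -Name 'IPUAttempts'  -Value 1 -PropertyType DWord -Force | Out-Null
        New-ItemProperty -Path $Key -Name 'IPUStartTime' -Value (Get-Date -Format 's') -PropertyType String -Force | Out-Null
    }
    catch {
        # Not elevated; in the task sequence this runs as SYSTEM and succeeds
    }
}
```

Hardware inventory can then be extended to collect these values, which is what feeds the dashboard SQL.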

In short the In Place Upgrade Sequence – Pass Scenario will look like this:

In Place Upgrade

I have already started writing the follow-up blog posts on how to build the collections, the SQL logic, the compliance items, etc. So keep checking back for how you can get this going in your environment.