How To – In-Place Upgrade


Oh, this has become a recent favorite topic of mine. There has been some great logic shared with the community from the Wells Fargo dream team of Mike Terrill and Gary Blok on their websites. If you are active on Twitter, follow these 4 accounts: @AdamGrosstx, @MikeTerrill, @GWBlok, and @SCCMF12TWICE. The 4 of us are frequently consulted on many things related to OSD, IPU, and pre-caching. Also, make sure you tell Mike to write his WaaS stuff already… that way I can read it and sound really smart when I am talking to customers! Anyway, let’s start the topic. I’m currently helping a customer out in my spare time with their Win 10 IPU scenario. What I am working to deliver, given the short time constraints, is a documented rinse-and-repeat solution for their future upgrades… until Autopilot happens! The customer previously had their MSP do the work, but uh, let’s not go there.

This blogpost will give a high-level overview of how the entire IPU will occur. In future blogposts I will go into more detail on the specifics and link you to where the original logic came from (if it is not my own). A bunch of the things I am working on delivering in this customer’s environment is being done with my friend and super talented IT Architect Chad Arvay, who is legit AF on the Intune/Autopilot/co-management side of things. So look forward to him and me bringing some great content to the community in the near future.

At this customer’s site I am running with this approach.

 

Pre-Ground Work: Identify the current OS level of the environment and the OS we are going to “Upgrade” or “Migrate” to. In this customer’s case we are moving from 1511 to 1703, and then potentially on to 1806 after that.
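If you need a quick way to double-check what release a given machine is actually on, the OS build number maps directly to the release version. Here is a minimal PowerShell sketch for that (purely illustrative, not part of the customer solution; the lookup table only covers the Windows 10 builds I am sure of):

# Map the local Windows 10 build number to its release version - illustrative sketch
$buildToVersion = @{
    '10240' = '1507'
    '10586' = '1511'
    '14393' = '1607'
    '15063' = '1703'
    '16299' = '1709'
    '17134' = '1803'
}
$build = (Get-CimInstance -ClassName Win32_OperatingSystem).BuildNumber
if ($buildToVersion.ContainsKey($build)) {
    "Build $build = Windows 10 $($buildToVersion[$build])"
}
else {
    "Build $build is not in the lookup table"
}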

How to select potentially ready systems: In the customer’s environment we are deploying the compat scan package against all 1511 systems as required. The people over at SystemCenterDudes have a great how-to document for creating this package. This will be configured to run from the DP. Users should not experience any problems while this scan is running. Once the systems run the scan and report data back, we can better visualize the success/fail rate of systems to be upgraded in the migration candidate dashboard, which we will show shortly. I have deciphered a few more of the compat scan results than what I see out there in the community, and I will post the SQL for that in a future post as well.
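For context, the compat scan is really just Windows Setup running with /Compat ScanOnly, and the return code is what tells you whether the box would upgrade cleanly. Here is a minimal sketch of decoding the most commonly documented return codes (treat the table as a starting point; verify the codes against Microsoft’s documentation and your own results, and note the example exit code is just a placeholder):

# Decode a compat scan (setup.exe /Compat ScanOnly) return code - illustrative sketch
$compatCodes = @{
    'C1900210' = 'No compatibility issues found'
    'C1900208' = 'Compatibility issues found (hard block)'
    'C1900204' = 'Migration choice not available (check edition/SKU)'
    'C1900200' = 'System does not meet upgrade requirements'
    'C190020E' = 'Not enough free disk space'
}
# Exit codes come back as signed Int32 values, so normalize to hex before the lookup
$exitCode = 0xC1900210          # placeholder - replace with the captured return code
$hex = '{0:X8}' -f $exitCode
if ($compatCodes.ContainsKey($hex)) { "$hex : $($compatCodes[$hex])" }
else { "$hex : unknown - check setupact.log / setuperr.log" }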

From the SCCM admin’s perspective this action takes place from the 1511 collection, and if successful the system will move forward to the compat scan pass collection like below. I have all my collections numbered so you can easily see what order things happen in and where systems will go. We will get into the collection design/queries in a future blogpost so you can make the magic happen too!

SCHEDULING SYSTEMS  -> DEPLOYING SYSTEMS

  1. The PM/Scheduling team will run the dashboard and select systems that are approved for deployment. Some environments will do this based on location/team/availability; just use whatever suits your environment best.
    • Identify systems that are ready/not ready for upgrade based on the criteria provided in the “Windows Migration – Candidates” report (a minimal client-side sketch of a couple of these checks follows this list).
      • Use criteria provided in Migration Candidate Dashboard
        • Approved model for upgrade
        • Enough Space in Nomad Cache (15%)
        • Enough space on the system (30GB)
        • Upgrade Readiness compat scan results
        • Client Health
        • HW/WMI returned in 14 days
        • Reboot pending
        • Client Health evaluation
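For the disk space and pending reboot criteria specifically, here is a minimal client-side sketch of how you could spot-check a single machine by hand (the 30 GB threshold just mirrors the criteria above; the dashboard pulls all of this from inventory, so treat it purely as an illustration):

# Spot-check two of the readiness criteria on one machine - illustrative only
$minFreeGB = 30

# Free space on the system drive
$disk = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DeviceID='$env:SystemDrive'"
$freeGB = [math]::Round($disk.FreeSpace / 1GB, 1)
"Free space on $env:SystemDrive = $freeGB GB (need $minFreeGB GB)"

# Pending reboot, as reported by the ConfigMgr client SDK
$reboot = Invoke-CimMethod -Namespace 'root\ccm\ClientSDK' -ClassName 'CCM_ClientUtilities' -MethodName 'DetermineIfRebootPending'
"Reboot pending = $($reboot.RebootPending)"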

 

  2. The Deployment Team will receive the list of systems for deployment and add them to the “3 – Migration Tracking List” collection. This collection is what we will use after the TS is finished as part of our “Windows Migration – Candidates” report. Anything in this collection will also populate the “3.x – PreFlight Tier 1” collection. This is the collection where the members will start to get updated client settings. This is where the client setting forces check-ins to be more frequent, so policy gets to the systems and I get data quicker! The customer should not notice much change (they can still install apps, etc.). This is also the collection where I have compliance items and some scripts run against the system. This is the groundwork for making sure enough space is cleared out on the system to start the TS. I have 3 compliance items against this collection. They will 1) clear the CCM cache of old content, 2) clear out space on the system (temp locations, etc.), and 3) change Nomad polling settings. The script I have deployed here will clear the Nomad cache of all content older than 30 days (a minimal sketch of the CCM cache-clearing idea is shown a little further down). Please note that when I clear content from the Nomad cache this will remove the same package IDs from the CCM cache, just in case you did not know… now you do!
  3. Systems will also populate the “4 – Begin Nomad Cache Content” collection, as well as the “4.x xxxxx” model-specific collections to get additional driver packages. Once all the content is precached the system will be automatically added to another collection. In total the systems will require 19 specific package IDs in order to start the TS later in this overview.
    NOTE: if a system does not show up here it is b/c of the “safety” I have built in with the compat scan pass collection. This means that the system either did not run the compat scan, or it failed it. This should not happen in production if the criteria above are followed.
    NOTE: in a future post I will go into more detail on the Nomad jobs, the client setting to track Nomad jobs, the compliance item for Nomad polling, the SQL to track that logic, and the collection queries (these use custom SQL that you don’t have a “dropdown” to choose from when building the collections).
  4. Once a system finishes getting all 19 required pieces of content in this environment, then and only then will it move to a “5.x model specific” collection. These model-specific collections of approved, fully cached systems make up “5 – Win 10 1703 UPGRADE DEPLOYMENT”. It is in this collection where the shortcut icon is placed to start the TS, the scheduling tool is called, and the TS is made available with a deadline date. NOTE: I tried to make all models work within 1 collection, but I just couldn’t make the query update efficiently, so that is why this is currently broken out this way.
  5. Once systems are in this collection, users will have the desktop shortcut icon to do the upgrade.
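Since people always ask how the “clear the CCM cache of old content” piece can be scripted, here is a minimal sketch of one way to remove cached items older than 30 days using the client’s UIResource COM object. This is illustrative only; it is not the exact compliance item script from this environment, and it does not touch the Nomad cache itself:

# Remove ConfigMgr client cache items older than 30 days - illustrative sketch
$cutoff = (Get-Date).AddDays(-30)

$ui    = New-Object -ComObject 'UIResource.UIResourceMgr'
$cache = $ui.GetCacheInfo()

foreach ($element in $cache.GetCacheElements()) {
    if ($element.LastReferenceTime -lt $cutoff) {
        # Deleting by CacheElementId removes both the content folder and the cache record
        $cache.DeleteCacheElement($element.CacheElementId)
    }
}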

I can do a future post on how to use shortcut icons to kick off a TS and how to clean that stuff up when I get a minute. I do not necessarily like the shortcut icon, but it is convenient for users to be able to start the TS by clicking on it. Just be aware that users can easily miss the shortcut, or accidentally start the TS this way. ALSO, when systems are in this collection there is another deployment which will call the “schedule tool”. This will display on top of every item on the customer’s screen to perform the upgrade, or to schedule the upgrade. You can customize your own messages and configurations. This tool allows the user to schedule, down to the second, when they want to upgrade. Customers can also use this to communicate a location where users can read more on the process, or go ahead and start the upgrade now. The guys over at OneVinn came up with this tool here. This tool is great because of the “nag factor” that displays the message on a scheduled basis above everything on your desktop. This tool should nag enough people to start the upgrade sooner rather than later!
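For the curious, one common way to let a desktop shortcut kick off a task sequence is to point the shortcut at a small script that launches the deployment through the client’s UIResource COM object (for a task sequence, the “program” name is “*”). The sketch below is illustrative only; the package ID, script path, and shortcut name are placeholders, not the exact pieces used at this customer:

# Start-Upgrade.ps1 - launch the available upgrade TS - illustrative sketch
$tsPackageId = 'PS100123'        # placeholder - use your upgrade TS package ID
$ui = New-Object -ComObject 'UIResource.UIResourceMgr'
$ui.ExecuteProgram('*', $tsPackageId, $true)   # "*" is the program name for a task sequence

# Separately, drop a desktop shortcut that points at the script above (paths are placeholders)
$shell    = New-Object -ComObject 'WScript.Shell'
$shortcut = $shell.CreateShortcut("$env:PUBLIC\Desktop\Upgrade to Windows 10 1703.lnk")
$shortcut.TargetPath = 'powershell.exe'
$shortcut.Arguments  = '-ExecutionPolicy Bypass -File "C:\ProgramData\IPU\Start-Upgrade.ps1"'
$shortcut.Save()

Keep in mind the TS deployment still has to be targeted at the machine for this to do anything.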

  6. A few notes about the TS… the TS will first verify there is at least 30 GB of free space available, and if there is not enough space the TS will work to clean up more. The TS will uninstall/upgrade any non-compatible security app and then perform the upgrade. It will then install the latest driver package and a few key apps. After this is complete the TS will clean up space on the system before finishing. This is all done with the high performance power setting to speed up the process. There were a bunch of things I was doing in this TS that I saw Gary do wayyyy more efficiently, so I swapped some of my logic out for Gary’s method. There is also the console extension from OneVinn for the collection move stuff, so make sure you check out their stuff on TechNet too! I won’t go much deeper into how my TS is designed until a future post, but here is a quick screenshot for the curious.

    NOTE: if you are curious about how to track OSD via SQL, you can read about it in my blogpost here.
  7. Once a system runs the TS, the logic will tattoo the system with more data specific to the TS so it can be more accurately tracked and aid in future migration plans. Data we now get back includes how many times the system attempted the IPU, how long the IPU took, etc. That registry location will be HKLM\Software\WaaS\XXXX (a minimal sketch of the tattoo idea follows this list). I won’t provide a screenshot here b/c mine is way more limited than the way Gary uses it in his environment, b/c we have different approaches, so just check out his website for this. NOTE: information that is captured within this registry location will be added to a “Migration Status Dashboard”. This is currently being drafted, but once it’s done I will update this blogpost and possibly release the core SQL code for the community’s use. This will require you to extend client settings, so just a heads up on that.
  8. At completion of the TS, systems will be moved to a “Completed” or “Failure b/c” collection depending on the exit code. The majority of failures, for now, will be recommended for bare metal deployment. In most cases where all the content is pre-cached, even at remote sites, we expect the upgrade process to take 45 minutes or less. I believe it is best to communicate an estimated 2 hours for the TS to complete to the end customer.
  9. When the system finishes the TS, all the data is reported back and will be available in the “Migration Tracking Dashboard”. Among other things this dashboard will have a chart tracking systems upgraded in the last 2 weeks, a pie chart identifying systems that need bare metal vs. an upgrade to the latest approved OS, etc. Some things I will have on the dashboard look like this, but it will include more logic such as how long each system took on average, how many attempts were made per system, etc. I’ll add the final screenshots later when the SQL/SSRS is ready, but for now here is a sneak peek… this is an amazing report; trust me, I know plenty of reports, but this one is AMAZING! Obviously I have redacted stuff from some of these screenshots, but you get the idea.
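To give a rough idea of the tattooing step mentioned above (this is a stripped-down sketch, not Gary’s version and not the exact logic in this TS; the subkey and value names are placeholders), bumping an attempt counter and stamping a start time from inside the task sequence could look like this:

# Tattoo basic IPU tracking data to the registry - stripped-down illustrative sketch
# The subkey and value names are placeholders under the HKLM\Software\WaaS location mentioned above
$keyPath = 'HKLM:\SOFTWARE\WaaS\1703'

if (-not (Test-Path $keyPath)) {
    New-Item -Path $keyPath -Force | Out-Null
}

# Bump the attempt counter each time the TS reaches this step
$attempts = (Get-ItemProperty -Path $keyPath -ErrorAction SilentlyContinue).UpgradeAttempts
if (-not $attempts) { $attempts = 0 }
New-ItemProperty -Path $keyPath -Name 'UpgradeAttempts' -Value ($attempts + 1) -PropertyType DWord -Force | Out-Null

# Stamp when this attempt started; a later TS step can stamp the end time and compute the duration
New-ItemProperty -Path $keyPath -Name 'LastAttemptStart' -Value (Get-Date -Format 'yyyy-MM-dd HH:mm:ss') -PropertyType String -Force | Out-Null

A mirror-image step at the end of the TS writing the finish time and exit code is the kind of data that can then be inventoried and surfaced in the dashboard.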

In short, the Upgrade Sequence – Pass Scenario will look like this:

I have already started writing the follow-up blogposts on how to do the collections, SQL logic, compliance items, etc. So keep checking back for how you can get this going in your environment.

SQL Query to track OSD


Typically whenever I enter a new environment I get a wave of questions focused on infrastructure, patching, software, and OSD. More specifically, the OSD questions are how to track the TS, how to make the TS better, and how to troubleshoot the TS in the event of a failure. In this blog post I will let you know what I use from the community, as well as the SQL logic that I prefer to use to track the OSD process. I have already written a piece that helps you “Reduce OSD Times“, but you should also check out the blog my buddy Gary runs at GaryTown, as I have been incorporating more and more of his logic into my TS. …Shoot, I even changed the way I do my wallpaper upgrade screens/lockouts b/c his way is more efficient lol.

 

Most of my customers typically have high turnover on the engineering staff, so nobody onsite really knows SCCM in much depth, or it’s some guy who just inherited the infrastructure and needs a hand to make things run smoothly again. These are the community tools that I like to use to help me educate the customer specifically on tracking the OSD task sequence.

 

Status Message Queries by Nickolaj Andersen – Especially helpful if you want to filter by all “Errors”; this has helped me quickly identify serious problems in OSD task sequences. This is something I implement on DAY 1 at customer sites.

OSD Dashboard by Thomas Larsen – Super helpful when trying to calculate how long a customer’s OSD process takes and where we can make improvements. I try to get my Tier 1 support to run this report to help them better understand the OSD process and to aid in their troubleshooting. This has some awesome visuals b/c management always loves visuals… and this is something else that I implement on DAY 1 at customer sites.

Task Sequence Monitor by Trevor Jones – This is just an awesome tool that tracks status messages from your specified OSD Task sequence…the best part is it is free.

 

I think all of the tools I listed above are great, and I stand behind them. In my day-to-day, when I am designing a TS or tracking 1 specific computer, I typically prefer to just have SQL open and track directly that way… for me it’s just quicker, and I’m way more comfortable with SQL today than I was 3 years ago. At any site I go to you can always find me with my head buried in the ConfigMgr console and SQL… usually saying whyyyyyy me lol jk. Take the query below and modify it for your specific needs. I typically need to look at specific systems, groups, and steps in the TS, so I keep those ready to go in my WHERE statements… just commented out.

 

When you copy and use this SQL query, make sure you are connected to your ConfigMgr DB and specify your TS name and the name of the specific machine you are looking up.

 

Select
       -- vTSP.Name,
       Convert(VarChar(21), vTS.ExecutionTime, 100) AS [Execution Time],
       -- vTS.ExecutionTime,
       VRS.Name0,
       vTS.Step,
       vTS.GroupName,
       vTS.ActionName,
       vTS.LastStatusMsgName,
       vTS.ExitCode,
       vTS.ActionOutput
From
       v_TaskExecutionStatus vTS
       LEFT JOIN v_R_System VRS ON vTS.ResourceID = VRS.ResourceID
       LEFT JOIN v_Advertisement vADV ON vTS.AdvertisementID = vADV.AdvertisementID
       LEFT JOIN v_TaskSequencePackage vTSP ON vADV.PackageID = vTSP.PackageID
WHERE
       vTSP.Name = 'CBuck_SCCMF12TWICE_NiceAF_TSv2' -- Task Sequence Name
       AND DATEDIFF(dd, vTS.ExecutionTime, GetDate()) < 3 -- how far back to get data
       --AND vTS.GroupName = ''
       --AND vTS.ActionName IN ('','')
       --AND vTS.LastStatusMsgName NOT LIKE '%Ignored%'
       --AND vTS.LastStatusMsgName LIKE '%Failed%'
       --AND vTS.LastStatusMsgName != 'NULL'
       AND VRS.Name0 = '' -- machine name you are looking up
       --AND vTS.ExitCode = ''
ORDER BY vTS.Step

 

In a future blog post I will finally release my “14 pg OSD Troubleshooting Guide” that was created with Khalid Al alul and Ricky Richard, who are 2 other well-known SCCM guys in the Pub-Sec space…

NOTE: we can make this SQL fancier, but for the purposes of this blog post… let’s not.

Quick and Dirty SCCM Application Deployment Reporting


For this blog, I’m going to go over a very basic script I wrote to quickly get application deployment statuses without having to use the Configuration Manager GUI console. I find the Reporting\Deployments tab very slow to respond, and you still have to dig into each individual app if you want details. What this function does is let you specify what you think is the name of the application; it will find anything like it, allow you to select the one you want, and then run a deployment summarization and output the results.

Examples: 

In this example, “Tanium Client” is the actual name of a software package, so it doesn’t need to pop up a prompt asking us which one we actually wanted.

 

In the above two images, I didn’t remember the full name of my Tripwire deployments, so I just typed “Tripwire” and let it look for it for me.
The Code: 
This is a fairly simple piece of code, but I wrote it to run quickly. It first checks to see if the name you gave it when running already matches a software package. If not, it runs a WMI query (much faster than using Get-CMApplication with a Where-Object piped on) to look for anything like what you entered. It puts all results into an array, from which it then selects the LocalizedDisplayName and filters out duplicates. Those unique values are then handed over to Out-GridView, which, thanks to the -PassThru flag, assigns the selected result to our original $SoftwareName variable. You’ll also see that I have to specify that I want the “.LocalizedDisplayName” NoteProperty. If you don’t do this, you’ll see the selected software name as “@{LocalizedDisplayName=’Whatever your name was’}”.
After that selection process, everything proceeds as normal. It gets the CMDeployment object for that application and runs the Invoke-CMDeploymentSummarization cmdlet, which refreshes the deployment status, gets you current numbers, etc. After that, I have it get the deployment again and select what I found to be the relevant values for a list-formatted report. Because SummarizationTime is reported in UTC, I have it convert to client local time for ease of reporting, and I also go ahead and do a quick math expression for the percent success, which uses the Round function so we don’t get a dozen decimal places of needless precision. The Query Time value was mostly tacked on at the end so the people I was getting this information for could see that the data was current.
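If you want to roll your own version, here is a minimal sketch along the lines described above. To be clear about what is assumed: the function name, the $SiteCode and $SiteServer values, and the exact property list are placeholders of mine rather than the original script, and the ConfigMgr PowerShell module needs to be loaded with your site drive as the current location:

# Minimal sketch of the deployment status idea described above - placeholders, not the original script
function Get-AppDeploymentStatus {
    param(
        [Parameter(Mandatory = $true)]
        [string]$SoftwareName,
        [string]$SiteCode   = 'PS1',     # placeholder site code
        [string]$SiteServer = 'CM01'     # placeholder site server
    )

    # If the name is not an exact match, search WMI for anything like it and let the user pick
    if (-not (Get-CMApplication -Name $SoftwareName -Fast)) {
        $candidates = Get-WmiObject -ComputerName $SiteServer -Namespace "root\SMS\site_$SiteCode" -Query "SELECT LocalizedDisplayName FROM SMS_Application WHERE LocalizedDisplayName LIKE '%$SoftwareName%'"
        $SoftwareName = $candidates |
            Select-Object -ExpandProperty LocalizedDisplayName -Unique |
            Out-GridView -Title 'Select the application' -PassThru
        if (-not $SoftwareName) { return }
    }

    # Refresh the summarization so the numbers are current, then pull the deployment(s) again
    foreach ($d in (Get-CMDeployment -SoftwareName $SoftwareName)) {
        Invoke-CMDeploymentSummarization -DeploymentId $d.DeploymentID
    }

    Get-CMDeployment -SoftwareName $SoftwareName | ForEach-Object {
        [pscustomobject]@{
            Application    = $_.SoftwareName
            Collection     = $_.CollectionName
            Targeted       = $_.NumberTargeted
            Success        = $_.NumberSuccess
            InProgress     = $_.NumberInProgress
            Errors         = $_.NumberErrors
            Unknown        = $_.NumberUnknown
            PercentSuccess = if ($_.NumberTargeted) { [math]::Round(($_.NumberSuccess / $_.NumberTargeted) * 100, 2) } else { 0 }
            Summarized     = $_.SummarizationTime.ToLocalTime()   # SummarizationTime comes back in UTC
            QueryTime      = Get-Date
        }
    } | Format-List
}

Calling it is then just Get-AppDeploymentStatus -SoftwareName 'Tripwire', matching the Tripwire example above.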
Hopefully this is useful for someone out there. Thanks for reading!