Feature Update Errors

I’ve now consulted in about half a dozen environments where we have leveraged the feature update approach for Windows servicing. I’m seriously considering doing every upgrade going forward this way instead of using highly complex in-place upgrade task sequences. There are some drawbacks for reporting, but we can use PowerShell to capture data and perform many custom actions. These last few months have been very focused on Windows migrations, specifically implementing the feature update approach. I’ve also started to focus more on cyber security, both offensive and defensive. I will blog some of that material when I get approval from the customer.

0xC190010A

Error Not Found

This means one of your commands is misspelled in the setupconfig.ini. Make sure your commands are typed out correctly per the document below.

https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-setup-command-line-options

 

0x80070490

Element Not Found

If you run into this badboy, the problem exists in your setupconfig.ini: either the file is malformed or your header information is messed up, like missing brackets.
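For reference, a well-formed SetupConfig.ini needs the bracketed [SetupConfig] header and one Option=Value pair per line. A minimal sketch (the option values and paths here are just illustrative examples, not required settings):

```ini
[SetupConfig]
Priority=Normal
BitLocker=AlwaysSuspend
Compat=IgnoreWarning
DynamicUpdate=Enable
ReflectDrivers=C:\Drivers
PostOOBE=C:\Scripts\SetupComplete.cmd
```

The option names map to the Windows Setup command-line options in the Microsoft doc linked above, minus the leading slash, and the file is read from %SystemDrive%\Users\Default\AppData\Local\Microsoft\Windows\WSUS\SetupConfig.ini.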

0xC190010D

Error Not Found.

I honestly forgot what I did, b/c it was national tequila day and I got a little wasted…but whatever I did fixed it.

0x80070003

System Cannot Find the file path specified.

Finally, we have an error code that tells you something immediately w/o having to dig into logs. If you run into this, make sure the directories you cite in your setupconfig.ini have been created. I’ve implemented a hidden TS that runs in advance, which creates the directories and stages required content like my driver packages, setupconfig.ini, custom run scripts, setupcomplete.cmd and other PowerShell scripts. One thing I recommend to environments is that they leverage a custom compliance item to verify folders/files are in place to avoid failures. This should also go into your collection design as a safety mechanism to keep the wrong systems from seeing the advertisement. I’ll blog the full end-to-end solution in August.
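A minimal sketch of the discovery-script side of such a compliance item (the folder and file paths below are hypothetical placeholders for whatever your pre-staging task sequence creates):

```powershell
# Hypothetical compliance item discovery script: verify pre-staged
# feature update content exists before a system can see the deployment.
# All paths below are examples -- substitute your own staging locations.
$required = @(
    'C:\ProgramData\FeatureUpdate\Drivers',
    'C:\ProgramData\FeatureUpdate\Scripts\SetupComplete.cmd',
    'C:\Users\Default\AppData\Local\Microsoft\Windows\WSUS\SetupConfig.ini'
)

# Collect anything missing so the CI report tells you exactly what to fix.
$missing = $required | Where-Object { -not (Test-Path -LiteralPath $_) }

if ($missing) {
    "NonCompliant: $($missing -join '; ')"
} else {
    'Compliant'
}
```

Point the configuration item’s compliance rule at the 'Compliant' string, and you can then build the deployment collection around systems reporting compliant.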

0x80244022

Same as HTTP Status 500 – the service is temporarily overloaded.

Ok, so this isn’t completely feature update exclusive. This is patching related, but the problem occurred while I was testing feature updates. Take a look at your SCCM infrastructure; in this case the site system CPU was pegging at 99%. I’ll write a post on how we troubleshot that problem and how to recover so your environment can be stable again.

Key Troubleshooting Locations: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-setup-log-files-and-event-logs

NOTE: For 0xC190010A and 0x80070490 the error occurred so early in the process that the key log file locations in C:\$WINDOWS.~BT\ and C:\Windows\Panther\UnattendGC did not have updated log files covering the info. That might be a takeaway for the feature update product group to investigate.

Policy Evaluation Errors – IIS Connections

In this scenario the customer is trying to image systems from within WinPE. They are able to select a task sequence, but once policy dependencies evaluate, the customer receives the “An error occurred while retrieving policy for this computer (0x80004005)” message. This is a very generic error, and I’ve covered how to fix a dozen or so different errors in my Troubleshooting OSD Task Sequences document, which I’ve been meaning to publish for a few years but have only given to various customer accounts over the years.

 

This error is reported as happening right after the task sequence selection occurs, when the system looks up policy. The task sequence log file is in RAM at this point; you will see it at X:\Windows\Temp\SMSTSLog\smsts.log

 

We can see in the log file that there are problems communicating with the management point while retrieving policy during the lookup. The usual troubleshooting of making sure SCCM is functional, checking boundaries, etc. can be skipped, since we know the specific problem and we have the historical knowledge of the IIS connection limit. This environment just recently had an SCCM outage, and there was a significant backlog of policy requests for the environment to catch up on. At the time of the customer’s attempts to run an OSD task sequence, the IIS connection limit was set to 500 and needed to be raised to a higher number, or to unlimited (depending on how quickly the backlog of policy requests progressed). That will be covered in another blog post, as the situation is still being monitored.

Access your SCCM site server (in this environment the MP is co-located). Open up IIS > navigate to your Default Web Site > Configure Limits
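If you prefer the command line, the same limit can be set with appcmd; a sketch, assuming the default site name “Default Web Site” (run elevated on the site system, and adjust the site name and number for your environment):

```powershell
# Raise the IIS connection limit on the default site (example value).
& "$env:windir\System32\inetsrv\appcmd.exe" set site "Default Web Site" /limits.maxConnections:1000
```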

 

This was originally set much lower, to 500, to let the CPU usage drop to a more stable rate while the backlog of policies processed. If you are trying to recover from high CPU usage, for whatever reason, I recommend limiting the connections and gradually increasing that number.

The customer was then able to start imaging systems once the number was increased. When you are trying to recover SCCM services, make sure you keep an eye on the CPU usage in case you have to throttle that number back down, or, if things stay stable, keep increasing the connections number.

WSUS still runs old code

As a consultant I’ve seen a fair number of environments, and the story is usually the same: most environments are not leveraging ConfigMgr to its fullest capabilities. Today I’m not going to talk about migrating to the cloud or Intune, although some of that will be coming soon, along w/ Win 10 servicing, to include custom actions and some automation scripts that I use to perform some record-keeping tasks. I’m a firm believer in data-driven decisions, and since the reporting in the servicing model is a bit lackluster, I’ll give you the tools to help w/ that. Also, in a few future blog posts I’ll be dropping some info on cyber threat hunting, identifying breaches earlier, incident response, and how to ******** *** ******** ******. That last part was super classified so I won’t talk about that.

Anyways. Back when we started Win 10 to Win 10 migrations to match Microsoft’s Win 10 upgrade cadence about five years ago, there were the in-place upgrade task sequence and the servicing model. Many customers chose the task sequence route because there was a bit of familiarity to it, but also b/c of the amount of customization work that had to be done for their environments. The servicing model has its perks, but I would say it’s not as robust in comparison to the task sequence. These days (1709 forward) I’m trying to move away from task sequences and more into the servicing model with these dynamic updates. If you aren’t aware, dynamic updates CAN ONLY BE PULLED FROM THE INTERNET. This will likely be several hundred MB of content, and with the servicing model this action is invisible to the end user; with the IPU process the end user is completely impacted during that time b/c of a big task sequence box on the screen. Since this can only be obtained from the internet, you can’t exactly service your IPU package w/ this content.

If you are in an environment where you do not have servicing enabled, you need to handle some of the pre-reqs to allow for this. There are two links here to review: SystemCenterDudes and Prajwal Desai.

 

So we enable the upgrades on the SUP, the pre-work is done, and we select sync.

…but what if nothing is populated, and it looks like this?

Empty Servicing Node

Well, I’ve seen this frequently in the last 10 years, in at least half a dozen environments. Basically, when you check your wsyncmgr.log, everything starts out fine. The normal process: the notification file is found, the upstream server is found, categories are synced, and then updates start syncing. If an update is already synced it will be skipped, etc.

All is good until we get to the feature upgrades that we just enabled. We start seeing “The Microsoft Software License Terms have not been completely downloaded and cannot be accepted.” and “Too many consecutive failures. Aborting Sync.”

You can also check your event viewer for some error code information.

I’ve seen this issue before, actually several times, in many environments. Ultimately the fix ends up being: remove all classifications from the SUP > WSUSUTIL RESET > run sync > add classifications back > run sync > monitor logs.
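For reference, the reset itself runs from the WSUS tools directory on the WSUS/SUP server; a sketch assuming the default install path (adjust if yours differs):

```powershell
# Run from an elevated prompt on the WSUS server. The reset tells WSUS to
# re-verify update metadata and content, so expect it to take a while; watch
# "$env:ProgramFiles\Update Services\LogFiles\SoftwareDistribution.log" for progress.
& 'C:\Program Files\Update Services\Tools\wsusutil.exe' reset
```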

Check out this link for how to run a WSUS reset.

Once we run the reset and the sync completes we are now able to see the servicing node populated.

 

Ultimately I wrote this blog to cite myself, to say I’ve seen this problem many times, for many years, and it is still not addressed. When you are dealing with timelines, this problem adds several hours, or even days, to the project b/c of having to perform the reset and then download content and metadata again. But let me tell you a funny joke: why did the chicken cross the road? I don’t know, but WSUS runs on old code, like 2003 code from before I was even in high school.

 

NOTE: I hope David James sees this, and he takes over the WSUS stuff, and makes it better, like everything else he makes better.