Microsoft products are showing a clear theme these days, and the System Center Suite is no exception: hybrid scenarios and cloud augmentation. And while there are those who see this as just a way to drive Azure adoption, it doesn't need to be a bad thing. In this article, I'm going to cover how to take advantage of Azure Update Management using the new extension in System Center Virtual Machine Manager 2019.
As some of you would have seen, I spent some time last week getting familiar with Linux Containers on Windows Server 2019, and I thought I would share what I did to get it all up and running.

Prerequisites

To get started, you'll need to have the following in place (a rough sketch of the relevant PowerShell follows the list):

- A Windows Server 2019 VM or bare metal host
- Nested Virtualization enabled (VM only)
- MAC Address Spoofing enabled (VM only)
- The Hyper-V and Containers Windows features enabled
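Here's a minimal sketch of those settings in PowerShell. The VM name "LCOW-Host" is just a placeholder, and the final command runs inside the Windows Server 2019 guest (or directly on a bare metal host), so adjust for your environment.

```powershell
# On the Hyper-V host: expose virtualization extensions and allow MAC address
# spoofing for the guest VM ("LCOW-Host" is a placeholder name)
Stop-VM -Name 'LCOW-Host'
Set-VMProcessor -VMName 'LCOW-Host' -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName 'LCOW-Host' | Set-VMNetworkAdapter -MacAddressSpoofing On
Start-VM -Name 'LCOW-Host'

# Inside the Windows Server 2019 guest (or on the bare metal host):
Install-WindowsFeature -Name Hyper-V, Containers -IncludeManagementTools -Restart
```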
One of the handy functions built into PowerShell is the ability to preview what would happen if you run a command. This could be as simple as wanting to make sure that your Remove-Item actually deletes the right files, or that Set-ADUser changes the right attribute. Hand in hand with -WhatIf goes -Confirm, which will prompt you on high risk actions and ask whether you really want to perform them, like deleting an AD user account.
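As a quick illustration, both switches simply bolt onto existing commands; the path and account below are placeholders, and the AD cmdlets assume the ActiveDirectory module is available.

```powershell
# Preview the deletion without touching anything on disk
Remove-Item -Path 'C:\Temp\*.log' -WhatIf

# Prompt for confirmation before the attribute change is actually made
Set-ADUser -Identity 'jsmith' -Description 'Contractor - offboarding' -Confirm

# Prompt before the high risk action of deleting the account entirely
Remove-ADUser -Identity 'jsmith' -Confirm
```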
I'd like to start with a shout out to Philip Elder, as he came up with the initial idea and script that I've used here. One thing that's not always obvious when dealing with S2D clusters is how much of your Storage Pool has been provisioned and how much capacity, if any, is left. To help with this, we came up with the script you'll see at the bottom of this article.
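The full script is at the bottom of the original article; as a rough sketch of the idea, the storage pool object already exposes its total and allocated sizes, so a quick (resiliency-unaware) check looks something like this:

```powershell
# Rough sketch: raw pool capacity vs. what has already been allocated to virtual disks.
# Assumes the default S2D pool naming of "S2D on <cluster name>"; note these figures
# are raw bytes, before accounting for mirror/parity overhead.
$pool = Get-StoragePool -IsPrimordial $false -FriendlyName 'S2D*'
[pscustomobject]@{
    PoolSizeTB  = [math]::Round($pool.Size / 1TB, 2)
    AllocatedTB = [math]::Round($pool.AllocatedSize / 1TB, 2)
    RemainingTB = [math]::Round(($pool.Size - $pool.AllocatedSize) / 1TB, 2)
}
```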
If you've ever dealt with a SAN or storage guy before, you'll know that they usually have a huge passion for cache stats. This is because the secret sauce of accelerating cheap storage for years has been to stick a small amount of expensive but super fast flash in front of your slower spinning disk, or in recent years, your cheaper low endurance SSDs. Because of this, it was always a good idea to keep an eye on how your cache was going, making sure things like cache miss rates were low, and that your write cache wasn't overallocated.
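On an S2D node you can get the same sort of visibility from performance counters. The exact counter set and path names vary by build (the hybrid disks path below is an assumption on my part), so list what's actually registered before sampling anything:

```powershell
# Discover the cluster storage counter sets present on this node
Get-Counter -ListSet 'Cluster Storage*' | Select-Object -ExpandProperty CounterSetName

# Sample a cache counter once the exact path is confirmed from the list above
# (assumed path; adjust to whatever your build exposes)
Get-Counter -Counter '\Cluster Storage Hybrid Disks(*)\Cache Hit Reads/sec' -SampleInterval 5 -MaxSamples 6
```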
If you're running a Storage Spaces Direct (S2D) cluster, you might have noticed some instability in recent months, specifically when it comes to patching and performing maintenance. Well, you're in luck, because 5 days ago Microsoft released a new KB article that helps explain why you might have seen issues. The scenario targeted by the Microsoft article is S2D clusters running the May (KB4103723) or later patch levels, where you experience Event ID 5120 during patching or maintenance, leading to things like CSV timeouts, VM pauses, or even VM crashes.
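If you want to confirm whether your cluster has been hitting this, the 5120 events land in the System log on each node, so something along these lines (run per node, or wrapped in Invoke-Command) will surface recent occurrences:

```powershell
# Look for recent Event ID 5120 entries (a Cluster Shared Volume entering a paused state)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 5120 } -MaxEvents 20 -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Message
```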
If you've been anywhere near Twitter or any tech blogs and news sites recently, you would have noticed that Microsoft have dropped their first cut of the next Long-Term Servicing Channel (LTSC) OS, Windows Server 2019, into the Windows Insider ring for people like you and me to start testing. Now, most people (like me) don't have a huge amount of spare hardware sitting around for times like this, especially for testing things like Storage Spaces Direct (S2D).
Hi all,

Quite often the best information on new technology is actually found in blog posts rather than official documentation, and while the documentation for Storage Spaces Direct from Microsoft is great, some of the real gems are in the pre-GA blogs they put up. So below, I hope to keep an up-to-date list of essential blog posts from both Microsoft and independent bloggers, for those of you who wish to really understand what's happening under the hood!
UPDATE (2017-09-19): Microsoft have officially recognized the bug and have a KB describing the symptoms and workaround, much like the below. See here: https://support.microsoft.com/en-us/help/4043361/disks-in-maintenance-mode-status-after-september-cumulative-update-kb

I was patching our dev cluster the other day and came across a new issue when applying the latest September Cumulative Update (KB4038782), and it seems others on the internet have hit this issue as well.

Background

First, a bit of background on the expected behaviour when performing maintenance:
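The short version is that any disks put into storage maintenance mode while a node is being worked on should come back out of it once the node resumes; the KB linked above covers the case where they instead stay stuck reporting "In Maintenance Mode". As a rough, hedged sketch (verify the exact steps against the KB before running anything), checking for and clearing that state looks something like this:

```powershell
# Show any physical disks on this node still flagged as being in maintenance mode
Get-PhysicalDisk |
    Where-Object OperationalStatus -eq 'In Maintenance Mode' |
    Select-Object FriendlyName, SerialNumber, OperationalStatus

# If the node itself is back in service but disks remain stuck, take them out explicitly
Get-StorageFaultDomain -Type PhysicalDisk |
    Where-Object OperationalStatus -eq 'In Maintenance Mode' |
    Disable-StorageMaintenanceMode
```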
I've been deploying a few Storage Spaces Direct (S2D) clusters lately, and I noticed a slight misconfiguration that can occur on deployment. Normally when deploying S2D, the disk types in the nodes are detected and the fastest disks (usually NVMe or SSD) are assigned to the cache, while the next fastest are used for the Performance Tier and the slowest for the Capacity Tier. So if you have NVMe, SSD and HDD, you end up with an NVMe cache, an SSD Performance Tier and an HDD Capacity Tier.
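A quick way to sanity check what the deployment actually did is to look at how each disk's Usage was set (cache devices show up as Journal) and at the tier definitions the cluster created; roughly:

```powershell
# How did S2D classify each disk? Cache devices report a Usage of 'Journal'.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Select-Object FriendlyName, MediaType, Usage,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } }

# And the tier definitions created when S2D was enabled
Get-StorageTier | Select-Object FriendlyName, MediaType, ResiliencySettingName
```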