Windows Server

Exploring Windows Server 2025: What’s New and What’s Changed - Part 1

Last month, Microsoft unveiled Windows Server Insider build 26040, the inaugural preview branded as Windows Server 2025. As seasoned Windows Server enthusiasts, we’re eager to delve into the enhancements and evolutions this release brings. In this Part 1, we’ll deploy the fresh Windows Server build, compare the available Windows Features and Roles, and scrutinize any changes to the in-box PowerShell modules. Buckle up for an insightful journey through the latest iteration of Windows Server!
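One straightforward way to do that comparison is to export the feature and module lists from each build and diff them with Compare-Object. A minimal sketch, assuming you run the exports on both builds and copy the CSVs to one machine (the file paths and names are placeholders):

```powershell
# Run on each build (old and new); the CSV names are placeholders.
Get-WindowsFeature |
    Select-Object Name, DisplayName, InstallState |
    Export-Csv -Path C:\Temp\Features-26040.csv -NoTypeInformation

Get-Module -ListAvailable |
    Select-Object Name, Version |
    Export-Csv -Path C:\Temp\Modules-26040.csv -NoTypeInformation

# With both exports on one machine, diff the feature lists.
# '=>' marks names only in the new build, '<=' those removed from it.
$old = Import-Csv C:\Temp\Features-WS2022.csv
$new = Import-Csv C:\Temp\Features-26040.csv
Compare-Object $old $new -Property Name | Sort-Object Name
```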

Forecasting Azure Stack HCI Cache Wear

So you’ve set up an Azure Stack HCI cluster and everything’s running great, but there’s a nagging feeling in the back of your mind. It’s a hybrid setup, with some type of flash cache sitting in front of spinning disk, and you start to wonder how hard you’re pushing that cache, and how long it will last. Thankfully, Windows Server 2019 has built-in tools and commands to help you work out exactly that!
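The core of it is a couple of Storage module cmdlets. A minimal sketch, assuming a hybrid S2D/Azure Stack HCI setup where the cache drives report Usage ‘Journal’:

```powershell
# List the cache drives and their flash wear counters. On S2D and
# Azure Stack HCI, cache devices are the physical disks with Usage 'Journal'.
Get-PhysicalDisk | Where-Object Usage -eq 'Journal' | ForEach-Object {
    $counters = $_ | Get-StorageReliabilityCounter
    [pscustomobject]@{
        FriendlyName = $_.FriendlyName
        SerialNumber = $_.SerialNumber
        WearPercent  = $counters.Wear   # cumulative wear, 0-100
    }
}
```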

You need to change the way you patch S2D Clusters

If you’re running a Storage Spaces Direct (S2D) Cluster, you might have noticed some instability in recent months, specifically when patching and performing maintenance. Well, you’re in luck, because 5 days ago Microsoft released a new KB article that helps explain why you might have seen issues. The scenario targeted by the article is S2D Clusters running the May (KB4103723) or later patch level, where you experience Event ID 5120 during patching or maintenance, leading to CSV timeouts, VM pauses, or even VM crashes.
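The change boils down to putting a node’s disks into storage maintenance mode before you take it down, in line with Microsoft’s documented S2D maintenance procedure. A hedged sketch of that sequence (‘Node01’ is a placeholder node name):

```powershell
# Drain the node, then put its disks into storage maintenance mode
# before patching ('Node01' is a placeholder).
Suspend-ClusterNode -Name 'Node01' -Drain -Wait
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -eq 'Node01' |
    Enable-StorageMaintenanceMode

# ...patch and reboot the node, then reverse the process...
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -eq 'Node01' |
    Disable-StorageMaintenanceMode
Resume-ClusterNode -Name 'Node01' -Failback Immediate

# Wait for the repair jobs to finish before moving to the next node.
Get-StorageJob
```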

Using WS2016Lab to test Windows Server 2019

If you’ve been anywhere near Twitter or any tech blogs and news sites recently, you would have noticed that Microsoft have dropped their first cut of the next Long-Term Servicing Channel OS, Windows Server 2019, into the Windows Insider ring for people like you and me to start testing. Now most people (like me) don’t have a huge amount of spare hardware sitting around for times like this, especially for testing things like Storage Spaces Direct (S2D).
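That’s where WS2016Lab comes in: it spins up an entire virtual lab, nested S2D nodes included, on a single Hyper-V host. As an illustration only, here is a LabConfig sketch along the lines of the project’s samples (the exact hashtable keys vary between versions, and the parent VHD name is a placeholder built from the Insider ISO):

```powershell
# Illustrative LabConfig.ps1 for WS2016Lab: a domain plus a four-node
# virtual S2D cluster. Keys and values are based on the project's sample
# format and may differ in your version.
$LabConfig = @{
    DomainAdminName = 'LabAdmin'
    AdminPassword   = 'LS1setup!'
    Prefix          = 'WS2019Lab-'
    SwitchName      = 'LabSwitch'
    DCEdition       = '4'
    VMs             = @()
}

# Four nested-virtualization nodes, each with virtual disks for S2D to claim.
1..4 | ForEach-Object {
    $LabConfig.VMs += @{
        VMName             = "S2D$_"
        Configuration      = 'S2D'
        ParentVHD          = 'Win2019Core_G2.vhdx'   # placeholder name
        SSDNumber          = 2
        SSDSize            = 200GB
        HDDNumber          = 4
        HDDSize            = 1TB
        MemoryStartupBytes = 2GB
    }
}
```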

Bug when applying KB4038782 September CU to Storage Spaces Direct Clusters

UPDATE (2017-09-19): Microsoft have officially recognized the bug and published a KB describing the symptoms and workaround, much like the below. See here: https://support.microsoft.com/en-us/help/4043361/disks-in-maintenance-mode-status-after-september-cumulative-update-kb

I was patching our dev cluster the other day and came across a new issue when applying the latest September Cumulative Update (KB4038782), and it seems others on the internet have hit this issue as well.

Background

First, a bit of background on the expected behaviour when performing maintenance:
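In short, when a node is paused and drained, its disks go into storage maintenance mode, and they should come back out automatically when the node resumes. With this bug they don’t, and the workaround is to take them out by hand. A minimal sketch, along the lines of the KB’s workaround:

```powershell
# Find physical disks left stuck in maintenance mode after patching
# and bring them back online.
Get-PhysicalDisk |
    Where-Object OperationalStatus -eq 'In Maintenance Mode' |
    Disable-StorageMaintenanceMode

# Then watch the rebuild jobs and confirm the disks report healthy again.
Get-StorageJob
Get-PhysicalDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus
```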

S2D Storage Tiers misconfiguration bug

I’ve been deploying a few Storage Spaces Direct (S2D) clusters lately, and I noticed a slight misconfiguration that can occur on deployment. Normally when deploying S2D, the disk types in the nodes are detected and the fastest disks (usually NVMe or SSD) are assigned to the cache, the next fastest to the Performance Tier, and the slowest to the Capacity Tier. So if you have NVMe, SSD and HDD, you would end up with an NVMe Cache, an SSD Performance Tier and an HDD Capacity Tier.
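To spot the misconfiguration on your own cluster, it’s worth checking which media type each tier actually landed on. A quick sketch using the Storage module:

```powershell
# Cache devices should show Usage 'Journal' on the fastest media...
Get-PhysicalDisk | Sort-Object MediaType |
    Select-Object FriendlyName, MediaType, Usage

# ...and each tier should sit on the media type you intended
# (e.g. Performance on SSD, Capacity on HDD).
Get-StorageTier |
    Select-Object FriendlyName, MediaType, ResiliencySettingName
```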

Expanding Storage Spaces Direct Volumes

As many of you would have seen, Windows Server 2016 has been officially launched, with evaluation media available and General Availability slated for later this month. One of the great new features in this release is Storage Spaces Direct, a Software-Defined Storage solution. There is already plenty of information on Microsoft Docs about how to get it up and running, but I thought I’d share some of the operational tasks that aren’t so obvious, starting with expanding volumes.
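As a preview of where that post goes, expanding an S2D volume is a two-step job: grow the virtual disk, then grow the partition inside it. A minimal sketch (‘Volume01’ is a placeholder volume name):

```powershell
# Step 1: grow the virtual disk itself. For a volume with multiple
# storage tiers, resize each tier with Resize-StorageTier instead.
Resize-VirtualDisk -FriendlyName 'Volume01' -Size 2TB

# Step 2: grow the partition to fill the new space.
$partition = Get-VirtualDisk -FriendlyName 'Volume01' |
    Get-Disk | Get-Partition | Where-Object Type -eq 'Basic'
$maxSize = ($partition | Get-PartitionSupportedSize).SizeMax
$partition | Resize-Partition -Size $maxSize
```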