Microsoft Storage Spaces
-
Microsoft has this thing called Storage Spaces. I don't know the history, but it's something akin to RAID, and also like Intel RST. As a result of the latter, on Microsoft systems the factory configuration apparently sometimes uses Storage Spaces.
So I have a Surface Laptop 2 on my desk. I imaged it like I image everything, and it came up with a 512GB SSD, but also a second blank 512GB SSD. Looking at images of the motherboard, what's happening here is that the 1TB configuration uses two all-in-one SSD chips, each 512GB, each literally a distinct SSD, and they're supposed to be pooled under a single Storage Space to appear as one.
It's possible to configure these Spaces in PowerShell, but I can't imagine there's any reasonable way to detect that this should be done, so configuration regarding pools would need to be added to the image profile. I'm also unclear on whether the image needs to be built for Spaces or whether Windows will adapt. This may also require treating the physical disks differently while partitioning and formatting the drives.
I'm going to be looking into patching this into the core scripts soon.
-
I've put together how to create the pools. This needs a lot of work, but the basics are there:
$matchedDiskSets = Get-PhysicalDisk | Group-Object 'FriendlyName' | Where-Object {$_.Count -gt 1} # assuming matched disks should always be pooled; this should be a configuration received from the server
if ($matchedDiskSets.Count -gt 0) {
    <# assuming all existing pools are bad
       should try to identify good pools
       to associate disks with pools:
       $pools = Get-StoragePool -IsPrimordial $false -ErrorAction SilentlyContinue
       $disksInPool = $pools | ForEach-Object { Get-PhysicalDisk -StoragePool $_ }
    #>
    Get-StoragePool -IsPrimordial $false -ErrorAction SilentlyContinue | Remove-StoragePool

    # ignoring secondary pools for now
    $matchedDiskSets[0].Group | ForEach-Object {
        Clear-Disk `
            -UniqueId $_.UniqueId `
            -RemoveOEM `
            -RemoveData `
            -Confirm:$false `
            -ErrorAction SilentlyContinue
    }
    $matchedDiskSets[0].Group | Reset-PhysicalDisk

    # create new pool and virtual disk; I picked WindowsPool and WindowsDisk arbitrarily
    New-StoragePool -FriendlyName "WindowsPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks $matchedDiskSets[0].Group `
        -ResiliencySettingNameDefault Simple |
        New-VirtualDisk -FriendlyName "WindowsDisk" -UseMaximumSize
}
The critical problem resulting from this is that the virtual disk is not going to be disk 1. It seems to vary; this is from a log from a previous image attempt with a manually created pool.
** Starting Image Download For Hard Drive 3 Partition 3
[ERROR] Error reading 208 bytes from fd 0 (err=109): The pipe has been ended
[ERROR] "[fd 0]": Error reading header: Broken pipe
ERROR: Exiting with error code 50: Could not read data from a file.
The unit I'm working with presently had the virtual disk come up as disk 2 to start, and then disk 3 after a reboot.
PS X:\Windows\System32> get-physicaldisk

Number FriendlyName
------ ------------
1      Skhynix BC501 NVMe 512GB
0      Skhynix BC501 NVMe 512GB

PS X:\Windows\System32> get-disk

Number Friendly Name
------ -------------
2      WindowsDisk
And in wie_deploy.ps1 it's assumed that machine-side disks relate 1-to-1 to the disks in the image, at line 326:
$udpProc=$(Start-Process cmd "/c curl.exe $script:curlOptions -H Authorization:$script:userTokenEncoded --data ""profileId=$profile_id&hdNumber=$($hardDrive.Number)&fileName=part$wimSource.winpe.wim"" ${script:web}GetImagingFile | wimapply - 1 C: 2>>$clientLog > x:\wim.progress" -NoNewWindow -PassThru)
Which makes sense until pools are involved and there are gaps in the index of logical disks. This is the next thing I'm going to run down; I'm not sure yet how I'm going to map the two indexes.
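One possible way to bridge the two indexes, as a sketch only (this assumes the pool and virtual disk were created with the WindowsPool/WindowsDisk names from the script above, and isn't in the scripts yet): resolve the virtual disk's number at runtime instead of assuming it.

# The virtual disk takes whatever disk number happens to be free, so look
# its number up from the Storage Spaces side rather than hard-coding it.
# Get-Disk accepts a VirtualDisk object over the pipeline.
$windowsDisk = Get-VirtualDisk -FriendlyName "WindowsDisk" | Get-Disk
$windowsDisk.Number  # the index to hand to local partition/format operations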
Edit: the Windows 10 installer can't see this disk, so I'm probably missing a little detail somewhere.
-
I had to step away from this for a little while but I just ran into a small snag on it.
When a system is using Storage Spaces to pool drives, the pooled physical drives do not appear in Get-Disk, and this messes up the disk numbers. In wie_global_functions.ps1 : Get-Hard-Drives() we get an array of the valid disks:

$script:HardDrives = $(get-disk | where-object {$_.NumberOfPartitions -gt 0 -and $_.BusType -ne "USB"} | Sort-Object Number)
In wie_deploy.ps1 : Process-Hard-Drives() we foreach through that array with an external index to get the schema:
$currentHdNumber = -1
foreach ($hardDrive in $script:HardDrives) {
    log " ** Processing Hard Drive $($hardDrive.Number)" -display -timeStamp
    $currentHdNumber++
    [...]
    $script:hdSchema = $(curl.exe $script:curlOptions -H Authorization:$script:userTokenEncoded --data "profileId=$profile_id&clientHdNumber=$currentHdNumber&newHdSize=$($hardDrive.Size)&schemaHds=$script:imaged_schema_drives&clientLbs=$($hardDrive.LogicalSectorSize)" ${script:web}CheckHdRequirements --connect-timeout 10 --stderr -)
So now we have three numbers: $hardDrive.Number, $currentHdNumber, and $script:hdSchema.SchemaHdNumber. Under normal circumstances these are all the same number, but with Storage Spaces, and who knows, maybe other RAID setups too, hardDrive.Number is the system's index for that disk, while currentHdNumber and SchemaHdNumber are the position of the disk in the array. That's all good and fine, great actually, because this disjoint allows the wie to ignore the problem.
currentHdNumber is used to request the schema, so it will always request 0 first, and on most systems there's only one drive anyway. Any time we want to get something from the server, we want this number.
hardDrive.Number then reflects any modification to the disk order. As long as we use hardDrive.Number to direct disk operations it'll work out.
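As a sketch of that rule (not the actual wie code; the request and disk operation shown are stand-ins):

$currentHdNumber = -1
foreach ($hardDrive in $script:HardDrives) {
    $currentHdNumber++
    # server-facing: the 0-based position in the sorted array, so requests
    # line up with the server's schema even when local disk numbers have gaps
    #   e.g. --data "...&hdNumber=$currentHdNumber&..."
    # disk-facing: the system's own index for this disk
    #   e.g. Get-Partition -DiskNumber $hardDrive.Number
}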
In wie_deploy.ps1 : Download-Image(), the line used to stream the .wim onto the partition is
$udpProc=$(Start-Process cmd "/c curl.exe $script:curlOptions -H Authorization:$script:userTokenEncoded --data ""profileId=$profile_id&hdNumber=$($hardDrive.Number)&fileName=part$wimSource.winpe.wim"" ${script:web}GetImagingFile | wimapply - 1 C: 2>>$clientLog > x:\wim.progress" -NoNewWindow -PassThru)
Which uses hardDrive.Number to ask the server to send the image over. The server doesn't know about disk 3, so the request errors out. Changing this to currentHdNumber or SchemaHdNumber fixes it, and now the wie is able to deploy images to Storage Spaces.
There is actually a secret fourth number, $script:imageHdToUse, which is just SchemaHdNumber.