Auto Reseed is a fascinating feature. Introduced in Exchange 2013, it meets a need that existed throughout the legacy versions: in previous versions of Exchange, if a disk hosting a mailbox database went offline, the database copy hosted on that disk became unavailable to Exchange as well.
In Exchange 2010 that would have meant a single copy was impacted; with the new storage recommendation of hosting multiple copies per volume, the impact becomes quite a bit larger. To minimize the risk of running with a reduced number of copies for a prolonged period of time, “Auto Reseed” was introduced.
Simply put, it takes a spare disk (which you assigned beforehand) and uses it to replace the failed disk, automatically reseeding the mailbox databases hosted on that disk.
The downside, however, is that the feature doesn’t actually do any of the prerequisite configuration for you. Installing the disks correctly, creating the directories, adding spare disks to the system, and replacing bad disks all require manual action by an administrator.
The primary input condition for the Auto Reseed workflow is a database copy that is in a Failed and Suspended (F&S) state for 15 consecutive minutes. When that condition is detected, the following Auto Reseed workflow is initiated:
- Try to resume the database copy up to 3 times, with 5 minute sleeps in between each try. Sometimes, after an F&S database copy is resumed, the copy remains in a Failed state. This can happen for a variety of reasons, so this first step is designed to handle all such cases; AutoReseed will automatically suspend a database copy that has been Failed for 10 consecutive minutes to keep the workflow running. If the suspend and resume actions don’t result in a healthy database copy, the workflow continues.
- Next, AutoReseed will perform a variety of pre-requisite checks. For example, it will verify that a spare disk is available, that the database and its log files are configured on the same volume, and in the appropriate locations that match the required naming conventions. In a configuration that uses multiple databases per volume, AutoReseed will also verify that all database copies on the volume are in an F&S state.
- Next, AutoReseed will attempt to assign a spare volume up to 5 times, with 1 hour sleeps in between each try.
- Once a spare has been assigned, AutoReseed will perform an InPlaceSeed operation using the SafeDeleteExistingFiles seeding switch. If one or more database files exist, AutoReseed will wait for 2 days before in-place reseeding (based on the LastWriteTime of the database file). This provides an administrator with an opportunity to preserve data, if needed. AutoReseed will attempt a seeding operation up to 5 times, with 1 hour sleeps in between each try.
Once all retries are exhausted, the workflow stops. If, after 3 days, the database copy is still F&S, the workflow state is reset and it starts again from Step 1. This reset/resume behavior is useful (and intentional) since it can take a few days to replace a failed disk, controller, etc..
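You can watch this workflow (or short-circuit its retry waits) from the Exchange Management Shell. A minimal sketch, using the server name from my lab:

```powershell
# Find any copies currently stuck in the Failed and Suspended state
Get-MailboxDatabaseCopyStatus -Server SFEX01 |
    Where-Object { $_.Status -eq "FailedAndSuspended" }

# If you don't want to wait for the workflow's next retry,
# you can attempt the resume yourself
Resume-MailboxDatabaseCopy -Identity "DB01\SFEX01"
```

If the resume succeeds, the workflow simply sees a healthy copy and stops; if not, it continues with the spare-disk steps described above.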
Before you begin
Be aware that it is best to try this in a lab beforehand. The configuration is quite simple but can be very confusing. The naming conventions you use are largely up to you but need to be consistent across all nodes in the database availability group. The EDB and log folder names have to end in “.db” and “.log” respectively and have to be in specific locations. Additionally, the number of copies has to be equal on each disk: you cannot have, say, 4 copies on one disk whilst having 2 on another. So it is important to plan out what the configuration will look like beforehand.
Currently there are 3 parameters you need to be aware of when configuring Auto Reseed:
- AutoDagVolumesRootFolderPath: Under this folder we will create mount points for the disks we assign to our Exchange servers.
- AutoDagDatabasesRootFolderPath: This folder will be hosting the mount points for our databases.
- AutoDagDatabaseCopiesPerVolume: With this parameter we specify the number of database copies each volume will host.
As you can see, things can already be quite confusing and we have not even started configuring the feature! It is easiest to remember that each volume you specify for the Auto Reseed feature will have multiple mount points, except for our spare disk(s).
The values I used in my lab are as follows:
- AutoDagVolumesRootFolderPath: C:\ExchVols
- AutoDagDatabasesRootFolderPath: C:\ExchDBs
- AutoDagDatabaseCopiesPerVolume: 2
I will have 3 disks configured: 2 disks hosting databases and 1 spare disk. My database availability group is named “DAG”. Each active disk will host 2 databases, for 4 databases in total. Naming is kept simple, as follows:
Two Exchange 2013 servers are deployed in my database availability group:
Step 1: Configuring the root paths for the databases and volumes
This is where it begins. Open the Exchange Management Shell and enter the following:
Set-DatabaseAvailabilityGroup DAG -AutoDagDatabasesRootFolderPath "C:\ExchDBs"
This sets the root folder path for the databases to “C:\ExchDBs”.
Set-DatabaseAvailabilityGroup DAG -AutoDagVolumesRootFolderPath "C:\ExchVols"
With this command we configure the root folder path for the volumes to point to “C:\ExchVols”.
Set-DatabaseAvailabilityGroup DAG -AutoDagDatabaseCopiesPerVolume 2
And finally we configure the number of copies hosted per (active) volume to 2.
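To confirm all three values landed on the DAG object, a quick check (DAG name from my lab):

```powershell
# Verify the AutoReseed-related settings on the DAG
Get-DatabaseAvailabilityGroup DAG |
    Format-List AutoDagDatabasesRootFolderPath,
                AutoDagVolumesRootFolderPath,
                AutoDagDatabaseCopiesPerVolume
```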
Notice that we do not define which disk will be the spare. Exchange automatically recognizes a spare disk by the fact that it has no mount point under the database root folder associated with it.
Step 2: Creating the directories
First of all we will have to create our root directories (the Database root path and the Volumes root path):
Once we have these created, we have to work our way down and create a folder for each volume as well as a folder for each database:
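Using the paths and database names from my lab, the whole folder structure can be created in one go (adjust the names to your own convention):

```powershell
# Root folders for the volumes and the databases
New-Item -ItemType Directory -Path C:\ExchVols, C:\ExchDBs

# One folder per volume (Volume3 will be the spare)
1..3 | ForEach-Object { New-Item -ItemType Directory -Path "C:\ExchVols\Volume$_" }

# One folder per database
"DB01","DB02","DB03","DB04" | ForEach-Object {
    New-Item -ItemType Directory -Path "C:\ExchDBs\$_"
}
```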
Step 3: Associate the disks with the mount points
And this is where things get tricky! Each of our 3 disks needs to be associated with the right folder(s).
Our first disk will be tied to the folder “C:\ExchVols\Volume1” and to the database folders “C:\ExchDBs\DB01” and “C:\ExchDBs\DB03”.
Mountvol.exe C:\ExchVols\Volume1 \\?\Volume{GUID}\
Mountvol.exe C:\ExchDBs\DB01 \\?\Volume{GUID}\
Mountvol.exe C:\ExchDBs\DB03 \\?\Volume{GUID}\
The second disk will be tied to the folder “C:\ExchVols\Volume2” and to the database folders “C:\ExchDBs\DB02” and “C:\ExchDBs\DB04”.
Mountvol.exe C:\ExchVols\Volume2 \\?\Volume{GUID}\
Mountvol.exe C:\ExchDBs\DB02 \\?\Volume{GUID}\
Mountvol.exe C:\ExchDBs\DB04 \\?\Volume{GUID}\
The third and last disk will be a spare disk and needs to be tied to the folder “C:\ExchVols\Volume3”
Mountvol.exe C:\ExchVols\Volume3 \\?\Volume{GUID}\
Yes, you noticed that correctly: there are disks associated with multiple folders, and that is what confuses most people! This structure is necessary for Exchange to pick up on the configuration correctly.
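Mountvol expects the volume GUID path in place of the (GUID) placeholder above. Running mountvol with no arguments lists every volume GUID path on the system; you can also pull them via WMI. A quick sketch:

```powershell
# List every volume GUID path and its current mount points
mountvol

# Or, via WMI, pair each mount point with its \\?\Volume{GUID}\ path
Get-WmiObject Win32_Volume | Select-Object Name, DeviceID
```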
Step 4: Creating the database directories
It’s important to do this step only after associating the disks with their folders: the database directories have to end up on the mounted volumes, so creating them beforehand simply won’t work and is going to cause pain.
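With the disks mounted, the “.db” and “.log” folders can be created under each database mount point. Using my lab's naming (adjust to your own):

```powershell
# Create the database and log folders on the now-mounted volumes;
# the folder names must end in .db and .log respectively
"DB01","DB02","DB03","DB04" | ForEach-Object {
    New-Item -ItemType Directory -Path "C:\ExchDBs\$_\$_.db"
    New-Item -ItemType Directory -Path "C:\ExchDBs\$_\$_.log"
}
```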
Step 5: Creating the mailbox databases
Finally! Our last step… With all the folders created and the disks associated with their appropriate folder(s) we can start creating our mailbox databases.
New-MailboxDatabase -Name DB01 -Server SFEX01 -LogFolderPath C:\ExchDBs\db01\db01.log -EdbFilePath C:\ExchDBs\db01\db01.db\db01.edb
New-MailboxDatabase -Name DB02 -Server SFEX01 -LogFolderPath C:\ExchDBs\db02\db02.log -EdbFilePath C:\ExchDBs\db02\db02.db\db02.edb
New-MailboxDatabase -Name DB03 -Server SFEX01 -LogFolderPath C:\ExchDBs\db03\db03.log -EdbFilePath C:\ExchDBs\db03\db03.db\db03.edb
New-MailboxDatabase -Name DB04 -Server SFEX01 -LogFolderPath C:\ExchDBs\db04\db04.log -EdbFilePath C:\ExchDBs\db04\db04.db\db04.edb
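New-MailboxDatabase only creates the database object; the databases still need to be mounted before they can be used (the cmdlet output will also suggest restarting the Information Store service so it picks up the new databases). A minimal sketch:

```powershell
# Mount the newly created databases
"DB01","DB02","DB03","DB04" | ForEach-Object { Mount-Database $_ }
```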
Step 6: Rinse and repeat!
Steps 2 to 4 need to be repeated on every node in the database availability group (the settings from step 1 apply to the DAG as a whole). That means creating all of the directories, identically, and associating the disks with the right folder(s).
Step 7: Creating the database copies
Once the nodes have been configured correctly, we can create the copies (in my case it’s a 2-node database availability group):
Get-MailboxDatabase DB* -Server SFEX01 | Add-MailboxDatabaseCopy -MailboxServer SFEX02
The databases will seed and should return a healthy state:
Auto Reseed recovery
As configuring it is just not enough, I went a step further and disconnected the disk hosting DB01 and DB03 (Volume1). I did this by removing the disk from Hyper-V (the disks are attached as SCSI disks, allowing for hot swapping).
When running the “Get-MailboxDatabaseCopyStatus” cmdlet we can see that our database copies have gone into a “Failed and Suspended” state.
Running “Get-MailboxDatabase” we can add to that and see that the databases from the failed volume are now active on our second node.
After our workflow went through the motions, we can see that it has assigned the spare disk and seeded the databases.
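If you want to follow the workflow while it runs, AutoReseed writes its progress to the high availability crimson channel in the Windows event log; a sketch, assuming the default channel name:

```powershell
# Show the most recent seeding-related events written by AutoReseed
Get-WinEvent -LogName "Microsoft-Exchange-HighAvailability/Seeding" -MaxEvents 20 |
    Format-Table TimeCreated, Id, Message -AutoSize
```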
This should complete within 15-30 minutes of the issue being detected, saving the administrator a whole heap of trouble worrying about replacing disks, uptime, risk calculations, etc. Theoretically you could get away with only quarterly scheduled replacement windows for disks and not get into trouble for it. Obviously, that is not a recommendation (seriously!), but Auto Reseed is a nifty feature that can help prioritize work and keep our night’s sleep a full one.