I’m catching up on some of the sessions from Ignite 2015. I wasn’t able to go, but thankfully many of the sessions are now available as videos with downloads of the presentations.
For some time now the Exchange team have defined a Preferred Architecture; the one for Exchange 2013 is here. This Preferred Architecture defines a set of best practices, including the use of multi-role servers and commodity physical servers.
These are some of my highlights:
A Preferred Architecture?
If you are buying an expensive item it’s generally a good idea to read the manufacturer’s manual; the Preferred Architecture is that manual for Exchange.
For Exchange 2013 the Preferred Architecture says this:
While Exchange 2013 offers a wide variety of architectural choices for on-premises deployments, the architecture discussed below is our most scrutinized one ever.
While there are other supported deployment architectures, other architectures are not recommended.
In other words – we strongly recommend that you do this.
There is good reasoning behind the recommendations made in the Exchange 2013 Preferred Architecture. As Microsoft reveal more about Exchange 2016, further reasoning for following those recommendations is emerging.
Multi-Role to Single Role
Exchange 2010 and Exchange 2013 had a number of server roles (client access server, mailbox server), which could be split across different servers. In Exchange 2013 the Preferred Architecture was to deploy multi-role servers – in other words, put all the roles on one server. In Exchange 2016 there is only the mailbox server; the architectural alternative of splitting the client access server role from the mailbox server role no longer exists. I never really understood the benefit of splitting them anyway; not having to discuss the alternative in future will be most welcome.
Topology Requirements and Improvements
For co-existence you’ll need to be on Exchange 2010 SP3 RU11 or later. You’ll need Windows Server 2012 R2 to run Exchange 2016. You’ll also need your Active Directory to be at the Windows Server 2008 R2 forest and domain functional levels (FFL/DFL) or later, running on at least Windows Server 2008 R2.
From a client perspective you’ll need at least Outlook 2010 SP2, along with some specific patches; Outlook 2013 will also need to be at SP1 with some patches. This particular recommendation is, of course, open to change before Exchange 2016 ships.
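As a quick sanity check before introducing Exchange 2016, you can list the build numbers of your existing servers from the Exchange Management Shell. A sketch – Exchange 2010 SP3 reports as a 14.3.x build, and you’d confirm the exact rollup against Microsoft’s published build-number tables:

```powershell
# List each Exchange server with its roles and build number.
# Exchange 2010 SP3 shows as version 14.3; check the specific build
# against Microsoft's build-number table to confirm RU11 or later.
Get-ExchangeServer | Format-Table Name, ServerRole, AdminDisplayVersion -AutoSize
```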
There are improvements in indexing which will result in significant reductions in inter-site replication traffic which is always welcome.
MAPI/HTTP is now the default protocol; MAPI/RPC is finally completely and utterly dead. The use of HTTP significantly improves the ability to deliver services from consolidated data centres and across slow networks, including the Internet.
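MAPI over HTTP is controlled at the organisation level, and the setting has existed since Exchange 2013 SP1. A sketch of checking and enabling it from the Exchange Management Shell:

```powershell
# Check whether MAPI over HTTP is enabled for the organisation
Get-OrganizationConfig | Format-List MapiHttpEnabled

# Enable it (Exchange 2013 SP1 or later); Outlook clients pick up
# the change the next time they reconnect
Set-OrganizationConfig -MapiHttpEnabled $true
```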
These changes, and others, make for more seamless co-existence with Exchange 2013 and an improved migration experience. If you’ve previously followed the Preferred Architecture you are potentially in a place to drop an Exchange 2016 server into the environment and start using it quite quickly.
Building Block Hardware
Microsoft’s preferred architecture, as it was with Exchange 2013, continues to use physical commodity building block servers.
If I could recover all the hours that I’ve spent debating this point I would have invested them in practising sketching and I would now be a master illustrator. As the laws of physics don’t currently allow me to retrieve that time, I’ll continue to convince people of the error of their ways when they want to add virtualisation, RAID, SANs, backup and all sorts of other resiliency technology.
Ross Smith’s point is this: Exchange has a full resiliency and recovery model, so use it. To make the best use of that recovery model, these are the recommendations for the building-block server:
- Servers are deployed on commodity hardware
- Dual-socket systems only (20-24 cores total, mid-range processors)
- Up to 192GB of memory
- All servers handle both client connectivity and mailbox data
- JBOD storage
- Large capacity 7.2k SAS disks
- Battery-backed cache controller (75% write/25% read)
- Multiple databases per volume
- AutoReseed with hot spare
- Data volumes are formatted with ReFS
- Data volumes are encrypted with BitLocker
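Preparing one of those JBOD data volumes might look roughly like this – a sketch only, assuming the Windows Server 2012 R2 Storage and BitLocker PowerShell modules; the drive letter, volume label, and choice of BitLocker protector are placeholders, and the Exchange guidance is to format ReFS with integrity streams disabled:

```powershell
# Format a data volume with ReFS; integrity streams are switched off
# per the Exchange storage guidance
Format-Volume -DriveLetter E -FileSystem REFS -NewFileSystemLabel "ExVol1" `
    -SetIntegrityStreams $false

# Encrypt the data volume with BitLocker. This example adds a recovery
# password protector; pick protectors to suit your key-management policy.
Enable-BitLocker -MountPoint "E:" -EncryptionMethod Aes256 `
    -RecoveryPasswordProtector
```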
You are still limited to 100 databases per server and 16 servers in a DAG, but these limits are understandable, and I’ve never seen them become a significant constraint.
These are just my highlights, for more architecture changes get the presentation deck here or watch the video below: