The category confusion that produces bad federal cloud builds
Most cloud architects arrive at their first federal engagement assuming the work is a stricter version of the commercial landing zone they already know. The story they tell themselves is: same Cloud Adoption Framework, tighter policies, more audit logging, slower change control. With those three adjustments, they reason, a federal landing zone is just a hardened commercial one.
It is not. Federal-grade landing zones are a different architecture. The identity model is different. The network topology is different. The data classification handling is different. The operating-procedure overlay is different. The stakeholder relationships are different. Trying to ship a federal landing zone by tightening a commercial one is the single most common reason year-one federal cloud engagements run six months over plan.
This post walks through what actually changes — the dimensions where federal-grade design differs from commercial CAF — based on the federal engagements we have delivered from our Abu Dhabi headquarters over the past several years. Most of the specifics are Microsoft Azure on UAE North (the federal-grade Azure region for the UAE); the architectural principles transfer to AWS Middle East (UAE) for AWS-anchored federal portfolios with the obvious vendor translations.
Identity federation: the most consequential difference
Commercial Azure landing zones federate Microsoft Entra ID with whatever the customer's primary identity source is — usually their on-premises Active Directory, sometimes a parent-company Entra ID tenant, occasionally a partner federation. The federation flow is well-understood and the configuration is largely template-driven.
Federal Entra ID federations integrate with federal directories that have specific trust, audit and operating-procedure requirements not present in commercial federation. Three things change. First, the trust relationship is governed by federal policy and requires formal approval through the federal-stakeholder process. Second, the audit logging for federated authentication events streams to federal-stakeholder dashboards as well as the tenant's own log workspace. Third, the operating procedures for managing the federation — credential rotation, federation metadata refresh, trust certificate replacement — follow federal standards rather than the commercial best-practice the engineer might be used to.
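To make the operating-procedure point concrete, here is a minimal sketch of the kind of check a federation runbook might automate: flagging a trust certificate for rotation well before its expiry date. The 90-day lead time is an invented figure for illustration; the real window comes from the applicable federal standard.

```python
# Illustrative rotation check. Federal procedure typically mandates replacing a
# trust certificate well before expiry; the 90-day lead time here is an
# invented figure, not a real federal standard.
from datetime import datetime, timedelta, timezone

ROTATION_LEAD = timedelta(days=90)  # assumption for illustration

def rotation_due(cert_not_after: datetime, now: datetime) -> bool:
    """True once the certificate enters the mandated rotation window."""
    return now >= cert_not_after - ROTATION_LEAD

expiry = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(rotation_due(expiry, datetime(2025, 12, 15, tzinfo=timezone.utc)))  # True
```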
For cleared work, identity becomes more involved. Personnel security clearance under FAHR (Federal Authority for Government Human Resources) determines which engineers can administer which parts of the tenant, and the PIM configuration enforces this at the role level. Cleared administrators have access; uncleared ones do not, and the exclusion extends beyond role assignments to network and physical access. The audit trail for any clearance-gated action is part of the standard federal evidence base.
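A minimal sketch of what clearance gating looks like in automation terms, before any Azure-specific tooling: the clearance tiers, role floors and fail-closed default below are illustrative assumptions, not FAHR's actual scheme.

```python
# Hypothetical sketch: gate PIM-eligible role assignments on personnel
# clearance. Tiers and role mappings are illustrative only.

CLEARANCE_RANK = {"none": 0, "baseline": 1, "cleared": 2}

# Minimum clearance required to hold each privileged role (illustrative).
ROLE_MIN_CLEARANCE = {
    "Reader": "none",
    "Contributor": "baseline",
    "Owner": "cleared",
    "User Access Administrator": "cleared",
}

def may_assign(engineer_clearance: str, role: str) -> bool:
    """Return True if the engineer's clearance meets the role's floor."""
    required = ROLE_MIN_CLEARANCE.get(role)
    if required is None:
        # Unknown role: fail closed, the posture federal audits expect.
        return False
    return CLEARANCE_RANK[engineer_clearance] >= CLEARANCE_RANK[required]

assert may_assign("cleared", "Owner")
assert not may_assign("baseline", "User Access Administrator")
```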
Classification-driven residency enforcement
Commercial residency is binary. Either the workload is in the residency-compliant region or it is not. Configuration is a per-resource decision.
Federal classification is multi-tier. Different classification levels have different residency requirements, different access controls, different audit-logging characteristics and different egress controls. The landing zone has to encode classification as a structural property of where workloads land, not as a per-resource property managed by humans. Azure Policy at the management-group level enforces classification-driven residency — a workload tagged as a specific classification can only land in the corresponding management group branch with the corresponding region restrictions enforced.
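What that enforcement looks like in practice is an Azure Policy deny rule scoped to the classification's management-group branch. The sketch below shows the shape of such a rule, written as a Python dict for readability; the tag key, tag value and region list are placeholders.

```python
# Sketch of a classification-driven residency rule in Azure Policy form.
# The tag key, tag value and region list are placeholders.
residency_policy_rule = {
    "if": {
        "allOf": [
            # Only workloads carrying this (hypothetical) classification tag.
            {"field": "tags['data-classification']", "equals": "restricted"},
            # Deny anything landing outside the approved region.
            {"field": "location", "notIn": ["uaenorth"]},
        ]
    },
    "then": {"effect": "deny"},
}
```

Because the assignment lives at the management-group branch for that classification, a mis-tagged or mis-placed deployment is denied before any human reviews it — which is what makes residency a structural property rather than a procedural one.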
Classification labelling has to flow consistently from data to workload to resource to log. The same classification tag a data steward applies to a document needs to map cleanly to the resource group that hosts the application processing that document and to the log workspace that stores the audit trail. Drift between these layers is one of the most common federal audit findings.
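A cheap way to catch that drift before an auditor does is a periodic consistency check across the layers. The sketch below assumes every layer exposes its classification under the same tag key; the key name and the inventory source are illustrative.

```python
# Illustrative drift check: the classification label must match at every layer
# that touches the data. Layer names and the tag key are assumptions.
TAG_KEY = "data-classification"

def find_drift(layers: dict) -> list:
    """Return the layers whose classification tag disagrees with the data layer."""
    expected = layers["data"].get(TAG_KEY)
    return [
        name
        for name, tags in layers.items()
        if tags.get(TAG_KEY) != expected
    ]

inventory = {
    "data": {TAG_KEY: "restricted"},
    "resource-group": {TAG_KEY: "restricted"},
    "log-workspace": {TAG_KEY: "internal"},  # drifted: a typical audit finding
}
print(find_drift(inventory))  # ['log-workspace']
```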
Federal-stakeholder monitoring integration
Commercial landing zones stream logs to a SIEM the customer operates (or a managed SOC the customer engages). The audit boundary stops at the customer's edge.
Federal landing zones stream selected log categories to federal-stakeholder dashboards alongside the customer's own visibility stack. The Cyber Security Council and the relevant sector-regulator dashboards may require live integration. The technical configuration is straightforward — log streaming via Event Hub or direct API — but the operating model is different. Federal stakeholders see the logs in near-real-time, which changes the incident-response dynamic. The customer's SOC has to assume the federal stakeholder may see an alert before the customer does and may ask about it.
This is not a bad thing once the operating model adjusts to it. The Cyber Security Council brings substantial threat-intelligence visibility that a single customer's SOC cannot match. The federal-stakeholder relationship becomes part of the operational defence. The first few months of any federal engagement are mostly the operating team learning to work with this rhythm rather than against it.
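On the plumbing side, a minimal consumer of the streamed categories looks like the sketch below, using the azure-eventhub Python SDK. The connection string and hub name are placeholders, and a real dashboard integration would add checkpointing and schema handling per log category.

```python
# Minimal receive loop for Azure diagnostic logs routed to an Event Hub.
# Connection string and hub name are placeholders; a production consumer
# would add a checkpoint store and error handling.
import json
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Azure diagnostic settings wrap records in a {"records": [...]} envelope.
    for record in json.loads(event.body_as_str()).get("records", []):
        print(record.get("category"), record.get("operationName"))

client = EventHubConsumerClient.from_connection_string(
    "<connection-string>",       # placeholder
    consumer_group="$Default",
    eventhub_name="audit-logs",  # placeholder name
)
with client:
    client.receive(on_event=on_event, starting_position="-1")
```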
Network topology and the role of ExpressRoute
Federal landing zones rely heavily on ExpressRoute private peering for connectivity. Public internet egress is restricted by policy for classification-bearing workloads — workloads communicate with federal directories, federal-stakeholder dashboards and partner federal entities through private connectivity rather than internet routing.
The hub-and-spoke design therefore has a different shape. The hub is denser — additional ExpressRoute circuits, additional firewall capacity for the inter-entity traffic, additional security inspection points. The spokes are more segmented — classification levels have their own spokes with the egress rules enforced at the spoke boundary, not the resource boundary.
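One way to keep that segmentation honest is to treat the spoke layout as data and derive firewall and route configuration from it, rather than hand-editing rules per resource. The sketch below is illustrative only; the classification names and egress destinations are placeholders.

```python
# Illustrative spoke model: egress policy is a property of the spoke (and its
# classification), never of an individual resource. All names are placeholders.
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str
    classification: str
    allowed_egress: list = field(default_factory=list)  # private peers only

spokes = [
    Spoke("spoke-restricted", "restricted",
          allowed_egress=["federal-directory", "stakeholder-dashboards"]),
    Spoke("spoke-internal", "internal",
          allowed_egress=["federal-directory", "partner-entities"]),
]

for spoke in spokes:
    # A real pipeline would emit firewall rules for the hub inspection point;
    # here we just show that the rule set derives from the spoke definition.
    print(spoke.name, "->", ", ".join(spoke.allowed_egress) or "no egress")
```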
Operating-procedure overlay
Commercial change management runs on the customer's own cadence. Federal change management runs on the federal entity's cadence, which is usually slower for production-affecting changes and has additional approval steps for changes that touch federated identity, audit logging or classification-handling configuration.
This is the dimension that surprises commercial-experienced cloud engineers the most. A configuration change that takes hours in a commercial engagement may take weeks in a federal engagement, not because federal change management is inefficient but because the change has stakeholder reach that the commercial change does not. The operating model has to accept this and plan for it.
What does not change
Three things are common to commercial and federal landing zones. The underlying Azure resource model. The Microsoft technical documentation. The hyperscaler's shared-responsibility model. These three are the reason cloud engineers think federal is "just a stricter commercial" — they recognise the building blocks. The architecture that uses those building blocks is what changes.
Bottom line
Federal-grade landing zones are a different architecture, not a stricter commercial one. The identity federation is different. The classification handling is structural rather than per-resource. The network topology assumes private connectivity for federal-stakeholder integration. The operating model accepts federal-stakeholder reach as part of standard operations.
For federal entities planning a cloud journey, the lesson from the engagements we have delivered is that year-one is design-heavy and stakeholder-heavy. By year two, with the landing zone signed off and the operating relationships established, the engagement looks much more like a normal managed-cloud rhythm. Continuity matters here more than in almost any other category of work — the relationships with federal stakeholders compound, and so does the trust.