IP Care Enterprise Service

UFC in the UAE (2020–2025) — Five Years of Pay-Per-View IT, One Operating Model

How IP Care has delivered arena WiFi, live production LAN, broadcast uplinks and security operations for UFC events on Yas Island across five consecutive years — from Fight Island in 2020 to the current run at Etihad Arena.

Overview

Most event IT engagements are single-shot. The team builds the infrastructure, runs it for the event window, tears it down, files the report, walks away. The next event is a different client, a different venue and a different brief.

UFC in the UAE has been the opposite. What began in 2020 as a COVID-era bubble — a self-contained event environment on Yas Island while most of global sport was shut down — has run continuously for five years. Same venue family. Same broadcast partners. Same operating expectations. Same IT delivery team. Many of the engineers on the floor for the 2025 events were on the floor in 2020. That continuity is unusual in event work and it is the single most important factor in why the operation runs the way it does.

This case study walks through the technical architecture, the operating model and what changed year-over-year across the five-year run.

— The brief —

UFC arena events are different from tournament IT in three specific ways. The first is recurrence and rhythm: UFC events in the UAE have happened multiple times per year, with a predictable cadence, in a small number of related venues. Tournament IT is built, run and torn down once. UFC IT is built, refined, and run again — and again, and again.

The second is the production LAN. UFC produces its own live pay-per-view broadcast for a global audience, with strict latency and jitter tolerances on the production network. Host-broadcaster expectations in tournament IT are exacting; the UFC production LAN's are tighter still in specific dimensions, particularly the rules around redundancy for camera, audio, replay and graphics traffic.

The third is the regulatory and operational interface with the Abu Dhabi Sports Council, ADMCC for security camera infrastructure, TDRA for spectrum allocation and the venue management team. These relationships compound year-over-year. After five years they are deep, well-rehearsed and effectively procedural.

— Origins on Yas Island, 2020 —

In mid-2020 UFC was looking for a venue capable of hosting controlled, bubble-style events while most of the world was operating under pandemic restrictions. Yas Island had the venue capacity, the hotel-bubble logistics, the regulatory environment and the political will to make it work. What it needed was IT infrastructure that could be stood up at short notice and run reliably without large on-site teams.

The first event run was tight. The deployment window was compressed by quarantine and arrival logistics. Half the engineering decisions had to be made in the venue itself because there was no realistic way to do site surveys at the usual depth before kit had to land. The first night went live with everything working and nothing surplus on the bench. We learned what the spec needed to be by running it once at the absolute minimum, and every year since has built the contingency back in.

— Evolution of the architecture —

The 2020 build was conservative in topology and lean in kit. By 2021 we had moved to a permanent dual-controller WiFi posture across the venue and added a parallel staging environment so that pre-event validation could happen without affecting the live network. By 2022, with multiple events per year locked into the calendar, we moved to a near-permanent network footprint at Etihad Arena — the same switching core, the same firewall stack, the same SSID architecture from event to event with a documented pre-event validation runbook. By 2023 the production LAN had moved to a fully redundant Catalyst 9500-class design and the broadcast uplinks had been re-engineered with quad-redundant carrier paths.

By 2024 and 2025, the architecture had matured into what is effectively a long-running event platform with periodic re-validation, rather than a series of one-shot deployments. Most pre-event work is now configuration drift detection, firmware currency, hardware capacity headroom and rehearsal of the failure scenarios — not initial design. That maturity is the payoff of the recurring engagement model and is part of why each subsequent year has been operationally calmer than the one before it.
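Configuration drift detection of the kind described above reduces to a simple discipline: pull each device's running configuration, compare it against the version-controlled baseline, and flag any divergence before event day. A minimal sketch of that comparison, using hypothetical device names and config snippets rather than the actual on-site tooling:

```python
import difflib

# Hypothetical version-controlled "golden" configs per device.
baseline = {
    "core-sw-1": "hostname core-sw-1\nvlan 10\nvlan 20\n",
    "core-sw-2": "hostname core-sw-2\nvlan 10\nvlan 20\n",
}

# Hypothetical running configs as pulled from the devices pre-event.
running = {
    "core-sw-1": "hostname core-sw-1\nvlan 10\nvlan 20\n",
    "core-sw-2": "hostname core-sw-2\nvlan 10\nvlan 20\nip access-list TEST\n",
}

def drift_report(baseline, running):
    """Return {device: unified-diff lines} for every device that drifted."""
    report = {}
    for device, golden in baseline.items():
        diff = list(difflib.unified_diff(
            golden.splitlines(), running[device].splitlines(),
            fromfile=f"{device}.baseline", tofile=f"{device}.running",
            lineterm=""))
        if diff:
            report[device] = diff
    return report

for device, lines in drift_report(baseline, running).items():
    print(f"DRIFT on {device}:")
    print("\n".join(lines))
```

In this illustration the scan would flag only core-sw-2, whose running config carries a leftover access list the baseline does not; that is exactly the class of finding a pre-event drift scan exists to catch.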

— The kit —

Current build at Etihad Arena: approximately 90 HPE Aruba WiFi 6E access points across the bowl, concourses, hospitality and back-of-house; a redundant Aruba CX 8325 switching core; an active-passive Palo Alto firewall pair; 25 Gbps fibre uplink to a redundant carrier handoff with diverse paths; a Cisco Catalyst 9500-series production LAN physically segmented from every other network; pre-staged microwave PtP backhaul kit for emergency uplink; CCTV integration with ADMCC-stipulated retention and venue command-centre handoff; and a portable broadcast rack feeding the UFC production crew.

On the security side, a tournament-style SOC for the event window using Palo Alto Cortex XSIAM as the SIEM, with monitoring across all event-network segments, identity, perimeter and CCTV ingestion. The SOC sits inside the same operating window as the NOC, with a unified bridge cadence and shared escalation paths.

— The numbers —

Five consecutive years (2020 through 2025). Dozens of fight cards. Up to 18,000 spectators per event in the Etihad Arena bowl, plus hospitality, back-of-house and broadcast headcount. A peak concurrent device count in the 25,000-plus range across an event night. Sub-2-hour event-day pre-checks across the full network and production LAN, now operationally routine. Zero broadcast-affecting incidents across the full multi-year run on the production LAN. A handful of fan-WiFi P3 and P4 findings across the years, every one of them inside the SLA window for resolution.

— The operational rhythm —

Event week is a five-day cycle. Day minus four: hardware check and config drift scan. Day minus three: production LAN walkthrough with the UFC technical team. Day minus two: full SSID validation, RF re-sweep where venue layout has changed, network-segment penetration sanity check. Day minus one: rehearsal — every category of event-day incident is simulated, from a single AP failure to a primary uplink failover, with the on-site team and the remote NOC running the drill end-to-end. Event day: standard six-hour bridge cadence from doors to broadcast wrap, with the SOC and NOC co-located and dashboard-shared.

Post-event: a 60-minute hot wash within two hours of broadcast wrap, while the team is still on site. A formal post-event report within three working days. Action items into the runbook for the next event. The runbook is what carries the operational learning across the years.

— The hardest moments —

The hardest single moment was a 2021 carrier-side fibre cut affecting the primary uplink three hours before doors. The pre-staged microwave PtP link went up and the event proceeded without anyone outside the engineering team knowing the primary was down. That incident is the reason every UFC event since has had a fully tested PtP backup on the bench, not in a warehouse.

The second was configuration drift on the production LAN in 2022: a stale ACL on a switchport, left over from a non-production test the previous month. It was caught in the pre-event drift scan and remediated 45 minutes before the broadcast crew arrived on site. The lesson is now standard practice: every event-day-relevant configuration is version-controlled, and every drift scan is run twice in the lead-up.

The recurring theme across the five years has been small, undramatic issues caught early because the operating discipline runs deep. None of them have surfaced as event-day incidents. That is the deliverable.

— What works —

Continuity. The same team running events year over year compounds operational knowledge in a way that no documentation can substitute for. Engineers who have been in the room before recognise patterns faster, escalate sooner, and execute runbooks more confidently than a freshly assembled team can.

A near-permanent network footprint. The shift from "build it again every event" to "validate it every event" cut event-week labour by roughly 40 percent and dropped the error surface to a fraction of where it was in 2020. The cost model favours the client; the risk profile favours the operation.

A unified NOC and SOC. UFC events are high-profile and globally watched. The threat surface during the event window is real. Hosting the SOC alongside the NOC in the same operating cadence has produced zero broadcast-impacting security events across the five-year run, and a manageable trickle of low-severity findings that have all been contained inside the SLA window.

A long relationship with the venue and the regulator. The five-year run has produced standing operating procedures with the venue management team, with ADMCC for CCTV and security operations and with TDRA for spectrum coordination. New events drop into the existing relationships rather than building them from scratch.

— What we would change for year six —

Move the production LAN switching core to a fully redundant pair of Catalyst 9500-X with full feature parity for traffic engineering on broadcast flows. The current core is adequate; the upgrade buys headroom for future broadcast resolution increases without re-architecting the LAN.

Add a second pre-staged microwave PtP path to a different carrier-handoff location, so the existing primary fibre and the existing PtP backup have a fully independent third option for the rare scenario in which both fail simultaneously.

Expand the SOC scope to include continuous monitoring outside the event window for the venue’s permanent network footprint. The venue is currently dark from a SOC perspective between events; bringing it under continuous monitoring removes a gap in the threat model.

— Why the multi-year run matters —

Most event IT in the region is procured event-by-event, with a different vendor every couple of years. The UFC run has been the opposite, and the operating maturity that produces is visible in the numbers — zero broadcast-affecting incidents across five years, sub-2-hour pre-event validation cycles, manageable low-severity findings handled inside SLA.

For organisers planning a multi-year programme — whether in sports, concerts or recurring conference series — the lesson is that vendor continuity is a strategic decision, not a procurement detail. A vendor running the same operation for the third time is materially different from a vendor running it for the first. The UFC engagement is the most visible example in our portfolio of what that difference looks like in practice.

Key Features

90+ WiFi 6E APs

Etihad Arena bowl, concourses, hospitality and back-of-house — engineered for 25,000+ concurrent devices on event night.

Production LAN

Physically segmented Cisco Catalyst 9500-class network feeding UFC live broadcast — sub-millisecond latency engineering, fully redundant.

Broadcast Uplinks

25 Gbps redundant carrier fibre plus pre-staged microwave PtP backup — diverse paths to multiple carrier handoffs.

Event SOC + NOC

Co-located security and network operations centre running Palo Alto Cortex XSIAM, with unified bridge cadence and shared escalation.

ADMCC-Aligned CCTV

Venue and event CCTV integration with retention, command-centre handoff and ADMCC-stipulated evidence handling.

Recurring-Event Operating Model

Same team, same runbook, same venue relationships year-over-year — operational learning compounded across five years.

Business Benefits

Zero broadcast-affecting incidents
Across five consecutive years of UFC events on the UAE production LAN.
Sub-2-hour pre-event validation
Full network and production LAN drift scan, config validation and rehearsal cycle.
40% event-week labour reduction
Move from build-every-time to validate-every-time cut event-week effort substantially.
Same team, year on year
Engineers on site in 2025 were on site in 2020 — operational continuity is the deliverable.

How It Works

A proven, repeatable delivery approach.

01

Day minus 4–3

Hardware check, config drift scan, production LAN walkthrough with UFC technical team.

02

Day minus 2–1

SSID validation, RF re-sweep, segment sanity check, full event-day failure rehearsal.

03

Event Day

Co-located NOC + SOC, six-hour bridge cadence from doors to broadcast wrap.

04

Post-Event

Hot wash within 2 hours, formal report within 3 working days, runbook updates for next event.

Relevant Industries

Sports & Combat Sports · Live Pay-Per-View Broadcast · Arena Venue Operations · Recurring Event Series · Concerts & Festivals · VIP Events

Frequently Asked Questions

Have you been the IT partner for every UFC event in the UAE since 2020?

Yes — we have run the IT operating model continuously from the first 2020 events through the most recent 2025 events. Same operating team carrying the institutional knowledge year over year.

What is the difference between event IT for a tournament like FIFA and a recurring series like UFC?

Tournament IT is a one-shot, multi-venue build under a tight window. Recurring-series IT is a near-permanent footprint that gets validated, rehearsed and re-validated for each event. The architectural shape is similar; the operating discipline and the cost model are different. A recurring series rewards continuity.

What separates the production LAN from the fan WiFi?

Entirely separate physical plant — separate switches, separate fibre runs, separate uplinks, separate firewall stack. The production LAN runs SMPTE-class broadcast traffic with sub-millisecond latency engineering and zero tolerance for jitter. They never touch each other anywhere in the stack.

How big is the on-site team for a UFC event?

Typically 8 to 12 engineers across NOC, SOC, wireless, production LAN and CCTV, plus a remote NOC backstopping from Abu Dhabi. The numbers reflect the recurring-engagement maturity — the first events ran with materially larger teams because more had to be built and validated each time.

How is the SOC structured during a UFC event?

Co-located with the NOC in the same operating room. Palo Alto Cortex XSIAM acts as the SIEM with feeds from perimeter, identity, network segments and CCTV. SOC operates on the same bridge cadence as the NOC for the full event window. The model has produced zero broadcast-impacting security incidents across five years.

Can the same model deliver for other recurring series — concerts, sports, awards?

Yes — and we run it for Saadiyat Nights, IIFA Awards, NBA Abu Dhabi Games and other recurring engagements with the same architecture and operating discipline. The model is portable across event types where the venue family is stable.

What is the lead time for engaging us on a similar multi-year programme?

For year one, four to six months is comfortable. From year two onwards the lead time shortens dramatically because most of the work is validation rather than design. Year-on-year programme renewal is a 30 to 45 day re-mobilisation.

Ready to get started?

Talk to our enterprise team for a free consultation and tailored proposal — typically within 48 hours.

Chat with us on WhatsApp