Arc 2 — Business Cases by Sector

Why every major OTA will need to support AI agent bookings

Tom Sato, Founding Maintainer, Activity Travel Protocol

The whiteboard that changed the conversation

The VP of Product and the VP of Engineering are standing in front of the same whiteboard. On one side: shareholders demanding AI-powered activity booking, higher average order values, and a meaningful answer to every competitor that has announced an AI travel assistant in the last eighteen months. On the other side: an engineering team that already runs at capacity, a margin structure that leaves almost no room for new compute spend at scale, and an activity supplier base numbering in the hundreds of thousands — each with its own data format, its own booking API, and its own definition of what 'available' means.

The two executives are not disagreeing. They are staring at the same problem from different angles. Both are right. And both are being asked to solve it before the next board meeting.

Two problems the standard playbook cannot fix

The first problem is compute economics. Building an AI system that assembles complex activity and accommodation itineraries pre-booking is a fundamentally different calculation from the AI deployments that have worked so far. The AI needs to understand traveller intent, query availability across multiple supplier types, check compatibility between components, validate regulatory requirements for each jurisdiction, and present a coherent package — all within a response time a user will actually tolerate. Each of those steps is an inference call. At millions of sessions per day, the compute cost is not a rounding error. It is a structural problem.

The second problem is integration complexity. A typical OTA works with hotels in the tens of thousands. Activity suppliers — ski schools, local guides, equipment rental operators, transfer services, cultural experience providers — number in the hundreds of thousands globally, with no common standard, no shared data model, and no consistent API surface. Building AI-powered itinerary assembly on top of that supplier base means either custom integrations for every supplier category in every market, or treating unstructured supplier data as the input layer for AI reasoning — which pushes the complexity and cost directly into the inference layer, where it is most expensive.

Diagram 1 — The N×M integration problem: today
          Hotels    Ski Schools   Taxi / Transfer   Equipment   Guides
OTA A     Custom    Custom        Custom            Custom      Custom
OTA B     Custom    Custom        Custom            Custom      Custom
OTA C     Custom    Custom        Custom            Custom      Custom
At real OTA scale — thousands of supplier types, dozens of markets — the integration matrix is unmanageable.
Diagram 2 — With the Activity Travel Protocol: one integration, full network
          Hotels     Ski Schools   Taxi / Transfer   Equipment   Guides
OTA A     Protocol   Protocol      Protocol          Protocol    Protocol
OTA B     Protocol   Protocol      Protocol          Protocol    Protocol
OTA C     Protocol   Protocol      Protocol          Protocol    Protocol
Feasibility checking and workflow orchestration run through the protocol's state machine — not through the OTA's AI inference layer.

The protocol answer: shift the complexity to the right layer

The Activity Travel Protocol addresses both problems at their root. On the integration side: every supplier that implements the protocol publishes a structured Capability Declaration. An OTA integrating the protocol does not build a custom integration for each supplier. It connects once to the protocol interface and reads every Capability Declaration through the same structure. The N×M bilateral integration matrix collapses to one.
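The "connect once, read every supplier through the same structure" idea can be sketched in a few lines. This is an illustrative assumption, not the protocol's published schema: the `CapabilityDeclaration` shape, the supplier-type names, and the `suppliersByType` helper are all hypothetical.

```typescript
// Hypothetical sketch of a Capability Declaration. The real protocol schema
// is not shown in this article; field names here are assumptions.
interface CapabilityDeclaration {
  supplierId: string;
  supplierType: "hotel" | "ski_school" | "transfer" | "equipment" | "guide";
  services: { serviceId: string; name: string; jurisdictions: string[] }[];
  bookingEndpoint: string; // one protocol-defined interface, not a bespoke API
}

// One integration: every supplier, regardless of category, is read
// through the same structure, so filtering is a plain query, not a
// per-supplier custom adapter.
function suppliersByType(
  declarations: CapabilityDeclaration[],
  type: CapabilityDeclaration["supplierType"],
): CapabilityDeclaration[] {
  return declarations.filter((d) => d.supplierType === type);
}

const declarations: CapabilityDeclaration[] = [
  {
    supplierId: "ski-001",
    supplierType: "ski_school",
    services: [{ serviceId: "lesson-am", name: "Morning group lesson", jurisdictions: ["AT"] }],
    bookingEndpoint: "/protocol/book",
  },
  {
    supplierId: "hot-042",
    supplierType: "hotel",
    services: [{ serviceId: "room-dbl", name: "Double room", jurisdictions: ["AT"] }],
    bookingEndpoint: "/protocol/book",
  },
];

console.log(suppliersByType(declarations, "ski_school").length); // 1
```

The point of the sketch is structural: a ski school and a hotel arrive through the same declaration shape, so adding a new supplier category does not add a new integration.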

On the compute side: the protocol uses a formal state machine to manage booking workflow. Availability checking, compatibility validation, regulatory compliance, and disruption handling are all handled deterministically by the state machine. They are not AI inference problems.
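A deterministic booking state machine of the kind described above can be reduced to a transition table. The state names below are assumptions for illustration; the article does not enumerate the protocol's actual states.

```typescript
// Illustrative sketch of a booking-workflow state machine. State names
// ("draft", "feasibility_checked", ...) are hypothetical.
type BookingState =
  | "draft"
  | "feasibility_checked"
  | "confirmed"
  | "disrupted"
  | "cancelled";

// Legal transitions are a fixed table: workflow correctness is a lookup,
// not an AI inference call.
const transitions: Record<BookingState, BookingState[]> = {
  draft: ["feasibility_checked", "cancelled"],
  feasibility_checked: ["confirmed", "cancelled"],
  confirmed: ["disrupted", "cancelled"],
  disrupted: ["confirmed", "cancelled"],
  cancelled: [],
};

function advance(current: BookingState, next: BookingState): BookingState {
  if (!transitions[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}

let state: BookingState = "draft";
state = advance(state, "feasibility_checked");
state = advance(state, "confirmed");
console.log(state); // "confirmed"
```

Because illegal transitions are rejected by the table, no model has to reason about whether a lifecycle step is valid; the guarantee is structural.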

Diagram 3 — Where the compute goes
Task                                    Without protocol — runs on OTA AI                      With protocol — runs on state machine
Interpret traveller intent              AI inference call                                      AI inference call
Check supplier availability             AI queries unstructured data → expensive               Protocol Feasibility Check → deterministic
Validate multi-supplier compatibility   AI reasons across unstructured APIs → very expensive   Capability Declaration schema → deterministic
Manage state across booking lifecycle   AI monitors every transition → continuous cost         Protocol state machine → zero AI cost
Handle disruption at scale              AI evaluates each case → cost × volume                 Disruption Event Declaration → rule-based
AI inference costs fall to the minimum necessary. Workflow correctness is guaranteed by protocol rules, not by AI reasoning.
The AI interprets what the traveller wants. The protocol handles everything that happens next.

What changes at the moment things go wrong

The compute and integration arguments hold at steady state. They become decisive under disruption. When a typhoon grounds flights across a region, or a ski resort closes unexpectedly mid-season, an OTA without a protocol-level disruption model faces a manual coordination problem at scale. The Activity Travel Protocol defines a Disruption Event Declaration: a structured signal that propagates through every affected Booking Object simultaneously, triggering the defined response protocol for each component. Disruption handling that scales with the event, not against it.
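The fan-out behaviour of a Disruption Event Declaration can be sketched as a single structured signal applied over all affected bookings. The field names (`region`, `affectedTypes`) and the `propagate` function are hypothetical; the article does not specify the declaration's schema.

```typescript
// Hypothetical shapes for a disruption signal and an affected booking.
interface DisruptionEvent {
  eventId: string;
  region: string;          // e.g. the region a typhoon grounds flights in
  affectedTypes: string[]; // supplier categories the event touches
}

interface BookingObject {
  bookingId: string;
  region: string;
  supplierType: string;
  status: "confirmed" | "disrupted";
}

// One structured signal updates every affected Booking Object in a single
// deterministic pass — no per-case AI evaluation, so handling scales with
// the event, not against it.
function propagate(event: DisruptionEvent, bookings: BookingObject[]): BookingObject[] {
  return bookings.map((b) =>
    b.region === event.region && event.affectedTypes.includes(b.supplierType)
      ? { ...b, status: "disrupted" }
      : b,
  );
}

const bookings: BookingObject[] = [
  { bookingId: "b1", region: "hokkaido", supplierType: "ski_school", status: "confirmed" },
  { bookingId: "b2", region: "kyushu", supplierType: "hotel", status: "confirmed" },
];

const updated = propagate(
  { eventId: "e1", region: "hokkaido", affectedTypes: ["ski_school"] },
  bookings,
);
// b1 is marked disrupted; b2, outside the affected region, is untouched.
```

Each disrupted booking would then re-enter the workflow through the state machine's defined disruption transitions rather than through ad hoc manual coordination.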

The window, and what it closes

At least one major OTA platform has a proprietary activity booking API in pilot. If that API reaches general availability and becomes the de facto standard for activity supplier connections, OTAs that did not build early will face a choice: integrate the proprietary API and accept the terms that come with it, or build bilateral integrations supplier by supplier at growing cost. That window is estimated at twelve to eighteen months.

The open alternative is the Activity Travel Protocol. Apache 2.0 licence. No single owner. Governance structure that allows any OTA to participate in the standard, not just consume it. The protocol is designed so that the OTAs that adopt it early shape it — not the other way around.

For the VP of Product: the answer to the shareholder question is not a bespoke AI system built on top of unstructured supplier data. It is a protocol that makes the supplier base structured, queryable, and bookable through a single integration. For the VP of Engineering: the answer to the compute cost question is putting the AI in the right place in the stack, and letting the protocol handle the rest. The whiteboard does not have to stay unsolved.