The whiteboard that changed the conversation
The VP of Product and the VP of Engineering are standing in front of the same whiteboard. On one side: shareholders demanding AI-powered activity booking, higher average order values, and a meaningful answer to every competitor that has announced an AI travel assistant in the last eighteen months. On the other side: an engineering team that already runs at capacity, a margin structure that leaves almost no room for new compute spend at scale, and an activity supplier base numbering in the hundreds of thousands — each with its own data format, its own booking API, and its own definition of what 'available' means.
The two executives are not disagreeing. They are staring at the same problem from different angles. Both are right. And both are being asked to solve it before the next board meeting.
Two problems the standard playbook cannot fix
The first problem is compute economics. Building an AI system that assembles complex activity and accommodation itineraries before the booking is made is a fundamentally different calculation from the AI deployments that have worked so far. The AI needs to understand traveller intent, query availability across multiple supplier types, check compatibility between components, validate regulatory requirements for each jurisdiction, and present a coherent package, all within a response time a user will actually wait for. Each of those steps is an inference call. At millions of sessions per day, the compute cost is not a rounding error. It is a structural problem.
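As a rough illustration of why this is structural rather than marginal, the arithmetic below multiplies the per-session inference calls out to annual spend. Every figure here (calls per session, cost per call, session volume) is an assumed placeholder, not a measured value.

```python
# Illustrative only: per-session inference cost at scale, with assumed numbers.
CALLS_PER_SESSION = 5         # intent, availability, compatibility, compliance, assembly
COST_PER_CALL_USD = 0.002     # assumed blended cost of one inference call
SESSIONS_PER_DAY = 3_000_000  # assumed daily session volume

daily_cost = CALLS_PER_SESSION * COST_PER_CALL_USD * SESSIONS_PER_DAY
annual_cost = daily_cost * 365

print(f"Daily inference spend:  ${daily_cost:,.0f}")
print(f"Annual inference spend: ${annual_cost:,.0f}")
```

Even at these conservative placeholders, the spend is eight figures a year; halving the calls per session by moving work out of the inference layer halves it.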
The second problem is integration complexity. A typical OTA works with hotels in the tens of thousands. Activity suppliers — ski schools, local guides, equipment rental operators, transfer services, cultural experience providers — number in the hundreds of thousands globally, with no common standard, no shared data model, and no consistent API surface. Building AI-powered itinerary assembly on top of that supplier base means either custom integrations for every supplier category in every market, or treating unstructured supplier data as the input layer for AI reasoning — which pushes the complexity and cost directly into the inference layer, where it is most expensive.
Without the protocol, every cell in the matrix is a separate custom build:

| | Hotels | Ski Schools | Taxi / Transfer | Equipment | Guides |
|---|---|---|---|---|---|
| OTA A | Custom | Custom | Custom | Custom | Custom |
| OTA B | Custom | Custom | Custom | Custom | Custom |
| OTA C | Custom | Custom | Custom | Custom | Custom |

With the protocol, every cell is the same shared interface:

| | Hotels | Ski Schools | Taxi / Transfer | Equipment | Guides |
|---|---|---|---|---|---|
| OTA A | Protocol | Protocol | Protocol | Protocol | Protocol |
| OTA B | Protocol | Protocol | Protocol | Protocol | Protocol |
| OTA C | Protocol | Protocol | Protocol | Protocol | Protocol |
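The matrix above can be counted directly. Using illustrative figures that mirror the tables (3 OTAs, 5 supplier categories), the bilateral approach grows multiplicatively while the protocol approach grows additively:

```python
# Illustrative arithmetic for the integration matrix above. The counts mirror
# the tables; real supplier numbers are in the hundreds of thousands.
otas = 3
supplier_categories = 5

bilateral = otas * supplier_categories  # every OTA integrates every category
protocol = otas + supplier_categories   # each party integrates the protocol once

print(f"Custom integrations to build and maintain: {bilateral}")
print(f"Protocol integrations: {protocol}")
```

At realistic scale the gap is not 15 versus 8; it is hundreds of thousands of bespoke builds versus one integration per participant.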
The protocol answer: shift the complexity to the right layer
The Activity Travel Protocol addresses both problems at their root. On the integration side: every supplier that implements the protocol publishes a structured Capability Declaration. An OTA integrating the protocol does not build a custom integration for each supplier. It connects once to the protocol interface and reads every Capability Declaration through the same structure. The N×M bilateral integration matrix collapses to a single protocol integration per participant.
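A minimal sketch of what a Capability Declaration could look like, assuming a flat schema. The field names below are illustrative assumptions, not the protocol's published specification; the point is that every supplier category exposes the same structure.

```python
# Hypothetical Capability Declaration as a dataclass. Field names are
# assumptions for illustration, not the protocol's actual schema.
from dataclasses import dataclass

@dataclass
class CapabilityDeclaration:
    supplier_id: str
    category: str              # e.g. "ski_school", "transfer", "equipment"
    jurisdictions: list[str]   # where the supplier can legally operate
    booking_window_days: int   # how far ahead bookings are accepted
    capacity_per_slot: int

# The OTA reads every declaration through the same structure,
# regardless of supplier category:
ski_school = CapabilityDeclaration("sup-001", "ski_school", ["AT", "CH"], 180, 12)
transfer = CapabilityDeclaration("sup-002", "transfer", ["AT"], 30, 4)

for decl in (ski_school, transfer):
    print(decl.supplier_id, decl.category, decl.jurisdictions)
```

One parser, one validation path, one query surface, whatever the supplier sells.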
On the compute side: the protocol uses a formal state machine to manage the booking workflow. Availability checks, compatibility validation, regulatory compliance, and disruption response are all resolved deterministically by the state machine. They are not AI inference problems.
| Task | Without protocol — runs on OTA AI | With protocol — runs on state machine |
|---|---|---|
| Interpret traveller intent | AI inference call | AI inference call |
| Check supplier availability | AI queries unstructured data → expensive | Protocol Feasibility Check → deterministic |
| Validate multi-supplier compatibility | AI reasons across unstructured APIs → very expensive | Capability Declaration schema → deterministic |
| Manage state across booking lifecycle | AI monitors every transition → continuous cost | Protocol state machine → zero AI cost |
| Handle disruption at scale | AI evaluates each case → cost × volume | Disruption Event Declaration → rule-based |
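The deterministic column of the table can be sketched as a small state machine plus a rule-based Feasibility Check. The state names, transitions, and rules below are assumptions for illustration, not the protocol's specified lifecycle.

```python
# Minimal sketch: booking state machine + deterministic feasibility check.
# States and rules are illustrative assumptions, not the protocol spec.
from enum import Enum, auto

class BookingState(Enum):
    DRAFT = auto()
    FEASIBLE = auto()
    CONFIRMED = auto()
    DISRUPTED = auto()
    CANCELLED = auto()

# Allowed transitions -- enforced by lookup, no inference call required.
TRANSITIONS = {
    BookingState.DRAFT: {BookingState.FEASIBLE, BookingState.CANCELLED},
    BookingState.FEASIBLE: {BookingState.CONFIRMED, BookingState.CANCELLED},
    BookingState.CONFIRMED: {BookingState.DISRUPTED, BookingState.CANCELLED},
    BookingState.DISRUPTED: {BookingState.CONFIRMED, BookingState.CANCELLED},
    BookingState.CANCELLED: set(),
}

def transition(current: BookingState, target: BookingState) -> BookingState:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

def feasibility_check(capacity_left: int, lead_days: int, window_days: int) -> bool:
    """Deterministic Feasibility Check: pure rules, zero inference cost."""
    return capacity_left > 0 and 0 <= lead_days <= window_days

state = BookingState.DRAFT
if feasibility_check(capacity_left=3, lead_days=14, window_days=180):
    state = transition(state, BookingState.FEASIBLE)
    state = transition(state, BookingState.CONFIRMED)
print(state.name)
```

Every row in the right-hand column of the table reduces to lookups and rules like these; the AI spends its inference budget only on the one task that needs it, interpreting intent.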
What changes at the moment things go wrong
The compute and integration arguments hold at steady state. They become decisive under disruption. When a typhoon grounds flights across a region, or a ski resort closes unexpectedly mid-season, an OTA without a protocol-level disruption model faces a manual coordination problem at scale. The Activity Travel Protocol defines a Disruption Event Declaration: a structured signal that propagates through every affected Booking Object simultaneously, triggering the defined response protocol for each component. Disruption handling that scales with the event, not against it.
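One way to picture the fan-out: a Disruption Event Declaration matched against bookings by a simple rule and applied in a single pass. The field names and the region-based matching rule are assumptions for illustration; the key property is that propagation is rule-based rather than per-case inference.

```python
# Sketch of Disruption Event Declaration fan-out. Fields and the matching
# rule are illustrative assumptions, not the protocol's actual schema.
from dataclasses import dataclass

@dataclass
class Booking:
    booking_id: str
    region: str
    status: str = "confirmed"

@dataclass
class DisruptionEvent:
    event_id: str
    affected_region: str
    response: str  # the defined response protocol, e.g. "rebook", "refund"

def propagate(event: DisruptionEvent, bookings: list[Booking]) -> list[Booking]:
    """Apply the declared response to every affected booking in one pass."""
    affected = [b for b in bookings if b.region == event.affected_region]
    for b in affected:
        b.status = event.response
    return affected

bookings = [Booking("b1", "hokkaido"), Booking("b2", "kyushu"), Booking("b3", "hokkaido")]
typhoon = DisruptionEvent("evt-typhoon-01", "hokkaido", "rebook")
hit = propagate(typhoon, bookings)
print(len(hit), [b.booking_id for b in hit])
```

Ten affected bookings or ten thousand, the cost of the pass is a filter and an update, not an inference call per case.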
The window, and what it closes
At least one major OTA platform has a proprietary activity booking API in pilot. If that API reaches general availability and becomes the de facto standard for activity supplier connections, OTAs that did not build early will face a choice: integrate the proprietary API and accept the terms that come with it, or build bilateral integrations supplier by supplier at growing cost. That window is estimated at twelve to eighteen months.
The open alternative is the Activity Travel Protocol. Apache 2.0 licence. No single owner. Governance structure that allows any OTA to participate in the standard, not just consume it. The protocol is designed so that the OTAs that adopt it early shape it — not the other way around.
For the VP of Product: the answer to the shareholder question is not a bespoke AI system built on top of unstructured supplier data. It is a protocol that makes the supplier base structured, queryable, and bookable through a single integration. For the VP of Engineering: the answer to the compute cost question is putting the AI in the right place in the stack, and letting the protocol handle the rest. The whiteboard does not have to stay unsolved.