Building your first Activity Travel Protocol integration
The Activity Travel Protocol publishes its full specification at activitytravel.pro under Apache 2.0. The protocol is open. The question for a developer evaluating it is not whether they can access it — it is how they actually connect a system to it, and what they get when they do.
This post describes the three integration surfaces, what each one is optimised for, and what a first integration looks like at each tier.
three surfaces, one protocol
Most API standards offer one integration path: implement the API. The Activity Travel Protocol offers three surfaces, because the systems that need to integrate with travel bookings are architecturally very different from each other.
A hotel property management system written in Java fifteen years ago has different integration needs from a new AI travel agent built on a modern LLM stack. A solo developer building an activity discovery tool for a regional tourism board has different needs from an enterprise OTA deploying on managed hyperscale infrastructure. The protocol is designed to be the same protocol for all of them — but it would be impractical to require all of them to use the same integration path.
what each surface is
Surface 1: REST/Webhook. This is the primary surface for existing software systems. An OpenAPI 3.1 specification is the normative contract. Any system that can make HTTP calls and receive webhooks can integrate with the Activity Travel Protocol through this surface. Java and Python adopters — the enterprise OTA, the established tour operator, the property management system — use this surface at v1.0. The REST surface does not require an understanding of AI agent architecture or MCP. It requires an HTTP client and a database.
Surface 2: ATP MCP Server (@atp/mcp-server). This is the primary surface for AI agent integration. The MCP Server exposes the Booking Object lifecycle as a set of eight tools callable by any MCP-compatible AI agent. It handles authentication, mandate enforcement, and state machine validation — the AI agent does not need to implement any of this logic itself. A developer building an AI travel agent, a concierge chatbot, or a disruption management assistant uses this surface. The MCP Server runs at three deployment tiers: local stdio (development), Streamable HTTP with a sidecar container (production), and fully managed on Nvidia AI Grid infrastructure (hyperscale).
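The state machine validation the server performs can be pictured as a transition table: before a tool call mutates a Booking Object, the server checks that the requested transition is legal from the current state. A minimal sketch in Python — the state names here are illustrative assumptions, not the normative lifecycle from the spec:

```python
# Sketch of the transition check the MCP Server applies before any
# Booking Object state change. State names are illustrative only;
# the normative lifecycle is defined in the ATP specification.

ALLOWED_TRANSITIONS = {
    "draft":     {"confirmed", "cancelled"},
    "confirmed": {"disrupted", "cancelled", "completed"},
    "disrupted": {"confirmed", "cancelled"},
}

def validate_transition(current: str, target: str) -> bool:
    """Return True if the Booking Object may move from current to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

# A confirmed booking can be disrupted, but a completed one cannot be
# re-confirmed:
validate_transition("confirmed", "disrupted")   # True
validate_transition("completed", "confirmed")   # False
```

Because this check lives in the server, an AI agent that requests an illegal transition simply gets a refusal — it never needs to encode the lifecycle rules itself.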
Surface 3: @atp/llms-tooling (Prompt Library). This surface is for developers integrating LLMs with the protocol. It provides the system prompt templates that tell an AI agent who it is, what it is permitted to do, and how to reason about its Booking Object. The Prompt Library works in combination with the MCP Server — the server handles the protocol mechanics, the Prompt Library handles the LLM persona and instruction set.
what a first integration looks like
Scenario A: Existing software, REST surface.
A regional activity aggregator wants their catalogue to be discoverable by ATP-compatible booking systems. Their existing platform is a Python web application with a PostgreSQL database.
First integration: implement the Capability Declaration endpoint. This is a single GET endpoint that returns structured data about each activity in their catalogue — what it is, when it is available, what it requires from participants, what its commercial terms are. The OpenAPI 3.1 spec defines the schema. A developer familiar with Python REST APIs can implement a read-only Capability Declaration endpoint in a day. Once live, every ATP-compatible booking system can discover and query their activity catalogue.
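The shape of that response can be sketched as a plain payload builder. The field names below are assumptions made for illustration — the normative schema is the OpenAPI 3.1 spec, not this sketch:

```python
# Illustrative payload builder for one activity in a read-only
# Capability Declaration response. Field names are assumptions for
# this sketch; consult the OpenAPI 3.1 spec for the real schema.

def capability_declaration(activity: dict) -> dict:
    return {
        "id": activity["id"],
        "title": activity["title"],
        "availability": activity.get("availability", []),
        "participant_requirements": activity.get("requirements", []),
        "commercial_terms": {
            "price": activity["price"],
            "currency": activity["currency"],
        },
    }

decl = capability_declaration({
    "id": "act-001",
    "title": "Fjord kayak tour",
    "price": 95.0,
    "currency": "EUR",
})
```

In the aggregator's Python stack, this builder would sit behind the GET endpoint and be fed from the existing PostgreSQL catalogue tables.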
The second integration step is the webhook receiver: an endpoint that receives Booking Object state transition events for bookings that include the aggregator's activities. This is how the aggregator knows when a booking is confirmed, when a disruption is declared, when a cancellation is processed.
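The receiver itself is mostly dispatch. A minimal sketch, assuming hypothetical event type names — only the dispatch logic is shown, not the POST route it would sit behind:

```python
# Sketch of a webhook receiver's dispatch logic for Booking Object
# state transition events. Event type names are assumptions for this
# sketch, not the normative event vocabulary.

def handle_booking_event(event: dict) -> str:
    handlers = {
        "booking.confirmed": lambda e: f"booking {e['booking_id']} confirmed",
        "booking.disrupted": lambda e: f"disruption on {e['booking_id']}",
        "booking.cancelled": lambda e: f"booking {e['booking_id']} cancelled",
    }
    handler = handlers.get(event["type"])
    if handler is None:
        return "ignored"  # unknown event types are safe to skip
    return handler(event)
```

Ignoring unknown event types keeps the receiver forward-compatible: new event types added to the protocol later do not break the existing integration.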
No AI infrastructure required. No new technology choices. The aggregator's existing stack handles both integrations.
Scenario B: AI agent, MCP Server surface.
A boutique hotel wants to deploy an AI concierge that helps guests plan their stay — answering questions about their booking, collecting pre-trip information (dietary requirements, equipment sizes), and notifying them about logistics before arrival.
First integration: deploy the @atp/mcp-server package at Tier 1 (local stdio, for development). Connect an LLM to it using the Guest Agent persona template from @atp/llms-tooling. The LLM now has access to eight MCP tools that give it read and write access to the Booking Object — subject to the mandate scope issued at session initialisation.
A developer with Node.js experience and familiarity with MCP client architecture can run a working Guest Agent against a test Booking Object in an afternoon. The mandate enforcement, the Cedar policy evaluation, the NeMo Guardrails rails — these are handled by the server infrastructure, not by the developer.
Moving to Tier 2 (production, Streamable HTTP with authentication) adds OAuth 2.1 client credentials and the NeMo Guardrails sidecar container. The application logic does not change. The deployment configuration does.
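Conceptually, mandate enforcement means every tool call is checked against the scopes granted at session initialisation. A sketch of that check — `atp_search_activities` and `BOOKING_READ` appear in the protocol's own vocabulary, but the other tool and scope names here are assumptions for illustration:

```python
# Conceptual sketch of mandate-scope enforcement on MCP tool calls.
# atp_search_activities and BOOKING_READ come from the protocol;
# the remaining tool and scope names are illustrative assumptions.

TOOL_SCOPES = {
    "atp_search_activities": "BOOKING_READ",
    "atp_get_booking":       "BOOKING_READ",   # hypothetical tool name
    "atp_update_booking":    "BOOKING_WRITE",  # hypothetical tool name
}

def authorise(tool: str, granted_scopes: set[str]) -> bool:
    """Allow the call only if the tool's required scope was granted."""
    required = TOOL_SCOPES.get(tool)
    return required is not None and required in granted_scopes

# A session granted only read scope cannot call write tools:
authorise("atp_search_activities", {"BOOKING_READ"})  # True
authorise("atp_update_booking", {"BOOKING_READ"})     # False
```

In the real deployment this decision is made server-side by the Cedar policy evaluation, which is why the Guest Agent's application code never changes between tiers.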
Scenario C: Discovery tool, Prompt Library + REST.
A tourism board wants a consumer-facing activity discovery tool that recommends experiences in their region. They are not running a booking system — they want to surface what is available and let travellers decide.
First integration: call atp_search_activities through the MCP Server at read-only scope (BOOKING_READ only), or call the equivalent REST endpoint. The Discovery Agent persona template from @atp/llms-tooling provides the LLM instruction set for presenting search results correctly — including how to surface pre-arrangement requirements, how to flag eligibility constraints, and how to handle OCTO Bridge results alongside ATP-native activities.
This is a read-only integration. No booking state is modified. No mandate beyond BOOKING_READ is required.
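Before search results reach the LLM for presentation, the tool can annotate each one so the Discovery Agent persona has the flags it needs. A sketch under assumed result field names — the real result schema is defined by the spec, not this example:

```python
# Sketch of pre-processing search results for a Discovery Agent.
# Result field names (pre_arrangement_requirements, source) are
# assumptions for this sketch, not the normative schema.

def annotate_results(results: list[dict]) -> list[dict]:
    for r in results:
        # Flag activities that need something arranged before arrival.
        r["needs_pre_arrangement"] = bool(r.get("pre_arrangement_requirements"))
        # Distinguish ATP-native results from OCTO Bridge results.
        r["source"] = r.get("source", "atp-native")
    return results

annotated = annotate_results([
    {"title": "Glacier hike", "pre_arrangement_requirements": ["boot size"]},
    {"title": "City walking tour"},
])
```

The persona template then tells the LLM how to present these flags to the traveller; this code only makes sure the signal is there to present.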
the SDK package structure
All three surfaces are served by a ten-package SDK. The packages most relevant to a first integration:

- @atp/core — shared types, Booking Object schema, state machine
- @atp/mcp-server — MCP Server implementation for Surfaces 2 and 3
- @atp/rest-api — REST/Webhook surface for Surface 1
- @atp/llms-tooling — Prompt Library (persona templates, Windley context template)
- @atp/bridge-octo — OCTO Bridge adapter for connecting OCTO-compliant activity providers
- @atp/dev-tools — local development environment, test Booking Objects, mock mandate issuer
- @atp/interop-tests — interoperability test suite used for ATP-compatible certification
The SDK is TypeScript-first, with a Python analytics sidecar for data pipeline engineers. A Java client library, generated with OpenAPI Generator from the REST API spec, follows at v1.1.
The right first step for any integration is @atp/dev-tools: a local environment that runs a complete ATP stack — Security Kernel, Booking Object database, MCP Server, mock mandate issuer — in a single Docker Compose file. No cloud account required. A working local ATP environment in under ten minutes.
Read more in the spec: activitytravel.pro/layer2/ — discovery and capability surfaces | activitytravel.pro/layer3/ — Booking Object and MCP tools