martol-client

Connect AI agents to Martol chat rooms via WebSocket + MCP

A Python agent wrapper that bridges language models to Martol collaborative workspaces. Supports Anthropic Claude, OpenAI, any OpenAI-compatible API (Ollama, Groq, vLLM), Claude Code, and OpenAI Codex as subprocesses.

                      CLI / .env
                          |
                     martol
                          |
            +--------- Wrapper ---------+
            |                           |
      WebSocket                     MCP HTTP
   (real-time I/O)               (/mcp/v1 tools)
            |                           |
      listen / send               action_submit
      typing indicators           action_status/confirm
            |                           |
            +------- Martol Server -----+

Overview

martol-client uses a dual-channel architecture to connect AI agents to chat rooms:

  • WebSocket — real-time message listening, sending, and typing indicators
  • MCP HTTP (/mcp/v1) — structured actions that go through the server's role × risk approval matrix

The agent resolves its own identity on startup via chat_who, seeds conversation context via chat_resync, then listens for @mentions or replies. When triggered, it calls the configured LLM provider and relays responses back to the chat room.

Three operational modes are available: Provider Mode (direct LLM API calls), Claude Code Mode (Claude Code subprocess with project access), and Codex Mode (OpenAI Codex subprocess via MCP server).

Quickstart

Prerequisites

  • Python 3.10+
  • A Martol room with an agent API key (created via the Martol web UI under Settings → Agents)
  • An LLM API key (Anthropic, OpenAI, or compatible)

Setup

  1. Install
    pip install "martol-agent[claude-code] @ git+https://github.com/nyem69/martol-client.git"
  2. Create your environment file
    touch .env && chmod 600 .env
  3. Configure connection
    # .env
    MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
    MARTOL_API_KEY=mtl_your_agent_api_key
    AI_PROVIDER=anthropic
    AI_API_KEY=sk-ant-...
  4. Run the agent
    martol
Note: The agent announces itself on connect with an AI disclosure message. It responds when mentioned with @AgentName in the chat room.

Architecture

Startup Sequence

  1. Parse CLI flags / load .env (or .env.<profile>)
  2. Warn if .env has overly permissive file permissions
  3. Create wrapper — AgentWrapper (provider), ClaudeCodeWrapper (claude-code), or CodexWrapper (codex)
  4. Connect WebSocket with TLS validation and API key auth
  5. Identity resolution — call chat_who via MCP to resolve agent_user_id, agent_name, room name, and member opt-out preferences
  6. Context seeding — call chat_resync to fetch recent messages
  7. Send AI disclosure message to the room
  8. Enter WebSocket listen loop

Response Flow

WS message → _should_respond() → LLM call → tool loop (MCP) → send reply

The agent responds if the message is an @mention, a reply to one of the agent's own messages, or (in all mode) any message from another member. The agent's own messages are always ignored to prevent self-response loops.
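A minimal sketch of this decision logic (the field names, such as replyTo_sender_id, are hypothetical and for illustration only):

```python
def should_respond(msg: dict, agent_id: str, agent_name: str, mode: str = "mention") -> bool:
    """Decide whether the agent should answer a chat message (sketch)."""
    # Never answer our own messages -- prevents self-response loops.
    if msg.get("sender_id") == agent_id:
        return False
    if mode == "all":
        return True
    # Respond to explicit @mentions...
    if f"@{agent_name}" in msg.get("body", ""):
        return True
    # ...or to replies targeting one of the agent's own messages.
    return msg.get("replyTo_sender_id") == agent_id
```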

Reconnection

On disconnect, the agent reconnects with exponential backoff: 1s → 2s → 4s → ... → 30s (capped), up to 20 attempts. A lastKnownId query parameter resumes from the last received sequence ID. If the server returns close code 4001 (API key revoked), the agent stops permanently.
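The backoff and resume behavior can be sketched as follows (a sketch; the real client's internals may differ):

```python
def backoff_delays(max_attempts: int = 20, cap: float = 30.0):
    """Yield reconnect delays: 1s, 2s, 4s, ... capped at `cap`, up to `max_attempts` tries."""
    delay = 1.0
    for _ in range(max_attempts):
        yield min(delay, cap)
        delay *= 2

def resume_url(ws_url: str, last_known_id=None) -> str:
    """Append lastKnownId so the server can replay messages missed while disconnected."""
    if last_known_id is None:
        return ws_url
    sep = "&" if "?" in ws_url else "?"
    return f"{ws_url}{sep}lastKnownId={last_known_id}"
```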


Environment Variables

All options can be set via environment variables or CLI flags. CLI takes precedence.

Connection

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| MARTOL_WS_URL | --url | required | WebSocket URL for the room |
| MARTOL_API_KEY | --api-key | required* | Agent API key |
| MARTOL_API_KEY_FILE | --api-key-file | optional | Path to a file containing the API key (preferred over the env var) |
| MARTOL_MCP_URL | --mcp-url | Derived | Optional. MCP HTTP base URL; auto-derived from the WS URL if omitted |
| MARTOL_HMAC_SECRET | --hmac-secret | optional | HMAC secret for message integrity verification |
| ALLOW_UNSIGNED_MESSAGES | --allow-unsigned | false | Optional. Accept unsigned messages when HMAC is configured |

AI Provider

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| AI_PROVIDER | --provider | anthropic | Optional. anthropic or openai |
| AI_API_KEY | --ai-key | required* | LLM provider API key (provider mode only) |
| AI_MODEL | --model | Provider default | Optional. Model ID override |
| AI_BASE_URL | --ai-base-url | optional | OpenAI-compatible base URL (Ollama, Groq, vLLM) |

Behavior

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| CONTEXT_MESSAGES | --context | 50 | Rolling context window size |
| RESPOND_MODE | --respond | mention | mention (only @mentions) or all |
| LLM_RATE_LIMIT | --rate-limit | 10 | Max LLM API calls per minute |
| AGENT_MODE | --mode | provider | provider, claude-code, or codex |

Claude Code Mode

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| CLAUDE_CODE_MODEL | --claude-model | Claude default | Model override for Claude Code |
| CLAUDE_CODE_PERMISSION_MODE | --claude-permission-mode | default | default, acceptEdits, or bypassPermissions |
| CLAUDE_CODE_ALLOWED_TOOLS | --claude-allowed-tools | Safe defaults | Comma-separated whitelist of auto-approved tools |
| CLAUDE_CODE_DENY_PATHS | — | .env*,*.key,*.pem,*.p12 | Glob patterns for blocked file paths |
| CLAUDE_CODE_APPROVAL_TIMEOUT | — | 60 | Seconds to wait for approval |

Named Profiles

Run multiple agents with different configurations using --profile <name>. This loads .env.<name> instead of .env.

# Run different agents from the same directory
martol --profile claude      # loads .env.claude
martol --profile gpt         # loads .env.gpt
martol --profile ollama      # loads .env.ollama
martol --profile claude-code # loads .env.claude-code

Example: Anthropic Claude

# .env.claude
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AI_PROVIDER=anthropic
AI_API_KEY=sk-ant-...
RESPOND_MODE=mention

Example: Local Ollama

# .env.ollama
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AI_PROVIDER=openai
AI_API_KEY=ollama
AI_MODEL=qwen3:14b
AI_BASE_URL=http://localhost:11434/v1

Example: Claude Code

# .env.claude-code
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AGENT_MODE=claude-code
CLAUDE_CODE_ALLOWED_TOOLS=Read,Grep,Glob,LS
RESPOND_MODE=mention

Authentication

Agent API Keys

Agents authenticate with mtl_-prefixed API keys created in the Martol web UI. Each agent is a synthetic user with the agent role, bound to a specific room.

x-api-key header → Better Auth verify → KV revocation check → AgentContext

The key is sent in two places for redundancy:

  • Query parameter: ?apiKey=mtl_... (WebSocket connection)
  • Header: x-api-key: mtl_... (WebSocket + MCP HTTP)
Security: Prefer --api-key-file or the MARTOL_API_KEY environment variable over passing the key via the --api-key CLI flag, which is visible in process listings.

Creating an Agent

In the Martol web UI, room owners and leads can create agents under Settings → Agents → Create Agent. This generates a synthetic user with role agent and returns a one-time-visible mtl_ API key.

Alternatively, use the REST API:

POST /api/agents
Content-Type: application/json
Cookie: (session cookie)

{ "name": "my-bot" }

# Response:
{
  "ok": true,
  "data": {
    "agentUserId": "uuid",
    "name": "my-bot",
    "key": "mtl_..."
  }
}

Provider Mode

The default mode. Calls an LLM API directly (Anthropic or OpenAI-compatible) and relays responses to the chat room. Tool calls (action_submit/action_status/action_confirm) are executed via MCP HTTP with up to 5 iterations.

Anthropic Claude

martol \
  --provider anthropic \
  --ai-key sk-ant-... \
  --model claude-sonnet-4-20250514

OpenAI

martol \
  --provider openai \
  --ai-key sk-... \
  --model gpt-4o

OpenAI-Compatible (Ollama, Groq, vLLM)

# Local Ollama
martol \
  --provider openai \
  --ai-key ollama \
  --ai-base-url http://localhost:11434/v1 \
  --model llama3.3

# Groq
martol \
  --provider openai \
  --ai-key gsk_... \
  --ai-base-url https://api.groq.com/openai/v1 \
  --model llama-3.3-70b-versatile

System Prompt

The agent builds a system prompt containing its name, room name, member count, instructions for tool use, and security rules. User messages are pseudonymized (User-1, User-2) for privacy and wrapped in XML tags:

<chat_message sender="User-1">Hey @Bot, review this PR</chat_message>

Messages from users who opted out of AI context are excluded entirely.
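A sketch of how this context construction could look (the message shape and helper name are assumptions, not the library's actual API):

```python
from html import escape

def build_context(messages, opted_out: set) -> str:
    """Pseudonymize senders as User-1, User-2, ... and wrap each message in XML tags.

    Messages from opted-out users are dropped entirely.
    """
    alias: dict = {}
    lines = []
    for m in messages:
        sid = m["sender_id"]
        if sid in opted_out:
            continue  # excluded from AI context by the user's preference
        if sid not in alias:
            alias[sid] = f"User-{len(alias) + 1}"
        lines.append(f'<chat_message sender="{alias[sid]}">{escape(m["body"])}</chat_message>')
    return "\n".join(lines)
```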

Tool Loop

When the LLM returns tool calls, the agent executes them via MCP HTTP, feeds results back to the LLM, and repeats. This continues for up to 5 iterations. Tool arguments are validated against a whitelist of known fields, and results are truncated to 8,000 characters.
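In outline, the loop looks something like this (llm_call and execute_tool are hypothetical interfaces, not the library's real ones):

```python
MAX_ITERATIONS = 5
RESULT_LIMIT = 8000  # characters; tool output is truncated to this length

def run_tool_loop(llm_call, execute_tool, messages: list) -> str:
    """Call the LLM, execute any returned tool calls via MCP, feed results back (sketch).

    `llm_call(messages)` is assumed to return (text, tool_calls).
    """
    for _ in range(MAX_ITERATIONS):
        text, tool_calls = llm_call(messages)
        if not tool_calls:
            return text  # final answer, no more tools requested
        for call in tool_calls:
            result = execute_tool(call["tool"], call["arguments"])
            # Truncate oversized tool output before it re-enters the context.
            messages.append({"role": "tool", "content": str(result)[:RESULT_LIMIT]})
    return text  # iteration budget exhausted; return the last text
```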

Claude Code Mode

Bypasses the LLM provider strategy entirely. Instead, a persistent Claude Code subprocess is managed via the Agent SDK. Chat messages become prompts; Claude Code has full access to the local project directory.

Note: Claude Code support is included in the default install.
Anthropic models only: Claude Code mode requires Anthropic models. The SDK uses the Anthropic Messages API (/v1/messages), which is not available on OpenAI-compatible endpoints (Ollama, vLLM, etc.). For local models, use the regular provider mode with AI_PROVIDER=openai.
# Run against a project directory
cd /path/to/your/project
martol --profile claude-code

Permission Modes

| Mode | Behavior |
|---|---|
| default | Every tool call is posted to the chat room for approval via action_submit |
| acceptEdits | File edits auto-approved; destructive operations still require approval |
| bypassPermissions | All tool calls auto-approved. Requires --bypass-permissions-confirm |
Warning: bypassPermissions grants unrestricted shell and filesystem access to chat room users. Only use it in trusted, isolated environments.

Tool Whitelist

Default safe tools (when no whitelist specified): Read, Grep, Glob, LS, WebSearch, WebFetch. Wildcards are supported:

CLAUDE_CODE_ALLOWED_TOOLS=Read,Grep,Glob,LS,mcp__playwright__*
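Wildcard matching like this can be implemented with the stdlib fnmatch module; a sketch (not the library's actual code):

```python
from fnmatch import fnmatch

def tool_allowed(tool_name: str, whitelist: list) -> bool:
    """Check a tool name against the whitelist; '*' wildcards are supported."""
    return any(fnmatch(tool_name, pattern) for pattern in whitelist)

# Parsed from the comma-separated env var shown above.
ALLOWED = "Read,Grep,Glob,LS,mcp__playwright__*".split(",")
```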

Approval Flow

Tool request → Deny-list check → SSRF check → Whitelist check → action_submit → Poll (3s) → Allow / Deny

Codex Mode

Runs OpenAI Codex as an MCP server subprocess. Chat messages become prompts; Codex has full access to the local project directory within its configured sandbox.

Prerequisites: the codex CLI must be on PATH (npm install -g @openai/codex) and authenticated (codex login or the OPENAI_API_KEY env var). No extra Python dependencies are needed.
# Run against a project directory
cd /path/to/your/project
martol --profile codex

Sandbox Modes

| Mode | Behavior |
|---|---|
| read-only | Can read files but not write or execute shell commands |
| workspace-write | Can read and write files in the project directory |
| danger-full-access | Unrestricted filesystem and shell access |

Approval Policies

| Policy | Behavior |
|---|---|
| on-failure | Auto-approve commands; ask on failure (default) |
| on-request | Ask before each shell command |
| untrusted | Commands treated as untrusted; extra sandboxing |
| never | Never ask; auto-approve all commands |

Codex Configuration

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| CODEX_MODEL | --codex-model | Codex default | Model override (e.g. o3, o4-mini) |
| CODEX_SANDBOX | --codex-sandbox | read-only | Sandbox mode for file/shell access |
| CODEX_APPROVAL_POLICY | --codex-approval-policy | on-failure | Shell command approval policy |

Example Profile

# .env.codex
AGENT_MODE=codex
CODEX_SANDBOX=read-only
CODEX_APPROVAL_POLICY=on-failure

WebSocket Protocol

Connect to wss://<host>/api/rooms/<roomId>/ws?apiKey=<key> with the x-api-key header. Messages are JSON-encoded.

Client → Server

| Type | Payload | Description |
|---|---|---|
| message | { body, localId, replyTo? } | Send a chat message |
| typing | { isTyping: bool } | Typing indicator |

Server → Client

| Type | Payload | Description |
|---|---|---|
| message | { message: { serverSeqId, sender_id, sender_name, sender_role, body, replyTo?, _hmac? } } | Chat message from a room member |
| history | { messages: [...] } | Delta sync on reconnect |
| id_map | { mappings: [{ localId, serverSeqId, dbId }] } | Maps a client localId to server IDs (batched) |
| typing | { senderId, senderName, active } | Typing indicator from another member |
| presence | { senderId, senderName, senderRole, status } | Online/offline status change |
| roster | { members: [{ id, name, role }] } | Full member-list update |
| error | { code, message } | Error notification |

Error Codes

| Code | Meaning |
|---|---|
| rate_limited | Too many messages in the time window |
| room_full | Room has reached capacity |
| invalid_message | Malformed or oversized message |
| unauthorized | Invalid or expired credentials |
| resync_required | Client should re-fetch history |

WebSocket Close Code 4001

If the server closes the WebSocket with code 4001, the API key has been revoked. The agent stops permanently and does not attempt to reconnect.

MCP HTTP Protocol

All structured actions go through the MCP endpoint at POST /mcp/v1. The base URL is derived from the WebSocket URL (wss:// becomes https://).
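A sketch of that derivation (the path handling is an assumption; the real client may keep part of the path):

```python
from urllib.parse import urlsplit, urlunsplit

def derive_mcp_base(ws_url: str) -> str:
    """Derive the MCP HTTP base URL from the WebSocket URL (wss:// -> https://, ws:// -> http://)."""
    parts = urlsplit(ws_url)
    scheme = "https" if parts.scheme == "wss" else "http"
    # Keep only scheme + host; the /mcp/v1 path is appended per request.
    return urlunsplit((scheme, parts.netloc, "", "", ""))
```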

Request Format

POST {mcp_url}/mcp/v1
x-api-key: mtl_...
Content-Type: application/json

{
  "tool": "chat_who",
  "arguments": {}
}

Response Envelope

// Success
{ "ok": true, "data": { ... } }

// Error
{ "ok": false, "error": "description", "code": "error_code" }

Payload size limit is 65,536 bytes. Requests are validated with Zod schemas server-side. The client enforces a 30-second timeout and blocks redirects.

Tools Reference

Eight tools are available via MCP. The agent exposes three to the LLM (action_submit, action_status, and action_confirm); the others are used internally for context management.

| Tool | Arguments | Purpose |
|---|---|---|
| chat_who | none | Resolve agent identity, room name, member list, opt-out preferences |
| chat_resync | limit? (1-200) | Fetch the last N messages (context seeding) |
| chat_read | limit? | Cursor-based message reading |
| chat_send | body (max 32KB), reply_to? | Send a message as the agent |
| chat_join | none | Join the room (idempotent, 1 min cooldown) |
| action_submit | action_type, risk_level, trigger_message_id, description, payload?, simulation? | Submit an action for human approval |
| action_status | action_id | Poll approval status |
| action_confirm | action_id | Confirm execution of an approved action |

Action Types

question_answer · code_review · code_write · code_modify · code_delete · deploy · config_change

Risk Levels & Approval Matrix

| Risk | Owner | Lead | Member | Viewer |
|---|---|---|---|---|
| low | approve | approve | — | — |
| medium | approve | approve | — | — |
| high | approve | — | — | — |

Agents cannot approve or reject their own actions.
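Putting the three exposed tools together, an action's lifecycle from the agent's side can be sketched as follows (call_mcp is a hypothetical helper returning the response envelope):

```python
import time

def run_action(call_mcp, action: dict, poll_interval: float = 3.0, timeout: float = 60.0):
    """Submit an action, poll until a human approves or rejects it, then confirm (sketch)."""
    resp = call_mcp("action_submit", action)
    action_id = resp["data"]["action_id"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_mcp("action_status", {"action_id": action_id})["data"]["status"]
        if status == "approved":
            return call_mcp("action_confirm", {"action_id": action_id})
        if status == "rejected":
            return None
        time.sleep(poll_interval)  # 3 s between polls by default
    return None  # approval timed out
```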

Simulation Payloads

Actions can include optional simulation objects for richer approval UIs: code_diff, shell_preview, api_call, file_ops, or custom.


TLS Enforcement

The client enforces TLS for all non-local connections:

  • WebSocket must use wss:// unless connecting to localhost, 127.0.0.1, or ::1
  • MCP HTTP must use https:// for non-local hosts
  • Unencrypted connections to remote hosts are rejected at startup
# Production (required)
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<id>/ws

# Local development (allowed)
MARTOL_WS_URL=ws://localhost:3000/api/rooms/<id>/ws

HMAC Verification

When MARTOL_HMAC_SECRET is set, the client verifies the integrity of every incoming WebSocket message using HMAC-SHA256.

  1. Extract the _hmac field (base64-encoded) from the message
  2. Reconstruct the original JSON (before _hmac was appended by the server)
  3. Compute HMAC-SHA256 using the shared secret
  4. Compare using constant-time comparison — reject on mismatch

Messages without _hmac are dropped unless --allow-unsigned is set (migration mode for rolling out HMAC).
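A sketch of the verification steps above (the exact JSON canonicalization the server signs is an assumption here):

```python
import base64
import hashlib
import hmac
import json

def verify_message(raw: dict, secret: bytes) -> bool:
    """Verify a WebSocket message's HMAC-SHA256 signature (sketch)."""
    msg = dict(raw)
    sig_b64 = msg.pop("_hmac", None)  # step 1: extract the base64 signature
    if sig_b64 is None:
        return False  # unsigned; dropped unless --allow-unsigned
    # Step 2-3: reconstruct the signed JSON and compute HMAC-SHA256.
    payload = json.dumps(msg, separators=(",", ":"), sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    # Step 4: constant-time comparison.
    return hmac.compare_digest(expected, base64.b64decode(sig_b64))
```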

Setup: Set MARTOL_HMAC_SECRET to the same value as HMAC_SIGNING_SECRET on the Martol server.

SSRF & Deny-Lists

SSRF Protection (Claude Code Mode)

WebFetch tool calls are checked against private/internal IP ranges. The following are blocked:

  • 10.0.0.0/8 — private
  • 172.16.0.0/12 — private
  • 192.168.0.0/16 — private
  • 127.0.0.0/8 — loopback
  • 169.254.0.0/16 — link-local (cloud metadata)
  • ::1 — IPv6 loopback

Domain names (not raw IPs) pass the SSRF check and proceed to the normal approval flow.
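The range check can be sketched with the stdlib ipaddress module:

```python
import ipaddress

BLOCKED_NETWORKS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",   # private
    "127.0.0.0/8", "169.254.0.0/16", "::1/128",        # loopback / link-local
)]

def is_blocked_ip(host: str) -> bool:
    """Return True if `host` is a raw IP inside a private/internal range (sketch).

    Domain names are not resolved here; they pass through to the approval flow.
    """
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; let the normal approval flow decide
    return any(addr in net for net in BLOCKED_NETWORKS)
```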

Path Deny-List

File access tools (Read, Write, Edit) are checked against glob patterns before any approval flow:

# Default deny patterns
CLAUDE_CODE_DENY_PATHS=.env*,*.key,*.pem,*.p12

Matching files are immediately denied — they never reach the chat room for approval.
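A sketch of the deny-list check (matching on the basename is an assumption; the real client may match full paths):

```python
from fnmatch import fnmatch
from pathlib import PurePath

# Default deny patterns from CLAUDE_CODE_DENY_PATHS.
DENY_PATTERNS = [".env*", "*.key", "*.pem", "*.p12"]

def path_denied(path: str) -> bool:
    """Deny file access immediately if the basename matches any deny glob."""
    name = PurePath(path).name
    return any(fnmatch(name, pat) for pat in DENY_PATTERNS)
```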

Tool Argument Validation

In provider mode, tool arguments are validated against a whitelist of known fields per tool. Unknown fields are silently stripped to prevent injection:

# Only these fields pass through for action_submit
action_type, risk_level, description, payload, trigger_message_id
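The stripping itself is simple; a sketch (the field sets shown for action_status and action_confirm are assumptions based on the tools table):

```python
ALLOWED_FIELDS = {
    "action_submit": {"action_type", "risk_level", "description", "payload", "trigger_message_id"},
    "action_status": {"action_id"},
    "action_confirm": {"action_id"},
}

def sanitize_args(tool: str, args: dict) -> dict:
    """Silently drop any argument fields not in the per-tool whitelist."""
    allowed = ALLOWED_FIELDS.get(tool, set())
    return {k: v for k, v in args.items() if k in allowed}
```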

Server API Surface

The Martol server exposes REST endpoints for human users and MCP HTTP for agents. Below is the complete route map relevant to client integration.

| Path | Method | Auth | Purpose |
|---|---|---|---|
| /mcp/v1 | POST | API key | MCP tool dispatch (agent communication) |
| /api/agents | GET, POST | Session | List / create agents |
| /api/agents/[id] | DELETE | Session | Revoke and delete an agent |
| /api/actions | GET | Session | List pending / recent actions |
| /api/actions/[id]/approve | POST | Session | Approve an action (role-gated) |
| /api/actions/[id]/reject | POST | Session | Reject an action |
| /api/messages | GET | Session | Cursor-paginated message history |
| /api/rooms/[id]/ai-opt-out | PATCH | Session | Toggle AI opt-out for a user |

Message Types

WebSocket Message Payload

{
  "serverSeqId": 12345,
  "sender_id":   "uuid-of-sender",
  "sender_name": "alice",
  "sender_role": "owner",     // owner | lead | member | viewer | agent
  "body":        "Hey @Bot, help me",
  "replyTo":     12340,        // optional, serverSeqId of parent
  "_hmac":       "base64..."   // if HMAC enabled
}

MCP chat_who Response

{
  "ok": true,
  "data": {
    "room_name":    "dev-room",
    "self_user_id": "agent-uuid",
    "members": [
      {
        "user_id": "uuid",
        "name":    "alice",
        "role":    "owner",
        "ai_opt_out": false
      }
    ]
  }
}

MCP action_submit Response

{
  "ok": true,
  "data": {
    "action_id": 42,
    "status":    "pending",
    "server_risk": "medium"  // server may override
  }
}

LLM Providers

| Provider | Flag | Default Model | Notes |
|---|---|---|---|
| Anthropic | --provider anthropic | claude-sonnet-4-20250514 | Native Anthropic SDK |
| OpenAI | --provider openai | gpt-4o | OpenAI SDK |
| Ollama | --provider openai | — | --ai-base-url http://localhost:11434/v1 |
| Groq | --provider openai | — | --ai-base-url https://api.groq.com/openai/v1 |
| Together | --provider openai | — | --ai-base-url https://api.together.xyz/v1 |
| vLLM | --provider openai | — | --ai-base-url http://localhost:8000/v1 |

Adding a New Provider

  1. Create martol_agent/providers/<name>.py implementing the LLMProvider ABC
  2. Implement chat(), format_tool_result(), and format_assistant_message()
  3. Register in create_provider() factory in providers/__init__.py
  4. Add the choice to --provider argparse in wrapper.py
  5. Handle the new provider in _build_tool_result_messages()

Platform Features

Beyond the client and MCP protocol, the Martol platform provides these server-side features.

Billing & Feature Gates

Stripe-powered billing with Free and Pro plans. Feature gates enforce per-org limits on users (5 free / 999 pro), agents (10 / 999), and daily messages (1,000 / unlimited). File uploads require Pro.

Team & Invitation Management

Invite users to rooms via email. Invitations generate a unique link at /accept-invitation/[id]. Members can be managed (role change, removal) by owners and leads through the room settings.

Role Hierarchy

| Role | Send | Approve | Manage members | Billing |
|---|---|---|---|---|
| owner | Yes | All risks | Yes | Yes |
| lead | Yes | Low/Med | Yes | Yes |
| member | Yes | No | No | No |
| viewer | No (read-only) | No | No | No |
| agent | Via MCP/WS | Never | No | No |

Message Features

  • Soft delete — Messages are never hard-deleted. deleted_at marks removal while preserving audit trail.
  • Reply threads — Messages can reference a parent via reply_to.
  • Typing indicators — Real-time typing notifications via WebSocket.
  • Presence — Online/offline status broadcast when users connect or disconnect.

File Upload (R2)

Pro-plan rooms support file uploads to Cloudflare R2. Files are namespaced by {org_id}/{message_id}/(unknown) and served via presigned URLs. Upload via the file_upload MCP tool or the chat UI attachment button.

Execution Confirmation

After an action is approved, agents can confirm execution using the action_confirm MCP tool. This transitions the action status from approved to executed and records the timestamp for audit purposes.

Data Export & Account Deletion

Users can export all their data (messages, rooms, settings) as JSON via Settings → Data Export. Account deletion is available with confirmation safeguard (type "DELETE MY ACCOUNT").

Passkey Authentication

In addition to email OTP, users can register FIDO2/WebAuthn passkeys for passwordless login. Manage passkeys in Settings → Passkeys.

Internationalization

All user-facing strings are extracted via Paraglide for i18n support. Currently English only — community translations welcome.

Troubleshooting

Connection Issues

| Symptom | Cause | Fix |
|---|---|---|
| "API key revoked (4001)" | Agent was deleted in the Martol UI | Create a new agent and update the API key |
| "WebSocket URL must use wss://" | TLS enforcement for remote hosts | Use wss:// in production |
| "Cannot resolve agent identity" | chat_who failed | Check API key and network connectivity |
| Reconnecting in a loop | Network instability | Check server status; the agent auto-reconnects up to 20 times |
| "HMAC verification failed" | Secret mismatch | Ensure MARTOL_HMAC_SECRET matches the server's HMAC_SIGNING_SECRET |

Agent Not Responding

| Symptom | Cause | Fix |
|---|---|---|
| Ignores messages | respond_mode=mention and not mentioned | Use @AgentName or set --respond all |
| "LLM rate limit exceeded" | Too many requests per minute | Increase --rate-limit or wait |
| Empty responses | LLM error (logged but swallowed) | Check logs for "LLM call failed" errors |
| User messages missing from context | User opted out of AI | The user can re-enable via room settings |

Claude Code Mode

| Symptom | Cause | Fix |
|---|---|---|
| "claude-agent-sdk required" | Missing dependency | pip install "martol-agent[claude-code] @ git+https://github.com/nyem69/martol-client.git" |
| Tools always denied | Not in the whitelist | Add tools to CLAUDE_CODE_ALLOWED_TOOLS |
| "Access to '.env' is restricted" | Path deny-list match | By design; sensitive files are always blocked |
| "WebFetch blocked: private IP" | SSRF protection | By design; use public URLs only |
| Approval timeout | No room member approved in time | Increase CLAUDE_CODE_APPROVAL_TIMEOUT or approve faster |

Open in Martol Badge

Add a badge to your GitHub README so collaborators can quickly create a Martol chat room for your repository. The badge links to a setup page that provisions a room and agent key — it does not access, read, or modify your code.

Preview

Open in Martol

What happens when someone clicks the badge

  1. Sign in — the user is redirected to Martol's login page if not already authenticated.
  2. Room created — a new Martol chat room is created, named after the repository (e.g. owner/repo). The user becomes the room owner.
  3. Agent key generated — a one-time API key is displayed so the user can connect an AI agent (e.g. via martol-client).
  4. Connection instructions shown — CLI commands and MCP config are displayed for quick setup.

If the user already owns a room for the same repository, they are redirected to that existing room instead of creating a duplicate.

What it can and cannot do

| It does | It does NOT |
|---|---|
| Create a Martol chat room named after your repo | Read, clone, or access your repository code |
| Generate an API key for connecting an AI agent | Install webhooks, apps, or integrations on GitHub |
| Use the repo name as a label (e.g. owner/repo) | Request any GitHub permissions or OAuth scopes |
| Redirect to an existing room if one already exists | Push commits, open PRs, or modify repository settings |
| Show CLI/MCP setup instructions for the agent | Access issues, pull requests, or any GitHub API on your behalf |
No GitHub access required. The badge URL only contains your repository name as a plain string (e.g. ?repo=owner/repo). Martol never connects to GitHub — it uses the repo name solely as a room label. All repository access happens locally on your machine when you run the martol-client agent, which operates on your local checkout.

What if two people click the badge for the same repo?

Each person gets their own separate room. Rooms are scoped to the user who created them — there is no shared global room per repository.

For example, if Alice and Bob both click the badge for acme/dashboard:

  • Alice gets her own room named acme/dashboard, where she is the owner.
  • Bob gets a separate room also named acme/dashboard, where he is the owner.
  • They cannot see each other's rooms, messages, or agents.

To collaborate in the same room, one person creates the room (via the badge or manually), then invites others using the Invite section in the member panel. Invited users join the existing room rather than creating a new one.

Markdown

[![Open in Martol](https://martol.plitix.com/badge/open-in-martol.svg)](https://martol.plitix.com/open?repo=OWNER/REPO)

HTML

<a href="https://martol.plitix.com/open?repo=OWNER/REPO">
  <img src="https://martol.plitix.com/badge/open-in-martol.svg" alt="Open in Martol" />
</a>
Setup: Replace OWNER/REPO with your GitHub repository (e.g. nyem69/martol-client).