# martol-client

Connect AI agents to Martol chat rooms via WebSocket + MCP.

A Python agent wrapper that bridges language models to Martol collaborative workspaces. Supports Anthropic Claude, OpenAI, any OpenAI-compatible API (Ollama, Groq, vLLM), Claude Code, and OpenAI Codex as subprocesses.
```
          CLI / .env
              |
            martol
              |
  +--------- Wrapper ---------+
  |                           |
  WebSocket               MCP HTTP
  (real-time I/O)     (/mcp/v1 tools)
  |                           |
  listen / send         action_submit
  typing indicators  action_status/confirm
  |                           |
  +------- Martol Server -----+
```

## Overview

martol-client uses a dual-channel architecture to connect AI agents to chat rooms:
- **WebSocket** — real-time message listening, sending, and typing indicators
- **MCP HTTP** (`/mcp/v1`) — structured actions that go through the server's role × risk approval matrix
The agent resolves its own identity on startup via `chat_who`, seeds conversation context via `chat_resync`, then listens for @mentions or replies. When triggered, it calls the configured LLM provider and relays responses back to the chat room.

Three operational modes are available: **Provider Mode** (direct LLM API calls), **Claude Code Mode** (Claude Code subprocess with project access), and **Codex Mode** (OpenAI Codex subprocess via MCP server).
## Quickstart

### Prerequisites

- Python 3.10+
- A Martol room with an agent API key (created via the Martol web UI under Settings → Agents)
- An LLM API key (Anthropic, OpenAI, or compatible)

### Setup

1. Install:

   ```bash
   pip install "martol-agent[claude-code] @ git+https://github.com/nyem69/martol-client.git"
   ```

2. Create your environment file:

   ```bash
   touch .env && chmod 600 .env
   ```

3. Configure the connection:

   ```bash
   # .env
   MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
   MARTOL_API_KEY=mtl_your_agent_api_key
   AI_PROVIDER=anthropic
   AI_API_KEY=sk-ant-...
   ```

4. Run the agent:

   ```bash
   martol
   ```
Then mention `@AgentName` in the chat room to trigger a response.

## Architecture
### Startup Sequence

1. Parse CLI flags / load `.env` (or `.env.<profile>`)
2. Warn if `.env` has overly permissive file permissions
3. Create wrapper — `AgentWrapper` (provider), `ClaudeCodeWrapper` (claude-code), or `CodexWrapper` (codex)
4. Connect WebSocket with TLS validation and API key auth
5. Identity resolution — call `chat_who` via MCP to resolve `agent_user_id`, `agent_name`, room name, and member opt-out preferences
6. Context seeding — call `chat_resync` to fetch recent messages
7. Send AI disclosure message to the room
8. Enter WebSocket listen loop
### Response Flow

The agent responds if the message is an @mention, a reply to the agent's own message, or (in `all` mode) any message from another user. Own messages are always ignored to prevent self-response loops.
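As a sketch, this trigger check can be expressed as a pure function. The function name, field names, and `own_ids` tracking set below are illustrative, not the project's actual implementation:

```python
def should_respond(msg: dict, agent_id: str, agent_name: str,
                   respond_mode: str, own_ids: set) -> bool:
    """Decide whether the agent should answer an incoming chat message."""
    if msg["sender_id"] == agent_id:
        return False                      # never answer our own messages
    if respond_mode == "all":
        return True                       # all mode: every non-own message
    if f"@{agent_name}" in msg["body"]:
        return True                       # explicit @mention
    return msg.get("replyTo") in own_ids  # reply to one of our messages
```

The own-message check runs first so that even `all` mode cannot produce a self-response loop.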
### Reconnection

On disconnect, the agent reconnects with exponential backoff: 1s → 2s → 4s → ... → 30s (capped), up to 20 attempts. A `lastKnownId` query parameter resumes from the last received sequence ID. If the server returns close code 4001 (API key revoked), the agent stops permanently.
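The backoff schedule amounts to doubling with a cap, roughly:

```python
def reconnect_delays(max_attempts: int = 20, cap: int = 30):
    """Yield the reconnect delay in seconds for each attempt: 1, 2, 4, ... capped at 30."""
    for attempt in range(max_attempts):
        yield min(cap, 2 ** attempt)
```

Each reconnect would also append `?lastKnownId=<last serverSeqId>` so the server can deliver only the missed messages.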
## Environment Variables

All options can be set via environment variables or CLI flags. CLI flags take precedence.
### Connection

| Variable | CLI Flag | Default | Required | Description |
|---|---|---|---|---|
| `MARTOL_WS_URL` | `--url` | — | required | WebSocket URL for the room |
| `MARTOL_API_KEY` | `--api-key` | — | required* | Agent API key |
| `MARTOL_API_KEY_FILE` | `--api-key-file` | — | optional | Path to a file containing the API key (preferred over the env var) |
| `MARTOL_MCP_URL` | `--mcp-url` | derived | optional | MCP HTTP base URL; auto-derived from the WS URL if omitted |
| `MARTOL_HMAC_SECRET` | `--hmac-secret` | — | optional | HMAC secret for message integrity verification |
| `ALLOW_UNSIGNED_MESSAGES` | `--allow-unsigned` | `false` | optional | Accept unsigned messages when HMAC is configured |
### AI Provider

| Variable | CLI Flag | Default | Required | Description |
|---|---|---|---|---|
| `AI_PROVIDER` | `--provider` | `anthropic` | optional | `anthropic` or `openai` |
| `AI_API_KEY` | `--ai-key` | — | required* | LLM provider API key (provider mode only) |
| `AI_MODEL` | `--model` | provider default | optional | Model ID override |
| `AI_BASE_URL` | `--ai-base-url` | — | optional | OpenAI-compatible base URL (Ollama, Groq, vLLM) |
### Behavior

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| `CONTEXT_MESSAGES` | `--context` | 50 | Rolling context window size |
| `RESPOND_MODE` | `--respond` | `mention` | `mention` (only @mentions) or `all` |
| `LLM_RATE_LIMIT` | `--rate-limit` | 10 | Max LLM API calls per minute |
| `AGENT_MODE` | `--mode` | `provider` | `provider`, `claude-code`, or `codex` |
### Claude Code Mode

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| `CLAUDE_CODE_MODEL` | `--claude-model` | Claude default | Model override for Claude Code |
| `CLAUDE_CODE_PERMISSION_MODE` | `--claude-permission-mode` | `default` | `default`, `acceptEdits`, or `bypassPermissions` |
| `CLAUDE_CODE_ALLOWED_TOOLS` | `--claude-allowed-tools` | safe defaults | Comma-separated whitelist of auto-approved tools |
| `CLAUDE_CODE_DENY_PATHS` | — | `.env*,*.key,*.pem,*.p12` | Glob patterns for blocked file paths |
| `CLAUDE_CODE_APPROVAL_TIMEOUT` | — | 60 | Seconds to wait for approval |
## Named Profiles

Run multiple agents with different configurations using `--profile <name>`. This loads `.env.<name>` instead of `.env`.

```bash
# Run different agents from the same directory
martol --profile claude       # loads .env.claude
martol --profile gpt          # loads .env.gpt
martol --profile ollama       # loads .env.ollama
martol --profile claude-code  # loads .env.claude-code
```

### Example: Anthropic Claude
```bash
# .env.claude
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AI_PROVIDER=anthropic
AI_API_KEY=sk-ant-...
RESPOND_MODE=mention
```

### Example: Local Ollama
```bash
# .env.ollama
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AI_PROVIDER=openai
AI_API_KEY=ollama
AI_MODEL=qwen3:14b
AI_BASE_URL=http://localhost:11434/v1
```

### Example: Claude Code
```bash
# .env.claude-code
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<roomId>/ws
MARTOL_API_KEY=mtl_your_key
AGENT_MODE=claude-code
CLAUDE_CODE_ALLOWED_TOOLS=Read,Grep,Glob,LS
RESPOND_MODE=mention
```

## Authentication
### Agent API Keys

Agents authenticate with `mtl_`-prefixed API keys created in the Martol web UI. Each agent is a synthetic user with the `agent` role, bound to a specific room.

The key is sent in two places for redundancy:

- Query parameter: `?apiKey=mtl_...` (WebSocket connection)
- Header: `x-api-key: mtl_...` (WebSocket + MCP HTTP)

> Prefer `--api-key-file` or the `MARTOL_API_KEY` environment variable over passing keys via the `--api-key` CLI flag, which is visible in process listings.

### Creating an Agent
In the Martol web UI, room owners and leads can create agents under Settings → Agents → Create Agent. This generates a synthetic user with role `agent` and returns a one-time-visible `mtl_` API key.

Alternatively, use the REST API:

```http
POST /api/agents
Content-Type: application/json
Cookie: (session cookie)

{ "name": "my-bot" }
```

Response:

```json
{
  "ok": true,
  "data": {
    "agentUserId": "uuid",
    "name": "my-bot",
    "key": "mtl_..."
  }
}
```

## Provider Mode
The default mode. Calls an LLM API directly (Anthropic or OpenAI-compatible) and relays responses to the chat room. Tool calls (`action_submit` / `action_status` / `action_confirm`) are executed via MCP HTTP, with up to 5 iterations.
### Anthropic Claude

```bash
martol \
  --provider anthropic \
  --ai-key sk-ant-... \
  --model claude-sonnet-4-20250514
```

### OpenAI
```bash
martol \
  --provider openai \
  --ai-key sk-... \
  --model gpt-4o
```

### OpenAI-Compatible (Ollama, Groq, vLLM)
```bash
# Local Ollama
martol \
  --provider openai \
  --ai-key ollama \
  --ai-base-url http://localhost:11434/v1 \
  --model llama3.3

# Groq
martol \
  --provider openai \
  --ai-key gsk_... \
  --ai-base-url https://api.groq.com/openai/v1 \
  --model llama-3.3-70b-versatile
```

### System Prompt
The agent builds a system prompt containing its name, room name, member count, instructions for tool use, and security rules. User messages are pseudonymized (`User-1`, `User-2`) for privacy and wrapped in XML tags:

```xml
<chat_message sender="User-1">Hey @Bot, review this PR</chat_message>
```

Messages from users who opted out of AI context are excluded entirely.
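A minimal sketch of how such pseudonymization might work — the alias scheme, function name, and field names are assumptions based on the description above, not the project's code:

```python
def pseudonymize(messages: list, opt_out_ids: frozenset = frozenset()) -> str:
    """Replace sender identities with stable User-N aliases and wrap bodies in XML tags."""
    aliases: dict = {}
    lines = []
    for m in messages:
        if m["sender_id"] in opt_out_ids:
            continue  # opted-out users are excluded entirely
        # Each distinct sender gets the next User-N alias, reused on later messages.
        alias = aliases.setdefault(m["sender_id"], f"User-{len(aliases) + 1}")
        lines.append(f'<chat_message sender="{alias}">{m["body"]}</chat_message>')
    return "\n".join(lines)
```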
### Tool Loop
When the LLM returns tool calls, the agent executes them via MCP HTTP, feeds results back to the LLM, and repeats. This continues for up to 5 iterations. Tool arguments are validated against a whitelist of known fields, and results are truncated to 8,000 characters.
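The loop can be sketched as follows; `llm` and `mcp_call` are stand-ins for the provider and MCP client, and the dict shapes are illustrative:

```python
MAX_TOOL_ITERATIONS = 5
TOOL_RESULT_LIMIT = 8_000  # characters

def run_tool_loop(llm, mcp_call, messages: list) -> str:
    """Call the LLM, execute requested tools via MCP, and feed results back."""
    for _ in range(MAX_TOOL_ITERATIONS):
        reply = llm(messages)
        if not reply["tool_calls"]:
            return reply["text"]  # no tools requested: final answer
        for call in reply["tool_calls"]:
            result = mcp_call(call["tool"], call["arguments"])
            messages.append({
                "role": "tool",
                "content": str(result)[:TOOL_RESULT_LIMIT],  # truncate to 8,000 chars
            })
    return "(tool iteration limit reached)"
```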
## Claude Code Mode

Bypasses the LLM provider strategy entirely. Instead, a persistent Claude Code subprocess is managed via the Agent SDK. Chat messages become prompts; Claude Code has full access to the local project directory.

> Claude Code talks to the native Anthropic API (`/v1/messages`), which is not available on OpenAI-compatible endpoints (Ollama, vLLM, etc.). For local models, use the regular provider mode with `AI_PROVIDER=openai`.

```bash
# Run against a project directory
cd /path/to/your/project
martol --profile claude-code
```

### Permission Modes
| Mode | Behavior |
|---|---|
| `default` | Every tool call posted to the chat room for approval via `action_submit` |
| `acceptEdits` | File edits auto-approved; destructive operations still require approval |
| `bypassPermissions` | All tool calls auto-approved; requires `--bypass-permissions-confirm` |
> **Warning:** `bypassPermissions` grants unrestricted shell and filesystem access to chat room users. Only use it in trusted, isolated environments.

### Tool Whitelist

Default safe tools (when no whitelist is specified): Read, Grep, Glob, LS, WebSearch, WebFetch.

Wildcards are supported:

```bash
CLAUDE_CODE_ALLOWED_TOOLS=Read,Grep,Glob,LS,mcp__playwright__*
```

### Approval Flow

Tool calls that are not auto-approved are posted to the chat room via `action_submit` and wait up to `CLAUDE_CODE_APPROVAL_TIMEOUT` seconds (default 60) for a room member to approve or reject them.
## Codex Mode

Runs OpenAI Codex as an MCP server subprocess. Chat messages become prompts; Codex has full access to the local project directory within its configured sandbox.

> Requires the `codex` CLI on `PATH` (`npm install -g @openai/codex`), authenticated via `codex login` or the `OPENAI_API_KEY` env var. No extra Python dependencies are needed.

```bash
# Run against a project directory
cd /path/to/your/project
martol --profile codex
```

### Sandbox Modes
| Mode | Behavior |
|---|---|
| `read-only` | Can read files but not write or execute shell commands |
| `workspace-write` | Can read and write files in the project directory |
| `danger-full-access` | Unrestricted filesystem and shell access |
### Approval Policies

| Policy | Behavior |
|---|---|
| `on-failure` | Auto-approve commands; ask on failure (default) |
| `on-request` | Ask before each shell command |
| `untrusted` | Commands treated as untrusted; extra sandboxing |
| `never` | Never ask; auto-approve all commands |
### Codex Configuration

| Variable | CLI Flag | Default | Description |
|---|---|---|---|
| `CODEX_MODEL` | `--codex-model` | Codex default | Model override (e.g. `o3`, `o4-mini`) |
| `CODEX_SANDBOX` | `--codex-sandbox` | `read-only` | Sandbox mode for file/shell access |
| `CODEX_APPROVAL_POLICY` | `--codex-approval-policy` | `on-failure` | Shell command approval policy |
### Example Profile

```bash
# .env.codex
AGENT_MODE=codex
CODEX_SANDBOX=read-only
CODEX_APPROVAL_POLICY=on-failure
```

## WebSocket Protocol

Connect to `wss://<host>/api/rooms/<roomId>/ws?apiKey=<key>` with the `x-api-key` header. Messages are JSON-encoded.
### Client → Server

| Type | Payload | Description |
|---|---|---|
| `message` | `{ body, localId, replyTo? }` | Send a chat message |
| `typing` | `{ isTyping: bool }` | Typing indicator |
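Building these frames is plain JSON encoding. The sketch below merges the payload fields into the top-level object next to `type`; the exact envelope shape is an assumption and may differ from the actual wire format:

```python
import json

def message_frame(body: str, local_id: str, reply_to=None) -> str:
    """Encode a client->server chat message frame (envelope shape assumed)."""
    frame = {"type": "message", "body": body, "localId": local_id}
    if reply_to is not None:
        frame["replyTo"] = reply_to  # optional reply threading
    return json.dumps(frame)

def typing_frame(is_typing: bool) -> str:
    """Encode a typing-indicator frame."""
    return json.dumps({"type": "typing", "isTyping": is_typing})
```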
### Server → Client

| Type | Payload | Description |
|---|---|---|
| `message` | `{ message: { serverSeqId, sender_id, sender_name, sender_role, body, replyTo?, _hmac? } }` | Chat message from a room member |
| `history` | `{ messages: [...] }` | Delta sync on reconnect |
| `id_map` | `{ mappings: [{ localId, serverSeqId, dbId }] }` | Maps a client `localId` to server IDs (batched) |
| `typing` | `{ senderId, senderName, active }` | Typing indicator from another member |
| `presence` | `{ senderId, senderName, senderRole, status }` | Online/offline status change |
| `roster` | `{ members: [{ id, name, role }] }` | Full member list update |
| `error` | `{ code, message }` | Error notification |
### Error Codes

| Code | Meaning |
|---|---|
| `rate_limited` | Too many messages in the time window |
| `room_full` | Room has reached capacity |
| `invalid_message` | Malformed or oversized message |
| `unauthorized` | Invalid or expired credentials |
| `resync_required` | Client should re-fetch history |
### WebSocket Close Code 4001

If the server closes the WebSocket with code 4001, the API key has been revoked. The agent stops permanently and does not attempt to reconnect.
## MCP HTTP Protocol

All structured actions go through the MCP endpoint at `POST /mcp/v1`. The base URL is derived from the WebSocket URL (`wss://` → `https://`).
### Request Format

```http
POST {mcp_url}/mcp/v1
x-api-key: mtl_...
Content-Type: application/json

{
  "tool": "chat_who",
  "arguments": {}
}
```

### Response Envelope

```jsonc
// Success
{ "ok": true, "data": { ... } }

// Error
{ "ok": false, "error": "description", "code": "error_code" }
```

The payload size limit is 65,536 bytes. Requests are validated with Zod schemas server-side. The client enforces a 30-second timeout and blocks redirects.
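A minimal client for this endpoint might look like the following sketch, using only the standard library. The redirect blocking, 30-second timeout, and envelope unwrapping mirror the behavior described above; the actual client's implementation may differ:

```python
import json
import urllib.request

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects, mirroring the client's redirect blocking."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

_opener = urllib.request.build_opener(_NoRedirect)

def unwrap(envelope: dict) -> dict:
    """Unwrap the { ok, data } / { ok, error, code } response envelope."""
    if not envelope.get("ok"):
        raise RuntimeError(f"{envelope.get('code')}: {envelope.get('error')}")
    return envelope["data"]

def mcp_call(base_url: str, api_key: str, tool: str, arguments: dict) -> dict:
    req = urllib.request.Request(
        f"{base_url}/mcp/v1",
        data=json.dumps({"tool": tool, "arguments": arguments}).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with _opener.open(req, timeout=30) as resp:  # 30-second client timeout
        return unwrap(json.load(resp))
```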
## Tools Reference

Eight tools are available via MCP. The agent exposes three to the LLM (`action_submit`, `action_status`, and `action_confirm`); the others are used internally for context management.

| Tool | Arguments | Purpose |
|---|---|---|
| `chat_who` | none | Resolve agent identity, room name, member list, opt-out preferences |
| `chat_resync` | `limit?` (1–200) | Fetch the last N messages (context seeding) |
| `chat_read` | `limit?` | Cursor-based message reading |
| `chat_send` | `body` (max 32 KB), `reply_to?` | Send a message as the agent |
| `chat_join` | none | Join the room (idempotent, 1-minute cooldown) |
| `action_submit` | `action_type`, `risk_level`, `trigger_message_id`, `description`, `payload?`, `simulation?` | Submit an action for human approval |
| `action_status` | `action_id` | Poll approval status |
| `action_confirm` | `action_id` | Confirm execution of an approved action |
### Action Types

`question_answer` · `code_review` · `code_write` · `code_modify` · `code_delete` · `deploy` · `config_change`
### Risk Levels & Approval Matrix

| Risk | Owner | Lead | Member | Viewer |
|---|---|---|---|---|
| low | approve | approve | — | — |
| medium | approve | approve | — | — |
| high | approve | — | — | — |

Agents cannot approve or reject their own actions.
### Simulation Payloads

Actions can include optional `simulation` objects for richer approval UIs: `code_diff`, `shell_preview`, `api_call`, `file_ops`, or `custom`.
## TLS Enforcement

The client enforces TLS for all non-local connections:

- WebSocket must use `wss://` unless connecting to `localhost`, `127.0.0.1`, or `::1`
- MCP HTTP must use `https://` for non-local hosts
- Unencrypted connections to remote hosts are rejected at startup

```bash
# Production (required)
MARTOL_WS_URL=wss://martol.plitix.com/api/rooms/<id>/ws

# Local development (allowed)
MARTOL_WS_URL=ws://localhost:3000/api/rooms/<id>/ws
```

## HMAC Verification
When `MARTOL_HMAC_SECRET` is set, the client verifies the integrity of every incoming WebSocket message using HMAC-SHA256:

1. Extract the `_hmac` field (base64-encoded) from the message
2. Reconstruct the original JSON (before `_hmac` was appended by the server)
3. Compute HMAC-SHA256 using the shared secret
4. Compare using constant-time comparison — reject on mismatch

Messages without `_hmac` are dropped unless `--allow-unsigned` is set (migration mode for rolling out HMAC).
> Set `MARTOL_HMAC_SECRET` to the same value as `HMAC_SIGNING_SECRET` on the Martol server.

## SSRF & Deny-Lists
### SSRF Protection (Claude Code Mode)

WebFetch tool calls are checked against private/internal IP ranges. The following are blocked:

- `10.0.0.0/8` — private
- `172.16.0.0/12` — private
- `192.168.0.0/16` — private
- `127.0.0.0/8` — loopback
- `169.254.0.0/16` — link-local (cloud metadata)
- `::1` — IPv6 loopback

Domain names (not raw IPs) pass the SSRF check and proceed to the normal approval flow.
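The range check is straightforward with the standard `ipaddress` module; this sketch mirrors the blocked list above:

```python
import ipaddress

BLOCKED_RANGES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # private
    "127.0.0.0/8",                                     # loopback
    "169.254.0.0/16",                                  # link-local (cloud metadata)
    "::1/128",                                         # IPv6 loopback
)]

def is_blocked_ip(host: str) -> bool:
    """True only for raw IP literals inside a blocked range; domain names pass through."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal: left to the normal approval flow
    return any(addr in net for net in BLOCKED_RANGES)
```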
### Path Deny-List

File access tools (Read, Write, Edit) are checked against glob patterns before any approval flow:

```bash
# Default deny patterns
CLAUDE_CODE_DENY_PATHS=.env*,*.key,*.pem,*.p12
```

Matching files are immediately denied — they never reach the chat room for approval.
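Conceptually the check is a glob match before anything is posted for approval. Whether the real client matches on the base name or the full path is an assumption here; this sketch matches the base name:

```python
from fnmatch import fnmatch
from pathlib import PurePath

DENY_PATTERNS = (".env*", "*.key", "*.pem", "*.p12")

def is_denied(path: str, patterns=DENY_PATTERNS) -> bool:
    """True if the file's base name matches any deny pattern."""
    name = PurePath(path).name
    return any(fnmatch(name, pat) for pat in patterns)
```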
### Tool Argument Validation

In provider mode, tool arguments are validated against a whitelist of known fields per tool. Unknown fields are silently stripped to prevent injection:

```
# Only these fields pass through for action_submit
action_type, risk_level, description, payload, trigger_message_id
```

## Server API Surface
The Martol server exposes REST endpoints for human users and MCP HTTP for agents. Below is the route map relevant to client integration.

| Path | Method | Auth | Purpose |
|---|---|---|---|
| `/mcp/v1` | POST | API key | MCP tool dispatch (agent communication) |
| `/api/agents` | GET, POST | Session | List / create agents |
| `/api/agents/[id]` | DELETE | Session | Revoke and delete an agent |
| `/api/actions` | GET | Session | List pending / recent actions |
| `/api/actions/[id]/approve` | POST | Session | Approve an action (role-gated) |
| `/api/actions/[id]/reject` | POST | Session | Reject an action |
| `/api/messages` | GET | Session | Cursor-paginated message history |
| `/api/rooms/[id]/ai-opt-out` | PATCH | Session | Toggle AI opt-out for a user |
## Message Types

### WebSocket Message Payload

```jsonc
{
  "serverSeqId": 12345,
  "sender_id": "uuid-of-sender",
  "sender_name": "alice",
  "sender_role": "owner",     // owner | lead | member | viewer | agent
  "body": "Hey @Bot, help me",
  "replyTo": 12340,           // optional, serverSeqId of parent
  "_hmac": "base64..."        // if HMAC enabled
}
```

### MCP chat_who Response
```json
{
  "ok": true,
  "data": {
    "room_name": "dev-room",
    "self_user_id": "agent-uuid",
    "members": [
      {
        "user_id": "uuid",
        "name": "alice",
        "role": "owner",
        "ai_opt_out": false
      }
    ]
  }
}
```

### MCP action_submit Response
```jsonc
{
  "ok": true,
  "data": {
    "action_id": 42,
    "status": "pending",
    "server_risk": "medium"   // server may override the submitted risk level
  }
}
```

## LLM Providers
| Provider | Flag | Default Model | Notes |
|---|---|---|---|
| Anthropic | --provider anthropic | claude-sonnet-4-20250514 | Native Anthropic SDK |
| OpenAI | --provider openai | gpt-4o | OpenAI SDK |
| Ollama | --provider openai | — | --ai-base-url http://localhost:11434/v1 |
| Groq | --provider openai | — | --ai-base-url https://api.groq.com/openai/v1 |
| Together | --provider openai | — | --ai-base-url https://api.together.xyz/v1 |
| vLLM | --provider openai | — | --ai-base-url http://localhost:8000/v1 |
### Adding a New Provider

1. Create `martol_agent/providers/<name>.py` implementing the `LLMProvider` ABC
2. Implement `chat()`, `format_tool_result()`, and `format_assistant_message()`
3. Register it in the `create_provider()` factory in `providers/__init__.py`
4. Add the choice to the `--provider` argparse option in `wrapper.py`
5. Handle the new provider in `_build_tool_result_messages()`
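The interface implied by these steps can be sketched as an ABC. The method signatures and the `EchoProvider` below are illustrative, not the project's exact code:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Minimal shape of the provider interface (names taken from the steps above)."""

    @abstractmethod
    def chat(self, messages: list, tools: list) -> dict: ...

    @abstractmethod
    def format_tool_result(self, tool_call_id: str, result: str) -> dict: ...

    @abstractmethod
    def format_assistant_message(self, reply: dict) -> dict: ...

class EchoProvider(LLMProvider):
    """Toy provider that echoes the last user message — useful for wiring tests."""

    def chat(self, messages, tools):
        return {"text": messages[-1]["content"], "tool_calls": []}

    def format_tool_result(self, tool_call_id, result):
        return {"role": "tool", "tool_call_id": tool_call_id, "content": result}

    def format_assistant_message(self, reply):
        return {"role": "assistant", "content": reply["text"]}
```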
## Platform Features

Beyond the client and MCP protocol, the Martol platform provides these server-side features.
### Billing & Feature Gates

Stripe-powered billing with Free and Pro plans. Feature gates enforce per-org limits on users (5 free / 999 pro), agents (10 / 999), and daily messages (1,000 / unlimited). File uploads require Pro.
### Team & Invitation Management

Invite users to rooms via email. Invitations generate a unique link at `/accept-invitation/[id]`. Members can be managed (role change, removal) by owners and leads through the room settings.
### Role Hierarchy

| Role | Send | Approve | Manage members | Billing |
|---|---|---|---|---|
| owner | Yes | All risks | Yes | Yes |
| lead | Yes | Low/Med | Yes | Yes |
| member | Yes | No | No | No |
| viewer | No (read-only) | No | No | No |
| agent | Via MCP/WS | Never | No | No |
### Message Features

- **Soft delete** — messages are never hard-deleted; `deleted_at` marks removal while preserving the audit trail.
- **Reply threads** — messages can reference a parent via `reply_to`.
- **Typing indicators** — real-time typing notifications via WebSocket.
- **Presence** — online/offline status broadcast when users connect or disconnect.
### File Upload (R2)

Pro-plan rooms support file uploads to Cloudflare R2. Files are namespaced by `{org_id}/{message_id}/…` and served via presigned URLs. Requires the `file_upload` MCP tool or the chat UI attachment button.
### Execution Confirmation

After an action is approved, agents can confirm execution using the `action_confirm` MCP tool. This transitions the action status from `approved` to `executed` and records the timestamp for audit purposes.

### Data Export & Account Deletion

Users can export all their data (messages, rooms, settings) as JSON via Settings → Data Export. Account deletion is available with a confirmation safeguard (type "DELETE MY ACCOUNT").

### Passkey Authentication

In addition to email OTP, users can register FIDO2/WebAuthn passkeys for passwordless login. Manage passkeys in Settings → Passkeys.

### Internationalization

All user-facing strings are extracted via Paraglide for i18n support. Currently English only — community translations welcome.
## Troubleshooting

### Connection Issues
| Symptom | Cause | Fix |
|---|---|---|
| "API key revoked (4001)" | Agent was deleted in the Martol UI | Create a new agent and update the API key |
| "WebSocket URL must use wss://" | TLS enforcement for remote hosts | Use wss:// in production |
| "Cannot resolve agent identity" | chat_who failed | Check API key and network connectivity |
| Reconnecting in loop | Network instability | Check server status; agent auto-reconnects up to 20 times |
| "HMAC verification failed" | Secret mismatch | Ensure MARTOL_HMAC_SECRET matches server's HMAC_SIGNING_SECRET |
### Agent Not Responding
| Symptom | Cause | Fix |
|---|---|---|
| Ignores messages | respond_mode=mention and not mentioned | Use @AgentName or set --respond all |
| "LLM rate limit exceeded" | Too many requests per minute | Increase --rate-limit or wait |
| Empty responses | LLM error (logged but swallowed) | Check logs for "LLM call failed" errors |
| User messages missing from context | User opted out of AI | User can re-enable via room settings |
### Claude Code Mode
| Symptom | Cause | Fix |
|---|---|---|
| "claude-agent-sdk required" | Missing dependency | pip install "martol-agent[claude-code] @ git+https://github.com/nyem69/martol-client.git" |
| Tools always denied | Not in whitelist | Add tools to CLAUDE_CODE_ALLOWED_TOOLS |
| "Access to '.env' is restricted" | Path deny-list match | By design — sensitive files are always blocked |
| "WebFetch blocked: private IP" | SSRF protection | By design — use public URLs only |
| Approval timeout | No room member approved in time | Increase CLAUDE_CODE_APPROVAL_TIMEOUT or approve faster |
## Open in Martol Badge

Add a badge to your GitHub README so collaborators can quickly create a Martol chat room for your repository. The badge links to a setup page that provisions a room and agent key — it does not access, read, or modify your code.

### What happens when someone clicks the badge
1. **Sign in** — the user is redirected to Martol's login page if not already authenticated.
2. **Room created** — a new Martol chat room is created, named after the repository (e.g. `owner/repo`). The user becomes the room owner.
3. **Agent key generated** — a one-time API key is displayed so the user can connect an AI agent (e.g. via martol-client).
4. **Connection instructions shown** — CLI commands and MCP config are displayed for quick setup.

If the user already owns a room for the same repository, they are redirected to that existing room instead of creating a duplicate.
### What it can and cannot do

| It does | It does NOT |
|---|---|
| Create a Martol chat room named after your repo | Read, clone, or access your repository code |
| Generate an API key for connecting an AI agent | Install webhooks, apps, or integrations on GitHub |
| Use the repo name as a label (e.g. `owner/repo`) | Request any GitHub permissions or OAuth scopes |
| Redirect to an existing room if one already exists | Push commits, open PRs, or modify repository settings |
| Show CLI/MCP setup instructions for the agent | Access issues, pull requests, or any GitHub API on your behalf |

> The badge link carries only the repository name as a query parameter (`?repo=owner/repo`). Martol never connects to GitHub — it uses the repo name solely as a room label. All repository access happens locally on your machine when you run the martol-client agent, which operates on your local checkout.

### What if two people click the badge for the same repo?
Each person gets their own separate room. Rooms are scoped to the user who created them — there is no shared global room per repository.
For example, if Alice and Bob both click the badge for acme/dashboard:

- Alice gets her own room named `acme/dashboard`, where she is the owner.
- Bob gets a separate room, also named `acme/dashboard`, where he is the owner.
- They cannot see each other's rooms, messages, or agents.

To collaborate in the same room, one person creates the room (via the badge or manually), then invites others using the Invite section in the member panel. Invited users join the existing room rather than creating a new one.
### Markdown

```markdown
[![Open in Martol](https://martol.plitix.com/badge/open-in-martol.svg)](https://martol.plitix.com/open?repo=OWNER/REPO)
```

### HTML

```html
<a href="https://martol.plitix.com/open?repo=OWNER/REPO">
  <img src="https://martol.plitix.com/badge/open-in-martol.svg" alt="Open in Martol" />
</a>
```

Replace `OWNER/REPO` with your GitHub repository (e.g. `nyem69/martol-client`).