# Quick Start

Get Station running with your AI provider in about 2 minutes.
## Prerequisites

- **Docker** - Required (used for components such as the Jaeger tracing backend in step 3)
- **AI Provider** - Choose one:
  - Claude Max/Pro subscription (recommended - no API billing)
  - OpenAI API key (gpt-5, gpt-5-mini, gpt-4o, etc.)
  - Google Gemini API key
## 1. Install Station

```bash
curl -fsSL https://raw.githubusercontent.com/cloudshipai/station/main/install.sh | bash
```
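To confirm the install worked, you can check that the binary landed on your `PATH`. This is a generic shell check, not a Station command:

```shell
# Generic post-install check (not Station-specific): is `stn` on PATH?
if command -v stn >/dev/null 2>&1; then
  install_status="found at $(command -v stn)"
else
  install_status="not found - open a new shell or check the installer output"
fi
echo "stn: $install_status"
```

If the binary is not found, the installer may have placed it in a directory your current shell has not picked up yet.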
## 2. Initialize with Your AI Provider

Choose your preferred AI provider:

### Claude Max/Pro Subscription (Recommended)

Use your existing Claude Max or Claude Pro subscription - no API billing required.

```bash
# Initialize with Anthropic
stn init --provider anthropic --ship

# Authenticate with your Claude subscription
stn auth anthropic login
```

This opens your browser to authorize Station. After authorizing, paste the code and select your model:
```text
✓ Successfully authenticated with Anthropic!

You're using your Claude Max/Pro subscription.
Station will automatically refresh tokens as needed.

Select a model for your Claude Max/Pro subscription:

  * [1] Claude Opus 4.5
        Most capable model - best for complex tasks
    [2] Claude Opus 4
        Previous Opus version
    [3] Claude Sonnet 4.5
        Balanced performance and speed
    [4] Claude Sonnet 4
        Fast and efficient
    [5] Claude Haiku 4.5
        Fastest model - best for simple tasks
```

### OpenAI (API Key)

```bash
# Set your API key
export OPENAI_API_KEY="sk-..."

# Initialize (defaults to gpt-5-mini)
stn init --provider openai --ship
```

### Google Gemini (API Key)

```bash
# Set your API key
export GEMINI_API_KEY="..."

# Initialize
stn init --provider gemini --ship
```

**What is Ship?** Ship is an MCP CLI tool by the CloudShip AI team that provides filesystem and development tools for Station agents.
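For the API-key providers, a missing or empty key is a common first stumble. You can sanity-check the variable before running `stn init` with a generic shell test (shown for `OPENAI_API_KEY`; the same pattern works for `GEMINI_API_KEY`):

```shell
# Generic check (not a Station command): confirm the key is exported
# in the current shell before running `stn init`.
if [ -n "${OPENAI_API_KEY:-}" ]; then
  key_status="set"
else
  key_status="missing"
fi
echo "OPENAI_API_KEY is $key_status"
```

Remember that `export` only affects the current shell session; add the line to your shell profile if you want it to persist.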
## Optional: Git-Backed Workspace

For version-controlled agent configurations, initialize Station in a specific directory:

```bash
# Initialize in a git-backed workspace
stn init --provider openai --ship --config ~/my-station-workspace

# Your agents, MCP configs, and variables are now in ~/my-station-workspace
cd ~/my-station-workspace
git init && git add . && git commit -m "Initial Station config"
```

When connecting your MCP client, point it at your workspace:

```bash
# Claude Code CLI with custom workspace
claude mcp add station -e OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 --scope user -- stn stdio --config ~/my-station-workspace
```

See GitOps Workflow for team collaboration patterns.
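The edit-commit-review loop for a git-backed workspace is ordinary git. Here is an illustrative run-through in a throwaway temporary directory (the file name `example.prompt` is a placeholder, not a file Station creates):

```shell
# Illustrative only: version-controlling agent configs like code.
# Runs in a temp directory so it is safe to execute anywhere.
workspace="$(mktemp -d)"
cd "$workspace"
git init -q
echo "agent config placeholder" > example.prompt
git add example.prompt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "Add example agent"
git log --oneline
```

In a real workspace you would commit the agents, MCP configs, and variables that `stn init --config` wrote there, and review changes through your normal pull-request flow.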
## 3. Start Jaeger (Tracing)

Start the Jaeger tracing backend for observability:

```bash
stn jaeger up
```

This starts the Jaeger UI at http://localhost:16686 for viewing agent execution traces.
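To check that the UI actually came up (it can take a few seconds), a plain `curl` probe against the default port works; this is a generic reachability check, not a Station command:

```shell
# Optional: probe the Jaeger UI (assumes the default port 16686)
if curl -fsS -o /dev/null --max-time 2 http://localhost:16686; then
  jaeger_status="up"
else
  jaeger_status="not reachable yet"
fi
echo "Jaeger UI: $jaeger_status"
```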
## 4. Connect Your MCP Client

### Claude Code CLI (Recommended)

Use the `claude mcp add` command:

```bash
claude mcp add station -e OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 --scope user -- stn stdio
```

Verify it's added:

```bash
claude mcp list
```

### OpenCode
Add to `opencode.jsonc`:

```jsonc
{
  "mcp": {
    "station": {
      "enabled": true,
      "type": "local",
      "command": ["stn", "stdio"],
      "environment": {
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318"
      }
    }
  }
}
```

### Claude Desktop
Edit your config file:

| OS | Path |
|---|---|
| macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
| Linux | `~/.config/Claude/claude_desktop_config.json` |

```json
{
  "mcpServers": {
    "station": {
      "command": "stn",
      "args": ["stdio"],
      "env": {
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318"
      }
    }
  }
}
```

### Cursor
Add to `.cursor/mcp.json` in your project (or `~/.cursor/mcp.json` for global use):

```json
{
  "mcpServers": {
    "station": {
      "command": "stn",
      "args": ["stdio"],
      "env": {
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318"
      }
    }
  }
}
```

## 5. Start Using Station
Restart your editor. Station provides:
- Web UI at http://localhost:8585 for configuration
- Jaeger UI at http://localhost:16686 for traces
- 41 MCP tools available in your AI assistant
Try your first command:
"Show me all Station MCP tools available"
## Interactive Onboarding Guide (Optional)

Want a guided tour? Copy the prompt below into your AI assistant for a 3-5 minute hands-on tutorial:
```text
You are my Station onboarding guide. Walk me through an interactive hands-on tutorial.

RULES:
1. Create a todo list to track progress through each section
2. At each section, STOP and let me engage before continuing
3. Use Station MCP tools to demonstrate - don't just explain, DO IT
4. Keep it fun and celebrate wins!

THE JOURNEY:

## 1. Hello World Agent
- Create a "hello-world" agent that greets users and tells a joke
- Call the agent and show the result
- Explain what happened behind the scenes
[STOP for me to try it]

## 2. Faker Tools & MCP Templates
- Explain Faker tools (AI-generated mock data for safe development)
- Note: Real MCP tools are added via Station UI or template.json
- Explain MCP templates - they keep credentials safe when deploying
- Create a "prometheus-metrics" faker for realistic metrics data
[STOP to see the faker]

## 3. DevOps Investigation Agent
- Create a "metrics-investigator" agent using our prometheus faker
- It should analyze metrics and identify anomalies
- Call it: "Check for performance issues in the last hour"
[STOP to review the investigation]

## 4. Multi-Agent Hierarchy
- Explain agent hierarchies (coordinators delegate to specialists)
- Create an "incident-coordinator" that delegates to:
  - metrics-investigator (existing)
  - logs-investigator (new - create a logs faker too)
- Show me the hierarchy structure in the .prompt file
- Call coordinator: "Investigate why the API is slow"
[STOP to see delegation in action]

## 5. Inspecting Runs
- Use inspect_run to show detailed execution
- Explain: tool calls, delegations, timing
- Mention Jaeger traces at localhost:16686
[STOP to explore run details]

## 6. Workflow with Human-in-the-Loop
- Create a workflow that:
  1. Runs incident-coordinator to investigate
  2. Switches on severity:
     - Low: auto-remediate
     - High: request human_approval
  3. After approval: generates an incident report
- Make it complex (use switch/parallel), not just sequential
- Start the workflow
[STOP for me to approve/reject]

## 7. Evaluation & Reporting
- Run evals on our runs with evaluate_benchmark
- Generate a performance report for our incident team
- Explain what the scores mean
[STOP to review the report]

## 8. Grand Finale
- Direct me to http://localhost:8585 (Station UI)
- Quick tour: Agents, MCP servers, Runs, Workflows
- Celebrate - we built a production-ready incident response system!

## 9. Want More? (Optional)
If I want to continue, briefly explain these advanced features:
- **Schedules**: Cron-based agent scheduling (declarative in .prompt files)
- **Sandboxes**: Isolated Python/Node/Bash code execution for agents
- **Notify Webhooks**: Agents can send alerts to Slack, ntfy, Discord, etc.
- **Bundles**: Package and share your agent teams as portable bundles
- **Deploy**: `stn deploy` to Fly.io, Docker, or Kubernetes
- **Coding Backend**: OpenCode integration for AI-assisted development
- **CloudShip**: Connect to CloudShip for centralized management and team OAuth
Just explain what they do - no need to demo.

Start now with the todo list and Section 1. Make it engaging!
```

## What You Get
- 41 MCP Tools - Agent management, execution, evaluation, scheduling
- Web UI - Visual interface for agents, MCP servers, and runs
- Jaeger Traces - Full observability for every agent execution
- GitOps Ready - Version control your agents like code
## Next Steps
- Running with Docker (`stn up`) - Containerized option with bundles
- Create Your First Agent - Build custom agents
- MCP Tools Reference - All 41 available tools
- Architecture Overview - Understand how Station works