Why Pipelet is Claude-Ready

Pipelet ships a first-class Model Context Protocol server. Connect Claude Desktop, your CI agent, a custom Python script, or anything else that speaks MCP, and the model gets a typed tool catalog covering 30 operations across the entire charging stack.

This page explains why we built it that way and what you can do with it. The next page is the 3-minute setup.

The Model Context Protocol is an open standard for connecting LLMs to external tools and data sources. Anthropic published it in 2024; it’s now supported by Claude Desktop, Claude Code, the Anthropic SDK, and a growing list of third-party clients.

Instead of teaching a model how to construct REST calls (which means writing a system prompt the size of an OpenAPI spec), you run an MCP server alongside the model. The server exports tool definitions; the model picks the right one and calls it. The wire format is JSON-RPC over stdio or SSE.
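Concretely, a tool invocation on the wire is a JSON-RPC 2.0 request naming the tool and its arguments. A simplified sketch (the argument names here are illustrative, not Pipelet's exact schema):

```python
import json

# Simplified MCP tool invocation: a JSON-RPC 2.0 request using the
# "tools/call" method. Argument names are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "start_charging",
        "arguments": {"station_id": "WALLBOX_001", "connector": 1, "id_tag": "RFID_42"},
    },
}
wire = json.dumps(request)  # what actually crosses stdio/SSE
```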

For Pipelet that means:

  • An LLM operating the network never sees raw HTTP. It sees list_stations, start_charging("WALLBOX_001", connector=1, id_tag="RFID_42"), etc.
  • Every tool call is logged, auditable, and replayable.
  • The same backend powers the REST gateway — there is no second data path to maintain.
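The "typed tool catalog" is a set of JSON-Schema tool definitions the server exports to the model. A sketch of what one such definition might look like (field names follow the MCP tool shape; the exact Pipelet definition may differ):

```python
# Illustrative MCP tool definition for start_charging. The model reads
# this schema and knows which arguments are required and of what type.
start_charging_tool = {
    "name": "start_charging",
    "description": "Remotely start a charging session on a connector.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "station_id": {"type": "string"},
            "connector": {"type": "integer", "minimum": 1},
            "id_tag": {"type": "string"},
        },
        "required": ["station_id", "connector", "id_tag"],
    },
}
```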

Pipelet’s MCP server registers 30 tools across five areas:

| Area | Tools | Examples |
| --- | --- | --- |
| Station monitoring | 6 | list_stations, get_station_details, get_station_liveness, diagnose_station, get_system_health, get_metrics |
| Configuration | 7 | get_station_config, set_station_config, trigger_station_reset, change_station_availability, start_charging, stop_charging, get_firmware_status |
| Billing & sessions | 6 | get_active_sessions, query_billing_sessions, get_billing_summary, get_dashboard_kpis, get_chart_data, export_billing_csv |
| Customers & tokens | 6 | search_customers, get_customer_details, search_tokens, list_locations, list_evses, get_evse_live |
| Load management | 5 | get_load_status, get_load_groups, get_meter_reading, get_load_log, trigger_rebalance |

See the full catalog with descriptions →

A few examples of what that unlocks:

  • A network operator’s chat assistant. “Which stations have been disconnected for more than an hour?” → get_station_liveness with a 3600-second threshold. Done.
  • An autonomous load-balancing agent. Wakes up every 5 minutes, calls get_load_status and get_meter_reading, runs trigger_rebalance when groups drift apart.
  • An on-call assistant. A station is faulting overnight. The agent calls diagnose_station, summarizes the connection stats, and drafts a Slack message explaining the issue.
  • A finance close-out script. End of month: get_billing_summary(year, month), export_billing_csv(...), attach to an email, done.
  • A claude.ai-powered onboarding flow. “Add this list of 50 charge points to load group #12, set them all to AC mode, and verify each one boots within 30 seconds.” The model orchestrates configuration, monitoring, and verification — without you writing a single line of glue code.
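The load-balancing agent above can be sketched in a few lines. The tool calls are stubbed here with assumed payload shapes; in practice each would be an MCP tool invocation:

```python
# Sketch of one wake-up of the load-balancing agent. Payload shapes
# and the drift threshold are assumptions for illustration.
REBALANCE_THRESHOLD_KW = 5.0

def get_load_status():
    # Stub: would invoke the get_load_status MCP tool.
    return {"groups": [{"id": 12, "target_kw": 50.0, "actual_kw": 57.5},
                       {"id": 13, "target_kw": 50.0, "actual_kw": 49.0}]}

def trigger_rebalance(group_id):
    # Stub: would invoke the trigger_rebalance MCP tool.
    return {"group": group_id, "status": "rebalancing"}

def tick():
    """One agent wake-up: rebalance any group that drifted too far."""
    rebalanced = []
    for group in get_load_status()["groups"]:
        drift = abs(group["actual_kw"] - group["target_kw"])
        if drift > REBALANCE_THRESHOLD_KW:
            trigger_rebalance(group["id"])
            rebalanced.append(group["id"])
    return rebalanced
```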
```
┌──────────────────┐   MCP/stdio    ┌──────────────────┐     HTTP     ┌──────────────────┐
│ Claude Desktop   │◄───────────────│ pipelet-mcp      │─────────────►│ cpms-headless    │
│ (or any client)  │                │ server           │              │ (REST gateway)   │
└──────────────────┘                └──────────────────┘              └──────────────────┘
                                                                               │
                                                                               ▼
                                                                      ┌──────────────────┐
                                                                      │ ocpp-broker      │──► physical chargers
                                                                      └──────────────────┘
```

The MCP server is a thin async Python process. It holds no state — every tool call translates into a REST call against the gateway. That means everything you can do via the MCP catalog, you can also do via the REST API, and they share the same auth, the same database, and the same audit log.
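That stateless translation can be pictured as a simple route table. The paths below are hypothetical, not the gateway's real endpoints:

```python
# Minimal sketch of the tool-to-REST translation. Route paths are
# invented for illustration; the real gateway may differ.
ROUTES = {
    "list_stations": ("GET", "/stations"),
    "get_station_details": ("GET", "/stations/{station_id}"),
    "start_charging": ("POST", "/stations/{station_id}/start"),
}

def to_rest(tool_name, arguments):
    """Translate an MCP tool call into (method, path, body)."""
    method, template = ROUTES[tool_name]
    path = template.format(**arguments)
    # Arguments not consumed by the path go into the request body.
    body = {k: v for k, v in arguments.items()
            if "{" + k + "}" not in template}
    return method, path, body
```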

  • Stdio transport requires the client to spawn the server process. Claude Desktop does this for you. For a remote setup, run it as an SSE server.
  • Tools return JSON strings. The model parses them into structured data. Large station lists (>1000 entries) will eat tokens — use the filtered tools (e.g. get_station_liveness with at_risk_only=true) when the model only needs a subset.
  • No streaming. Tool calls are request/response. For real-time event watching, use the Headless webhooks or the Chargersim SSE stream.
  • MCP is still a young protocol. Tooling around it is improving fast, but expect occasional breaking changes as the spec evolves.
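Since tool results arrive as JSON strings, a client-side filter is often worth it before handing data to the model. A sketch, with an assumed liveness payload shape:

```python
import json

# Tools return JSON strings; parse, then keep only what the model needs.
# The payload shape here is an assumption for illustration.
raw = '{"stations": [{"id": "WALLBOX_001", "last_seen_s": 4200}, {"id": "WALLBOX_002", "last_seen_s": 30}]}'
data = json.loads(raw)
stale = [s["id"] for s in data["stations"] if s["last_seen_s"] > 3600]
```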