Clawdbot vs LitAI: Reading Both Codebases So You Don't Have To#
A feature-by-feature technical comparison based on source code analysis — not marketing, not demos, not GitHub stars.
For weeks we've been telling anyone who'll listen that LitAI is "the workspace for your AI" — a multi-tenant platform where each user gets their own isolated environment, their own AI sessions, their own tools, accessible from a browser as if they were sitting at their own laptop.
Then, over a weekend, the entire internet started talking about Clawdbot — "the computer for your AI." 40k GitHub stars. A Karpathy endorsement. Every tech influencer covering it. Real momentum.
And there's a lot of overlap.
We won't pretend that didn't land. We spent a full day exploring whether we should make Clawdbot a backend for LitAI — embrace and extend, ride the wave. We drafted strategy documents. We started writing integration code. We went to sleep thinking we might be pivoting.
Then we woke up and asked a harder question: is this a strategy, or is this a reaction?
So we did what engineers are supposed to do. We slowed down. We cloned the repo. We read the source code. And we compared it, feature by feature, against what we've built.
What follows is that comparison — where they're genuinely better, where we are, and what the differences mean.
The Methodology#
Both platforms were evaluated on their source code. No installations, no executions — read-only analysis of both codebases. For each overlap area, we identify what each platform actually implements and render a verdict.
The goal is intellectual honesty, not advocacy. We call it like it is — including the areas where Clawdbot has us beat.
1. Chat Protocol and Streaming#
Clawdbot uses a single WebSocket with a typed JSON RPC protocol. From src/gateway/protocol/schema/frames.ts, three frame types:
{type: "req", id, method, params}— client request{type: "res", id, ok, payload|error}— server response{type: "event", event, payload, seq?, stateVersion?}— server push
All typed with TypeBox schemas. The stateVersion field on events enables clean reconnection — a disconnected client can resume from where it left off. 85 RPC methods in server-methods-list.ts covering everything from chat.send and sessions.list to exec.approval.request and cron.list. Native companion apps on iOS, Android, and macOS.
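To make the reconnection claim concrete, here is a minimal sketch (ours, not from the Clawdbot source; the event shape follows the frame schema above) of what stateVersion-based resume implies for a client:

```python
def apply_events(events, last_state_version):
    """Replay only events newer than the client's last seen stateVersion.

    Events carry a monotonically increasing stateVersion (see the event
    frame above); anything at or below the checkpoint is a duplicate.
    """
    applied = []
    for event in sorted(events, key=lambda e: e["stateVersion"]):
        if event["stateVersion"] > last_state_version:
            applied.append(event)
            last_state_version = event["stateVersion"]
    return applied, last_state_version
```

On reconnect, the client reports its checkpoint, the server replays the gap, and duplicates are dropped for free.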
LitAI uses a FastAPI REST API with Server-Sent Events for streaming. A chat message streams like this:
POST /sessions/{session_id}/stream
→ data: {"type": "content", "content": "Here's what I found..."}
→ data: {"type": "content", "content": " in the codebase."}
→ data: {"type": "tool_use", "tool": "Read", "input": {"file_path": "/home/user/app.py"}}
→ data: {"type": "done", "message_id": "abc-123"}
Keepalive heartbeats every 15 seconds during idle. A separate Socket.IO layer pushes real-time events to the Angular frontend. Additional WebSocket endpoints for telemetry, heartbeat monitoring, file watching, and terminal access. Over 100 REST endpoints covering sessions, agents, backends, conversations, organizations, system prompt management (with version history, diff, and restore), file operations, widgets, and stream state management with reconnection.
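The event shapes above are enough to write a client. A parsing-side sketch (ours, stdlib only; a real client would read these lines from the HTTP response body):

```python
import json

def parse_sse_stream(lines):
    """Fold 'data: {...}' lines from POST /sessions/{id}/stream into the
    full response text plus the tools invoked along the way."""
    content, tools = [], []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keepalive comments and blank separators
        event = json.loads(line[len("data: "):])
        if event["type"] == "content":
            content.append(event["content"])
        elif event["type"] == "tool_use":
            tools.append(event["tool"])
        elif event["type"] == "done":
            break
    return "".join(content), tools
```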
Verdict: Clawdbot wins. A single WebSocket with typed frames, sequence numbers, and state versioning is a cleaner design than our REST + SSE + Socket.IO approach. Their protocol was designed as a cohesive system; ours grew organically. If we were designing our transport layer from scratch today, we'd design something closer to theirs.
Clawdbot wins.
2. Session Management#
Clawdbot ties sessions to channels (Discord, Slack, WhatsApp) or direct gateway connections. sessions.list, sessions.get, sessions.delete RPC methods. Session state is maintained in-memory with optional persistence. Isolation is per-channel, not per-user in the multi-tenant sense.
LitAI sessions belong to agents and are scoped to users. Each session is a dataclass with agent binding, message history, and metadata:
@dataclass
class Session:
    id: str
    name: Optional[str]
    backends: List[str]           # Which AI backends this session can use
    agent_id: str = "default"     # Agent this session belongs to
    messages: List[Message] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)
Session files live in the user's home directory (~/.config/lit/agents/{agent_id}/sessions/{id}.json), read and written via SSH as that user. Batch operations minimize overhead — ssh_read_files_batch() reads all session files in a single SSH call using base64-encoded output with custom delimiters. Each session gets its own Claude subprocess. Session compaction handles long conversations.
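To illustrate the batching trick (our reconstruction; the actual delimiter and helper names inside ssh_read_files_batch() may differ), the single SSH call returns base64 chunks separated by a sentinel, which get split apart locally:

```python
import base64

DELIM = "----LIT-FILE-BOUNDARY----"  # illustrative sentinel, not the real one

def split_batch_output(raw: bytes, paths: list) -> dict:
    """Split one SSH call's concatenated base64 output into per-file bytes."""
    chunks = raw.decode().split(DELIM)
    result = {}
    for path, chunk in zip(paths, chunks):
        b64 = "".join(chunk.split())  # drop the newlines base64 inserts
        result[path] = base64.b64decode(b64)
    return result
```

One round trip instead of N is what keeps per-user SSH viable when listing dozens of sessions.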
Verdict: LitAI's session model is more capable because it solves a harder problem. Session-agent binding, per-user storage via SSH, per-session subprocess isolation, and compaction are production multi-tenant features. Clawdbot's model is simpler because it's single-user.
LitAI wins.
3. Multi-Model Routing#
Clawdbot is Claude-focused. Model selection is per-session via configuration. There's no pluggable multi-backend routing architecture in the source — supporting other models would require writing a new agent backend.
LitAI has a MessageRouter with a pluggable Backend abstract class. Adding a new AI provider means implementing a handful of methods:
# backends/claude_cli.py — spawns Claude CLI subprocess per user via SSH
# backends/ollama.py — Ollama SDK for local models
# backends/gemini.py — Google Gen AI SDK
# backends/codex.py — OpenAI Codex integration
A session declares which backends it can use (backends: List[str]). A message can override the backend at send time. The API exposes health checks, usage reporting, and dynamic configuration per backend — GET /backends, GET /backends/{id}/status, GET /backends/{id}/usage, POST /backends/{id}/configure.
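A condensed sketch of the routing rule (names simplified; this is our illustration of the behavior described above, not the real MessageRouter):

```python
class MessageRouter:
    """Resolve which backend handles a message: a per-message override
    wins; otherwise the session's first declared backend is used."""

    def __init__(self):
        self.backends = {}

    def register(self, backend_id, backend):
        self.backends[backend_id] = backend

    def resolve(self, session_backends, override=None):
        chosen = override or session_backends[0]
        if chosen not in self.backends:
            raise KeyError(f"unknown backend: {chosen}")
        return self.backends[chosen]
```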
Verdict: This was designed for multi-model from day one. Four backends with a clean abstraction vs Claude-only.
That said, with 40k stars and an active community, Clawdbot's "Claude only" limitation could evaporate fast. Adding a single model backend is a community contribution; designing multi-model routing — where sessions declare backends, messages override at send time, and health checks span providers — is a core architecture decision. We'll see which happens.
LitAI wins clearly — for now.
4. Tool Ecosystem#
Clawdbot has a mature tool ecosystem: a skills platform with community registry (ClawdHub), browser automation via Chrome DevTools Protocol, an exec approval workflow for shell commands, and native integrations across 12+ messaging channels (covered separately in section 6). The skills marketplace is a real differentiator — community-contributed tools that extend the platform without core changes.
LitAI has an MCP client with full lifecycle management (start, initialize, discover, execute, shutdown), tool integration into the AI conversation loop, file operations via commander endpoints (list, read, write, mkdir, delete, rename, upload, download), and a Jobs widget with a cron expression editor. Claude also operates the underlying OS as the authenticated user — meaning any scheduling or system administration task is achievable through conversation, not limited to what the platform explicitly exposes.
Verdict: Clawdbot wins. The skills marketplace, browser automation, and community ecosystem give them a broader tool surface. Our MCP client and commander endpoints are solid, but we don't have a community registry or browser control.
Clawdbot wins.
5. Execution Security#
This is where the architectures diverge most sharply.
Clawdbot has three layers of security. From src/infra/exec-approvals.ts:
type ExecSecurity = "deny" | "allowlist" | "full" // "full" = unrestricted shell
type ExecAsk = "off" | "on-miss" | "always"
An approval workflow lets users consent to commands. Docker sandboxing (src/agents/sandbox/config.ts) is thorough when enabled — read-only root, --cap-drop=ALL, no network, tmpfs, memory/CPU/PID limits, seccomp, AppArmor. But the mode setting reveals the default posture: the sandbox defaults to off. Mode "non-main" only sandboxes channel/group sessions — the main session the user interacts with runs unsandboxed unless explicitly set to "all". The security is opt-in.
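How the two settings compose is easiest to see as a decision function. This is our reconstruction of the policy the types imply, not Clawdbot's code:

```python
def decide(security: str, ask: str, command: str, allowlist: list) -> str:
    """Return 'deny', 'ask', or 'run' for a proposed shell command."""
    if security == "deny":
        return "deny"
    allowed = security == "full" or command in allowlist
    if ask == "always":
        return "ask"   # every command needs explicit user consent
    if ask == "on-miss" and not allowed:
        return "ask"   # consent only for commands outside the allowlist
    return "run" if allowed else "deny"
```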
LitAI doesn't have a sandbox toggle because isolation isn't a feature — it's the architecture. From services/ssh_exec.py:
async def ssh_exec(
    username: str,
    command: list[str],
    hostname: str = 'localhost',
    ...
) -> Tuple[int, bytes, bytes]:
    # Check if we can bypass SSH for local user
    if _can_bypass_ssh(username, hostname):
        return await _local_exec(command, working_dir, env, timeout)
    # Need SSH - verify trust is established
    key_path = get_ssh_key_path(username, hostname)
    if not key_path.exists():
        raise TrustNotEstablishedError(...)
Every command runs as the authenticated OS user via SSH. There is no "off" mode. The kernel enforces it. Trust must be explicitly established (SSH key at /etc/lit/keys/{username}-{hostname}) before any cross-user execution is possible. User A cannot read User B's files, not because a configuration flag prevents it, but because Unix permissions do.
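For illustration, the cross-user hop reduces to wrapping the command for the target account. build_ssh_command is our sketch, not LitAI's actual helper:

```python
import shlex
from pathlib import Path

def build_ssh_command(username: str, hostname: str, key_path: Path,
                      command: list) -> list:
    """Wrap a command so it runs as the target Unix user over SSH.

    shlex.quote stops the remote shell from re-splitting arguments."""
    remote = " ".join(shlex.quote(arg) for arg in command)
    return ["ssh", "-i", str(key_path), f"{username}@{hostname}", remote]
```

From here the enforcement is the kernel's: whatever the resulting process can touch is exactly what that user can touch.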
This distinction matters most for prompt injection. Clawdbot's defense (src/security/external-content.ts) uses 13 regex patterns to detect injection phrases and wraps external content in LLM-instruction boundaries. The system prompt then instructs the model to ignore commands embedded in external content. That is a suggestion to the model, not a security boundary: if the model decides to follow an injected instruction, nothing prevents it. Reddit reports of successful prompt injection attacks against Clawdbot are consistent with this design.
LitAI's approach means a prompt injection can only do what the authenticated user can already do. The blast radius is bounded by Unix permissions, not by whether the LLM obeys a warning.
Verdict: Different design philosophies with different consequences. Clawdbot's security is opt-in — powerful when configured, absent by default. LitAI's security is the architecture — you can't turn it off because there's nothing to turn off. For a single user on their own machine, Clawdbot's approach is fine. For multiple users sharing infrastructure, only one of these models is viable.
LitAI wins.
6. Channels and External Input#
Clawdbot has 12+ messaging channel integrations: WhatsApp, Telegram, Slack, Discord, SMS, and more. Each channel maps to a session. Messages arrive from external platforms, get processed by the AI, and responses are sent back. This is a major ecosystem advantage — users can interact with their AI from wherever they already are.
LitAI supports web and mobile (via responsive web) input by design. But we've also built a heartbeat system — a stimulus-response architecture where agents wake on a configurable interval and check external sources:
# stimuli/mattermost_check.py — Monitor a Mattermost channel
# stimuli/jira_tickets.py — Watch for Jira ticket changes
# stimuli/git_commits.py — Monitor git repositories
# stimuli/inbox_messages.py — Check agent inbox for messages
Each stimulus is a plugin with a typed parameter schema (Pydantic models), configurable per-agent. The HeartbeatService orchestrates them — it manages a MattermostManager, JiraManager, FileSystemMonitor, and a StimulusRegistry for dynamically loaded plugins. When a stimulus fires, the agent wakes, processes the input through Claude, and can respond via Mattermost or take action.
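A stripped-down sketch of the plugin contract (dataclasses here to stay self-contained; the real parameter schemas are Pydantic models, and the class names below are ours):

```python
from dataclasses import dataclass, field

@dataclass
class StimulusResult:
    fired: bool
    payload: dict = field(default_factory=dict)

class Stimulus:
    """One plugin: check() runs on each heartbeat tick."""
    name = "base"

    def check(self) -> StimulusResult:
        raise NotImplementedError

class InboxStimulus(Stimulus):
    """Fires when the agent's inbox has unread messages."""
    name = "inbox_messages"

    def __init__(self, inbox: list):
        self.inbox = inbox

    def check(self) -> StimulusResult:
        if self.inbox:
            return StimulusResult(True, {"messages": list(self.inbox)})
        return StimulusResult(False)

def heartbeat_tick(stimuli: list) -> list:
    """Wake the agent once per stimulus that fired this interval."""
    return [(s.name, r.payload) for s in stimuli if (r := s.check()).fired]
```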
This is experimental. We haven't shipped it broadly because the same question that haunts Clawdbot's channel integrations haunts ours: when external content enters the system through a messaging channel, the prompt injection surface expands. We'd rather get the security story right than ship 12 channels fast.
Verdict: 12+ channels shipping today vs our experimental stimulus system. Not close.
Clawdbot wins.
7. Authentication and User Management#
Clawdbot supports auth tokens for WebSocket connections. The connect handshake includes device identity and auth. But it's a single-operator model — the person running the gateway is the user. The security audit tool (src/security/audit.ts, 900+ lines) checks auth configuration, but the platform itself doesn't provide multi-tenant auth, SSO, RBAC, or team scoping.
LitAI has Keycloak JWT validation baked in. The request flow:
Browser → Keycloak SSO → JWT token → LitAI API
    ↓
Extract username, roles, email
    ↓
SSH exec as that Unix user
    ↓
Command runs in user's home directory,
with user's file permissions
Every API request carries a JWT. Every command executes as the authenticated OS user. Agent creation, session storage, memory directories — all scoped per user. Organizations with CRUD and sharing between users. Audit trails as a byproduct: SSH logs, per-user transcripts, team-scoped resources. Not optional features — inevitable consequences of the architecture.
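The claim-extraction step, in miniature. Signature verification against Keycloak's public key is elided here (the real flow verifies before trusting any claim); the claim names follow Keycloak's defaults:

```python
import base64
import json

def extract_identity(jwt_token: str):
    """Pull the Unix username and roles out of a Keycloak JWT payload.

    NOTE: no signature check in this sketch; never do this on an
    unverified token in production.
    """
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    roles = claims.get("realm_access", {}).get("roles", [])
    return claims["preferred_username"], roles
```

The username that comes out of this step is the same string handed to ssh_exec, which is what ties the web identity to the OS identity.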
Verdict: Not close. Keycloak SSO, RBAC, per-user OS-level isolation, organization management — these are enterprise features that would require a fundamental redesign of Clawdbot to add. Clawdbot wasn't designed for this and doesn't pretend to be.
LitAI wins decisively.
8. Memory and Persistence#
Clawdbot stores session history in-memory with optional persistence. It relies on Claude's conversation context and the user's filesystem. No structured memory system beyond conversation history.
LitAI has a ConversationStore with typed message routing between participants:
class MessageType(Enum):
AGENT_TO_AGENT = "agent_to_agent"
AGENT_TO_USER = "agent_to_user"
USER_TO_AGENT = "user_to_agent"
Participant indexing, message search, statistics, and automatic cleanup (7-day retention for inactive conversations). On top of that, per-user, per-model memory directories (~/.memory/models/claude/) with AI-curated content — the system prompt tells the AI to read its own memory at session start and update it throughout the conversation. We've been thinking about AI memory since mid-2025 — LitAI's memory architecture grew from that work. A context-relaying architecture means each session starts with curated context from previous interactions, enabling persistent intelligence across sessions.
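A minimal sketch of the participant indexing (reusing the MessageType enum above; the method names and storage layout are ours):

```python
from collections import defaultdict
from enum import Enum

class MessageType(Enum):
    AGENT_TO_AGENT = "agent_to_agent"
    AGENT_TO_USER = "agent_to_user"
    USER_TO_AGENT = "user_to_agent"

class ConversationStore:
    """Messages stored once, indexed by every participant for fast lookup."""

    def __init__(self):
        self.messages = []
        self.by_participant = defaultdict(list)

    def add(self, sender: str, recipient: str, mtype: MessageType, text: str):
        idx = len(self.messages)
        self.messages.append((sender, recipient, mtype, text))
        self.by_participant[sender].append(idx)
        self.by_participant[recipient].append(idx)

    def for_participant(self, name: str) -> list:
        return [self.messages[i] for i in self.by_participant[name]]
```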
Verdict: Per-user scoping, per-model directories, AI curation, and context-relaying vs filesystem and conversation history.
LitAI wins.
The Gaps — What They Have, What We Have#
What Clawdbot Has That We Don't#
- Wizard onboarding protocol — Structured step-by-step onboarding over WebSocket. A good pattern we plan to adopt.
- Native mobile/desktop apps — iOS, Android, macOS. We're browser-only.
- Skills marketplace — ClawdHub. We have no community ecosystem.
- Browser automation — Chrome DevTools Protocol. We have no browser control.
- Community momentum — 40k stars. (join our Discord!)
What LitAI Has That Clawdbot Doesn't#
- Multi-tenant isolation via the OS — SSH-as-user execution, per-user Unix accounts, kernel-enforced permissions. This is the architecture, not a toggle.
- Keycloak SSO with RBAC — Enterprise auth from day one.
- Multi-backend routing — Four production backends with a pluggable abstraction.
- Per-user memory architecture — Scoped, curated, persistent across sessions.
- Organization management — Teams, sharing, scoped resources.
- Agent orchestration — Multi-agent registry, agent-to-agent messaging, heartbeat management, delegation tracking. The foundation for multi-agent collaboration.
- 100+ REST API endpoints — A full platform API, not just a chat protocol.
Summary#
| Capability | Clawdbot | LitAI | Edge |
|---|---|---|---|
| Chat protocol | WebSocket JSON RPC, typed frames, state versioning | REST + SSE + Socket.IO | Clawdbot |
| Session management | Per-channel, single-user | Per-user, per-agent, SSH-isolated, compactable | LitAI |
| Multi-model routing | Claude only | Claude, Ollama, Gemini, Codex (pluggable) | LitAI |
| Tool ecosystem | Skills platform, browser, ClawdHub registry | MCP client, file commander, jobs UI | Clawdbot |
| Execution security | Docker sandbox (defaults off), exec approvals | SSH-as-user (always on, kernel-enforced) | LitAI |
| Channels | 12+ messaging platforms, shipping | Stimulus/heartbeat system, experimental | Clawdbot |
| Authentication | Token-based, single operator | Keycloak SSO, RBAC, per-user Unix accounts | LitAI |
| Memory / persistence | Filesystem + conversation history | Per-user/model memory, context-relaying, conversation store | LitAI |
| Onboarding UX | Wizard protocol (structured, typed) | None | Clawdbot |
| Native apps | iOS, Android, macOS | Browser only | Clawdbot |
| Community | 40k stars, ClawdHub skills registry | Small team | Clawdbot |
| Multi-tenant | Not designed for it | Architectural foundation | LitAI |
| Agent orchestration | Single-agent | Multi-agent registry, messaging, heartbeat | LitAI |
LitAI: 7 · Clawdbot: 6
What We're Taking Away#
Clawdbot is well-engineered. Hats off. They've built an excellent single-user AI development environment with impressive ecosystem reach.
We hold that the cost of software is going to zero. This comparison is evidence — two teams independently built overlapping feature sets in just a year.
LitAI wasn't designed as an AI chat platform. It was built over 11 years as deep learning infrastructure — multi-tenant by necessity, because real data science means multiple users, sensitive datasets, and shared compute. The architectural properties that differentiate us — OS isolation, enterprise auth, per-user scoping — aren't features we added for this comparison. They're consequences of the problem we were already solving. We discovered in 2025 that those properties have value far beyond deep learning.
We're opening LitAI to early adopters. If any of this resonated — come say hi.