
# Audit Log & Transcripts

Every conversation, every tool call, every token in and out of every LLM — recorded, timestamped, and queryable. LIT doesn't summarize what happened; it keeps the full record.

## Full LLM Transcripts

Every message sent to an LLM and every response received is stored verbatim. Not a summary, not a digest — the raw transcript, with:

- **Timestamps** on every message (millisecond precision)
- **Model identity** — which model was used for each response
- **Token counts** — prompt tokens, completion tokens, total
- **Measured response time** — wall-clock latency for every LLM call
- **Tool call records** — every tool invoked, with inputs and outputs

This creates a complete audit trail of every AI decision in your system.
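As a rough sketch, the per-message fields listed above could be modeled like this. All names here are illustrative assumptions, not LIT's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCallRecord:
    """Hypothetical record of one tool invocation (illustrative names)."""
    name: str      # tool that was invoked
    inputs: dict   # arguments passed to the tool
    outputs: dict  # what the tool returned

@dataclass
class TranscriptMessage:
    """Hypothetical shape of one transcript entry (illustrative names)."""
    timestamp_ms: int        # millisecond-precision timestamp
    model: str               # which model produced this response
    prompt_tokens: int       # tokens in
    completion_tokens: int   # tokens out
    latency_ms: float        # measured wall-clock response time
    content: str             # the verbatim message text
    tool_calls: list = field(default_factory=list)  # ToolCallRecord entries

    @property
    def total_tokens(self) -> int:
        # Total = prompt + completion, as in the list above
        return self.prompt_tokens + self.completion_tokens
```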

## Why This Matters

Most AI tools are black boxes. You see inputs and outputs, but not the reasoning, the tool calls, or the intermediate steps. LIT is the opposite.

When a model makes a surprising recommendation, you can trace exactly what context it had, what tools it called, what it saw, and what it decided. Debugging AI behavior is the same as debugging software — you look at the logs.

For regulated industries (finance, healthcare, legal), full transcripts provide the audit trail required for AI-assisted decisions.

## Querying Transcripts

Channel message history is queryable via the Python SDK:

```python
from lit import channels

# Get a channel
ch = channels.get("volatility-model")

# Iterate all messages
for msg in ch.messages():
    print(msg.timestamp, msg.direction, msg.from_id, msg.content)

# Filter by date range
for msg in ch.messages(start="2025-12-01", end="2025-12-31"):
    print(msg.timestamp, msg.content)

# Export a full transcript
ch.messages().export("transcript-dec-2025.json")
```
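Once a transcript is exported, it can be analyzed with ordinary tools. The sketch below assumes, hypothetically, that the export is a JSON list of message records carrying `prompt_tokens`, `completion_tokens`, and `latency_ms` fields; the actual export format may differ:

```python
import json

def summarize(path):
    """Aggregate token usage and latency from an exported transcript.

    Assumes (hypothetically) a JSON list of per-message records with
    "prompt_tokens", "completion_tokens", and "latency_ms" fields.
    """
    with open(path) as f:
        messages = json.load(f)
    total_tokens = sum(
        m.get("prompt_tokens", 0) + m.get("completion_tokens", 0)
        for m in messages
    )
    avg_latency = (
        sum(m.get("latency_ms", 0.0) for m in messages) / len(messages)
        if messages else 0.0
    )
    return {
        "messages": len(messages),
        "total_tokens": total_tokens,
        "avg_latency_ms": avg_latency,
    }
```

This kind of aggregation is useful for cost accounting or spotting latency regressions across a month of traffic.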

## Safe Mode Audit

When agents run in safe mode, every confirmation request and human decision is logged alongside the agent action. You have a complete record of what the agent wanted to do and what the human approved.
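A paired record of proposed action and human decision might look something like the sketch below. Every field name here is a hypothetical illustration, not LIT's actual safe-mode log format:

```python
# Illustrative safe-mode audit entry: the agent's proposed action
# alongside the human decision. Field names are assumptions.
audit_entry = {
    "timestamp": "2025-12-15T14:03:22.117Z",
    "agent": "volatility-model",
    "proposed_action": {
        "tool": "rebalance_portfolio",
        "inputs": {"target_vol": 0.12},
    },
    "decision": "approved",   # or "denied"
    "decided_by": "j.doe",
}

def approved_actions(entries):
    """Filter a safe-mode audit log down to human-approved actions."""
    return [e for e in entries if e["decision"] == "approved"]
```

With both sides of each confirmation logged, reviewing what the agent wanted versus what a human allowed becomes a simple filter over the audit log.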