Honcho provides message persistence for theory-of-mind (ToM) modeling in YouLab.

Honcho captures all chat messages for long-term analysis:

  • User messages - What students say
  • Agent responses - What the tutor replies
  • Session context - Chat IDs, titles, agent types

This enables future ToM features like:

  • Student behavior modeling
  • Learning pattern analysis
  • Personalized recommendations

Chat Flow:

User Message ──► HTTP Service ──► Letta Server
                      │
                      ▼
                 HonchoClient
              (fire-and-forget)
                      │
                      ▼
                Honcho Service
             (message persistence)

| Concept   | YouLab Mapping     | Example                  |
|-----------|--------------------|--------------------------|
| Workspace | Application        | youlab                   |
| Peer      | Message sender     | student_{user_id}, tutor |
| Session   | Chat thread        | chat_{chat_id}           |
| Message   | Individual message | User or agent content    |

Workspace: "youlab"
├── Peer: "student_user123"
│   └── Messages from this student
├── Peer: "student_user456"
│   └── Messages from this student
├── Peer: "tutor"
│   └── All agent responses
└── Session: "chat_abc123"
    └── Messages in this chat thread
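
For illustration only (these helpers are hypothetical, not part of HonchoClient), the mapping above boils down to:

def student_peer_id(user_id: str) -> str:
    # Each student is a separate Honcho peer
    return f"student_{user_id}"

def chat_session_id(chat_id: str) -> str:
    # Each chat thread is a Honcho session
    return f"chat_{chat_id}"

TUTOR_PEER_ID = "tutor"  # all agent responses are attributed to a single tutor peer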

Location: src/youlab_server/honcho/client.py

from youlab_server.honcho import HonchoClient

client = HonchoClient(
    workspace_id="youlab",
    api_key=None,         # Required for production
    environment="demo",   # demo, local, or production
)

The Honcho SDK client is lazily initialized on first use:

@property
def client(self) -> Honcho | None:
    if self._client is None and not self._initialized:
        self._initialized = True
        # Initialize Honcho SDK...
    return self._client

If initialization fails (network error, invalid credentials), client returns None and persistence is silently skipped.
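
A rough sketch of the resulting behavior, assuming the persist methods guard on the lazy property (the helper name and body below are illustrative, not the actual implementation):

async def _persist_if_available(self, payload: dict) -> None:
    # Illustrative guard mirroring the "silently skipped" behavior described above
    if self.client is None:
        return  # SDK never initialized, or initialization failed
    # ... hand the payload to the Honcho SDK here ...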

Persist a user’s message:

await client.persist_user_message(
    user_id="user123",
    chat_id="chat456",
    message="Help me brainstorm essay topics",
    chat_title="Essay Brainstorming",
    agent_type="tutor",
)

Persist an agent’s response:

await client.persist_agent_message(
    user_id="user123",  # Which student this was for
    chat_id="chat456",
    message="Great! Let's explore some topics...",
    chat_title="Essay Brainstorming",
    agent_type="tutor",
)

Verify Honcho is reachable:

if client.check_connection():
    print("Honcho is available")

Query Honcho for insights about a student (theory-of-mind):

from youlab_server.honcho.client import SessionScope

response = await client.query_dialectic(
    user_id="user123",
    question="What learning style works best for this student?",
    session_scope=SessionScope.ALL,
    recent_limit=5,
)
if response:
    print(response.insight)  # "This student prefers hands-on examples..."

| Parameter     | Type         | Default  | Description                          |
|---------------|--------------|----------|--------------------------------------|
| user_id       | string       | Required | Student identifier                   |
| question      | string       | Required | Natural language question            |
| session_scope | SessionScope | ALL      | Which sessions to include            |
| session_id    | string       | None     | Specific session (reserved)          |
| recent_limit  | int          | 5        | Number of recent sessions (reserved) |

SessionScope enum:

| Value    | Description                 |
|----------|-----------------------------|
| ALL      | All sessions for this user  |
| RECENT   | Last N sessions             |
| CURRENT  | Current/active session only |
| SPECIFIC | Explicit session ID         |
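
A minimal sketch of how the enum could be declared; only the member names come from the table above, the string values are assumptions:

from enum import Enum

class SessionScope(str, Enum):
    ALL = "all"            # every session for this user
    RECENT = "recent"      # last N sessions
    CURRENT = "current"    # the current/active session only
    SPECIFIC = "specific"  # a single, explicitly named session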

Returns: DialecticResponse or None if unavailable.

from dataclasses import dataclass

@dataclass
class DialecticResponse:
    insight: str                 # Honcho's analysis
    session_scope: SessionScope
    query: str                   # Original question

Location: src/youlab_server/honcho/client.py:310-363

Messages are persisted asynchronously without blocking the chat response:

from youlab_server.honcho.client import create_persist_task

# In chat endpoint - doesn't block the response
create_persist_task(
    honcho_client=honcho,
    user_id="user123",
    chat_id="chat456",
    message="User's message",
    is_user=True,
    chat_title="My Chat",
    agent_type="tutor",
)

Failure handling is deliberately forgiving (see the sketch after this list):

  • If honcho_client is None, persistence is skipped
  • If chat_id is empty, persistence is skipped
  • If Honcho is unreachable, errors are logged but not raised
  • Chat functionality continues regardless of Honcho status
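
A rough sketch of what such a fire-and-forget helper could look like, assuming it wraps asyncio.create_task around the persist methods shown earlier (error handling simplified; this is not the actual implementation):

import asyncio
import logging

log = logging.getLogger(__name__)

def create_persist_task(honcho_client, user_id, chat_id, message, is_user,
                        chat_title=None, agent_type="tutor"):
    # Skip silently when persistence is disabled or the chat has no ID yet
    if honcho_client is None or not chat_id:
        return

    async def _persist() -> None:
        try:
            persist = (honcho_client.persist_user_message if is_user
                       else honcho_client.persist_agent_message)
            await persist(user_id=user_id, chat_id=chat_id, message=message,
                          chat_title=chat_title, agent_type=agent_type)
        except Exception:
            # Honcho being unreachable must never break the chat flow
            log.exception("honcho_persist_failed")

    # Fire and forget: the caller does not await the task
    asyncio.create_task(_persist())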

Location: src/youlab_server/server/main.py

Honcho is initialized during service startup:

@asynccontextmanager
async def lifespan(app: FastAPI):
    # ... other initialization ...
    if settings.honcho_enabled:
        app.state.honcho_client = HonchoClient(
            workspace_id=settings.honcho_workspace_id,
            api_key=settings.honcho_api_key,
            environment=settings.honcho_environment,
        )
        honcho_ok = app.state.honcho_client.check_connection()
        log.info("honcho_initialized", connected=honcho_ok)
    else:
        app.state.honcho_client = None
        log.info("honcho_disabled")

The /health endpoint reports Honcho status:

{
  "status": "ok",
  "letta_connected": true,
  "honcho_connected": true,
  "version": "0.1.0"
}
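
A sketch of how the handler could derive the honcho_connected field, assuming it reads the client stored on app.state during startup (the letta_connected check is a placeholder here):

from fastapi import FastAPI

app = FastAPI()  # in practice, the existing app with the lifespan above

@app.get("/health")
async def health() -> dict:
    honcho = getattr(app.state, "honcho_client", None)
    return {
        "status": "ok",
        "letta_connected": True,  # placeholder; the real handler checks the Letta server
        "honcho_connected": bool(honcho and honcho.check_connection()),
        "version": "0.1.0",
    }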

Both /chat and /chat/stream persist messages, as sketched after this list:

  1. User message - Persisted before sending to Letta
  2. Agent response - Persisted after receiving from Letta
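
An illustrative ordering inside a chat endpoint; ChatRequest and send_to_letta are placeholders, not names from the codebase:

from fastapi import FastAPI
from pydantic import BaseModel

from youlab_server.honcho.client import create_persist_task

app = FastAPI()

class ChatRequest(BaseModel):
    user_id: str
    chat_id: str
    message: str

async def send_to_letta(request: ChatRequest) -> str:
    ...  # placeholder for the actual Letta round trip

@app.post("/chat")
async def chat(request: ChatRequest) -> dict:
    honcho = getattr(app.state, "honcho_client", None)

    # 1. Persist the user's message before sending it to Letta
    create_persist_task(honcho_client=honcho, user_id=request.user_id,
                        chat_id=request.chat_id, message=request.message, is_user=True)

    # 2. Ask the Letta server for the tutor's reply
    reply = await send_to_letta(request)

    # 3. Persist the agent's response after Letta answers
    create_persist_task(honcho_client=honcho, user_id=request.user_id,
                        chat_id=request.chat_id, message=reply, is_user=False)

    return {"reply": reply}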

| Variable                           | Default | Description          |
|------------------------------------|---------|----------------------|
| YOULAB_SERVICE_HONCHO_ENABLED      | true    | Enable persistence   |
| YOULAB_SERVICE_HONCHO_WORKSPACE_ID | youlab  | Workspace ID         |
| YOULAB_SERVICE_HONCHO_API_KEY      | null    | API key (production) |
| YOULAB_SERVICE_HONCHO_ENVIRONMENT  | demo    | Environment          |

| Environment | Use Case               | API Key      |
|-------------|------------------------|--------------|
| demo        | Development/testing    | Not required |
| local       | Local Honcho server    | Not required |
| production  | Production deployment  | Required     |
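
For example, a production deployment might set (the key value is a placeholder):

YOULAB_SERVICE_HONCHO_ENABLED=true
YOULAB_SERVICE_HONCHO_WORKSPACE_ID=youlab
YOULAB_SERVICE_HONCHO_API_KEY=<your-honcho-api-key>
YOULAB_SERVICE_HONCHO_ENVIRONMENT=production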

Messages include metadata for context:

metadata = {
    "chat_id": "chat456",
    "agent_type": "tutor",
    "chat_title": "Essay Brainstorming",  # Optional
    "user_id": "user123",                 # Agent messages only
}