
AI as an actor, not the controller

Loop Engine treats AI the same way it treats humans and automation: as an attributed actor constrained by transitions and guards.

What AI can do

  • inspect loop state in your application layer
  • recommend transitions with evidence
  • execute transitions where allowedActors includes ai-agent

What AI cannot do

  • bypass allowedActors
  • bypass hard guards
  • modify loop definitions at runtime
  • continue executing once circuit-breaker constraints trip
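The allowedActors constraint reduces to a plain membership check that runs before any guard logic. A minimal sketch of that check — the type names here are illustrative, not Loop Engine exports:

```typescript
// Illustrative only: a minimal allowedActors check. `ActorType` and
// `TransitionDef` are hypothetical names, not Loop Engine exports.
type ActorType = 'human' | 'ai-agent' | 'automation'

interface TransitionDef {
  id: string
  allowedActors: ActorType[]
}

// An actor type absent from allowedActors is rejected outright;
// there is no override path for AI actors.
function isActorAllowed(actor: ActorType, transition: TransitionDef): boolean {
  return transition.allowedActors.includes(actor)
}

const approve: TransitionDef = {
  id: 'approve_replenishment',
  allowedActors: ['human']
}

console.log(isActorAllowed('ai-agent', approve)) // false
console.log(isActorAllowed('human', approve))    // true
```

Because the check is data-driven, tightening who may act on a transition is a definition change, not a code change.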

AIAgentActor shape

```ts
interface AIAgentActor extends ActorRef {
  type: "ai-agent"
  agentId: string
  gatewaySessionId: string
  recommendedBy?: string
}
```

AI submission flow

```ts
import { actorId, transitionId } from '@loop-engine/core'
import { canActorExecuteTransition, buildActorEvidence, type AIAgentActor } from '@loop-engine/actors'

const agent: AIAgentActor = {
  type: 'ai-agent',
  id: actorId('agent:demand-forecaster'),
  agentId: 'claude-3-5-sonnet',
  gatewaySessionId: session.id
}

const auth = canActorExecuteTransition(agent, transition)
if (auth.authorized) {
  const evidence = buildActorEvidence(agent, {
    ai_confidence: 0.94,
    ai_reasoning: 'Stock level below reorder point; lead time elevated',
    recommended_qty: 500
  })

  await engine.transition({
    aggregateId,
    transitionId: transitionId('trigger_po'),
    actor: { type: agent.type, id: String(agent.id) },
    evidence
  })
}
```
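When `auth.authorized` is false, the transition is never submitted. This page documents only the `authorized` flag; the `reason` field in the sketch below is an assumption for illustration:

```typescript
// Deny-branch sketch. `reason` is an assumed field for illustration;
// this page only documents `auth.authorized`.
interface AuthResult {
  authorized: boolean
  reason?: string
}

function describeDenial(auth: AuthResult): string {
  if (auth.authorized) return 'authorized'
  // Surface why the transition was refused instead of failing silently.
  return `denied: ${auth.reason ?? 'actor not permitted for this transition'}`
}

console.log(describeDenial({ authorized: false, reason: 'ai-agent not in allowedActors' }))
// → denied: ai-agent not in allowedActors
```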

Provider implementations

The actor contract stays constant across providers. Only SDK wiring changes.

```ts
// Anthropic path (Claude)
const claudeActor: AIAgentActor = {
  type: "ai-agent",
  id: actorId("agent:demand-forecaster"),
  agentId: "claude-sonnet-4-20250514",
  gatewaySessionId: "claude-session-123"
}

// OpenAI path (GPT-4o)
const openAiActor: AIAgentActor = {
  type: "ai-agent",
  id: actorId("agent:demand-forecaster"),
  agentId: "gpt-4o",
  gatewaySessionId: "openai-session-123"
}
```

Anthropic and OpenAI both submit through the same transition path and evidence schema:

  • ai_confidence
  • ai_reasoning
  • recommended_action
  • recommended_qty
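The shared schema can be written down as a single type. A sketch using this page's field names — the interface name itself is hypothetical, not a package export:

```typescript
// Illustrative type for the shared evidence schema; the field names come
// from this page, the interface name `AIEvidence` is hypothetical.
interface AIEvidence {
  ai_confidence: number      // 0..1
  ai_reasoning: string
  recommended_action?: string
  recommended_qty?: number
}

// The same shape validates regardless of which provider produced it.
const fromAnyProvider: AIEvidence = {
  ai_confidence: 0.94,
  ai_reasoning: 'Stock level below reorder point; lead time elevated',
  recommended_action: 'trigger_po',
  recommended_qty: 500
}

console.log(fromAnyProvider.recommended_qty) // 500
```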

Safety constraints

  • An ai_confidence of 0.58 on recommend_replenishment returns guard_failed and holds the state at AI_ANALYSIS.
  • An ai_confidence of 0.82 passes the confidence_threshold guard and advances the loop to PENDING_BUYER_APPROVAL.
  • AI attempts on approve_replenishment are rejected when the transition's allowedActors is ["human"].
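The confidence behavior above implies a hard cutoff somewhere between 0.58 and 0.82. A sketch assuming a threshold of 0.75 — the real value lives in the loop definition, not on this page:

```typescript
// confidence_threshold sketch. The 0.75 default is an assumption for
// illustration; this page only shows that 0.58 fails and 0.82 passes.
type GuardResult =
  | { ok: true; next: 'PENDING_BUYER_APPROVAL' }
  | { ok: false; code: 'guard_failed'; holdAt: 'AI_ANALYSIS' }

function confidenceThreshold(aiConfidence: number, threshold = 0.75): GuardResult {
  return aiConfidence >= threshold
    ? { ok: true, next: 'PENDING_BUYER_APPROVAL' }
    : { ok: false, code: 'guard_failed', holdAt: 'AI_ANALYSIS' }
}

console.log(confidenceThreshold(0.58)) // { ok: false, code: 'guard_failed', holdAt: 'AI_ANALYSIS' }
console.log(confidenceThreshold(0.82)) // { ok: true, next: 'PENDING_BUYER_APPROVAL' }
```

Because the guard is hard, a low-confidence recommendation is not downgraded or queued for retry by the AI itself; the loop simply holds until new evidence arrives or a permitted actor intervenes.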

Deep-dive example

The full dual-provider walkthrough is in /docs/examples/ai-replenishment, with source links for both provider adapters.