Koji Operator

Development Progress — Fine-Tuned Instruction Model for CHO Platform

What is Koji?

Koji is the instruction-tuned operator model that powers CHO's decision-making. It translates natural language tasks into structured actions that control the desktop environment, manage memory, execute research, and communicate with users.

Built on an open-weight foundation model, Koji is fine-tuned specifically for agentic workflows—learning to think step-by-step, verify before acting, and chain multiple operations to complete complex tasks.
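As a sketch of what "structured actions" could look like, here is a minimal illustration; the action schema and field names below are assumptions for illustration, not Koji's actual output format:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical action schema: the fields here are illustrative
# stand-ins, not Koji's real interface.
@dataclass
class Action:
    tool: str        # e.g. "terminal", "memory", "browser"
    command: str     # the concrete operation to perform
    rationale: str   # the step-by-step justification for acting

# A natural-language task ("check my project files") might be
# translated into a short sequence of structured actions:
plan = [
    Action("terminal", "ls ~/projects", "Inspect existing files first"),
    Action("memory", "recall project notes", "Check stored context before acting"),
]

print(json.dumps([asdict(a) for a in plan], indent=2))
```

Emitting actions as structured records rather than free text is what lets the surrounding runtime verify each step before executing it.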

Development Timeline

v0.1.0 (November 2025)

Initial training phase. Single-turn instruction following. Basic command execution without multi-step reasoning.

v0.1.1 (December 2025)

Introduced multi-turn trajectories. Model learns agentic loop: Act → Receive Feedback → Decide Next Step. Natural language prompts.
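The loop above can be sketched in a few lines; the `decide`/`act` split and the stopping rule here are simplifying assumptions, not Koji's implementation:

```python
# Minimal sketch of the agentic loop:
# Act -> Receive Feedback -> Decide Next Step.

def run_agent(decide, act, max_steps=10):
    """Loop until decide() signals completion or the step budget runs out."""
    feedback = None
    for _ in range(max_steps):
        action = decide(feedback)   # model chooses the next step from feedback
        if action is None:          # model judges the task complete
            return feedback
        feedback = act(action)      # environment executes and returns an observation
    return feedback

# Toy example: count up to 3 through the loop.
state = {"n": 0}
result = run_agent(
    decide=lambda fb: "increment" if state["n"] < 3 else None,
    act=lambda a: (state.update(n=state["n"] + 1), state["n"])[1],
)
print(result)  # 3
```

The key property is that each decision is conditioned on the feedback from the previous action, rather than the whole plan being emitted up front.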

v0.1.2 (January 2026)

Production-quality training data with comprehensive scenario coverage. Web development, authentication, deployment, research-first patterns, and multi-file project creation.

Training Approach

Version   Focus
v0.1.0    Basic instruction drills
v0.1.1    Multi-turn, natural language
v0.1.2    Full coverage, research-first
Training Coverage

[Chart: training coverage across v0.1.0, v0.1.1, and v0.1.2]

Behavioral Improvement

Before

User: "Check my files"

→ Generic response, no action

After

User: "Check my files"

→ Lists directory, reports results

Capability Coverage

  • Core system operations (file, memory, terminal)
  • Web development workflows (project scaffolding, components)
  • Research-first patterns (learn before implementing)
  • Multi-file project creation
  • Error recovery and debugging
  • Authentication and database integration
  • Deployment workflows
  • Cross-domain tasks (medical, robotics, science)

Validation

After each training cycle, Koji is tested against real-world scenarios including multi-file projects, research tasks, error recovery, and memory recall to ensure production readiness.
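A validation pass of this kind can be sketched as scenarios paired with checks over the model's transcript; `run_model` below is a hypothetical stand-in for invoking Koji, and the canned responses are illustrative only:

```python
# Hedged sketch of a post-training validation harness: each scenario
# pairs a task prompt with a predicate over the model's output.

def run_model(task):
    # Stand-in: a real harness would call the fine-tuned model here.
    canned = {
        "Check my files": "action: list_directory; result: 3 files found",
        "Recall my last project": "action: memory_recall; result: cho-platform",
    }
    return canned.get(task, "")

scenarios = [
    ("Check my files", lambda out: "list_directory" in out),
    ("Recall my last project", lambda out: "memory_recall" in out),
]

failures = [task for task, check in scenarios if not check(run_model(task))]
print("PASS" if not failures else f"FAIL: {failures}")  # PASS
```

Checking for a concrete action in the transcript, rather than judging the prose of the reply, is what distinguishes "took the action" from the generic-response failure mode shown earlier.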

Roadmap

Phase     Focus
Current   Production training, full coverage
Next      Real-world fine-tuning from feedback
Future    Extended context, longer memory