VII. Smart Routing
Every task has an optimal workflow. Smart routing finds it.
The Problem
Without Smart Routing
- Users guess which workflow to use
- 30 minutes wasted on wrong approach
- Simple tasks routed to complex workflows (overkill)
- Complex tasks routed to simple workflows (failure)
- No learning from routing mistakes
With Smart Routing
- 90%+ routing accuracy through pattern recognition
- Right workflow selected instantly
- Measured confidence for every route
- Learns from every routing decision
- 60x faster, 200x cheaper when routed correctly
The Solution
Manual Selection
User: "Create a Kubernetes app"
System: "Use /create-app or /complex-workflow?"
User: Guesses wrong
Result: 30 minutes wasted
No intelligence. No learning. Pure guesswork.
Intelligent Router
User: "Create a Kubernetes app"
Router: Analyzing... 93% confidence
Route: applications-create-app
Time: 10 minutes (vs. 45 with wrong route)
Pattern recognition. Measured accuracy. Continuous learning.
The Four Dimensions
Every routing decision analyzes:
- Complexity: Simple, Medium, Complex
- Novelty: Familiar, New, Novel
- Risk: Low, Medium, High
- Scope: Single, Multi, System-wide
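Taken together, the four dimensions form a profile of the task. A minimal sketch of such a profile (the `TaskProfile` name and lowercase values are illustrative assumptions, not a real API):

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical container for the four routing dimensions described above.
@dataclass
class TaskProfile:
    complexity: Literal["simple", "medium", "complex"]
    novelty: Literal["familiar", "new", "novel"]
    risk: Literal["low", "medium", "high"]
    scope: Literal["single", "multi", "system-wide"]

# A typo fix would profile as the simplest case on every dimension.
profile = TaskProfile(complexity="simple", novelty="familiar",
                      risk="low", scope="single")
```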
How It Works
::: info The Routing Process
Task arrives
|
Extract features (keywords, complexity, risk)
|
Classify using historical patterns
|
Match to best-fit workflow
|
Return with confidence score
|
High confidence (>90%): Auto-route
Medium (70-90%): Suggest with override
Low (<70%): Present options
:::
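The steps above can be sketched end to end. Everything in this sketch — the keyword features, the score weights, and the candidate workflow names — is an illustrative assumption, not the real classifier:

```python
def extract_features(description):
    """Toy feature extractor: keyword flags only (illustrative)."""
    text = description.lower()
    return {
        "trivial": "typo" in text,
        "research": "architecture" in text or "migrate" in text,
    }

def route(description):
    """Match features to the best-fit workflow; return (workflow, confidence)."""
    features = extract_features(description)
    # Scores stand in for classification against historical patterns.
    scores = {
        "quick-edit": 0.95 if features["trivial"] else 0.20,
        "research-plan-implement": 0.90 if features["research"] else 0.30,
        "applications-create-app": 0.60,
    }
    return max(scores.items(), key=lambda kv: kv[1])

route("Fix typo in README")  # -> ("quick-edit", 0.95)
```

A real router would replace the hard-coded scores with a classifier trained on past routing decisions; the shape of the pipeline stays the same.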
Real Performance Data
::: code-group
Total tasks routed: 110
Correct routes: 100
Accuracy: 90.9%
By complexity:
Simple: 97% (34/35)
Medium: 90% (45/50)
Complex: 84% (21/25)
Month 1: 75% accuracy (cold start)
Month 2: 85% accuracy (learning)
Month 3: 91% accuracy (expert-level)
Pattern: Continuous improvement
Simple task -> Quick workflow
- Time: 30 seconds
- Cost: $0.01
Complex task -> Research workflow
- Time: 3 hours
- Cost: $2.00
Right routing: 60x faster, 200x cheaper
:::
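A learning curve like the one above implies that every route's outcome is logged and fed back in. A minimal sketch of such a log (the `RoutingLog` class is hypothetical):

```python
class RoutingLog:
    """Records routing outcomes so accuracy can be tracked over time."""

    def __init__(self):
        self.outcomes = []  # (complexity, was_correct) pairs

    def record(self, complexity, was_correct):
        self.outcomes.append((complexity, was_correct))

    def accuracy(self, complexity=None):
        """Overall accuracy, or accuracy filtered by complexity band."""
        hits = [ok for c, ok in self.outcomes
                if complexity is None or c == complexity]
        return sum(hits) / len(hits) if hits else 0.0
```

Feeding the numbers above through it (100 correct of 110 routed) reproduces the 90.9% overall accuracy.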
Implementation Examples
Feature Extraction
```python
class TaskAnalyzer:
    # Helper methods below are illustrative stubs.
    def analyze(self, description):
        text = description.lower()
        features = {
            'complexity': self.detect_complexity(text),
            'keywords': self.extract_keywords(text),
            'risk': self.assess_risk(text),
            'scope': self.determine_scope(text),
        }
        # Complexity signals: research tasks are upgraded to high complexity
        if 'research' in text:
            features['complexity'] = 'high'
        # Risk signals: anything touching production is high risk
        if 'production' in text:
            features['risk'] = 'high'
        return features

    def detect_complexity(self, text):
        return 'high' if 'architecture' in text else 'medium'

    def extract_keywords(self, text):
        return [w for w in text.split() if len(w) > 3]

    def assess_risk(self, text):
        return 'high' if 'production' in text else 'low'

    def determine_scope(self, text):
        return 'multi' if 'system-wide' in text else 'single'
```
Confidence-Based Routing
| Confidence | Action | Example |
|---|---|---|
| >90% | Auto-route | "Fix typo" -> quick-edit (100% confidence) |
| 70-90% | Suggest with override | "Refactor auth" -> research-first (85% confidence) |
| <70% | Present options | "New architecture" -> show 3 options |
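The three confidence bands map directly to code. A minimal sketch of the dispatch, using the thresholds from the table (the `dispatch` function and action names are illustrative):

```python
def dispatch(confidence):
    """Map a routing confidence score to one of three actions."""
    if confidence > 0.90:
        return "auto-route"
    if confidence >= 0.70:
        return "suggest-with-override"
    return "present-options"

dispatch(0.93)  # -> "auto-route"
```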
Validation
You're doing this right if:
- Routing accuracy measured (target: >90%)
- Users rarely override router suggestions
- Simple tasks route to simple workflows
- Complex tasks route to research-first workflows
- Routing improves over time (learning curve)
You're doing this wrong if:
- No measurement of routing accuracy
- Users constantly override suggestions
- All tasks route to the same workflow
- No learning from routing failures
- Manual workflow selection still required
Real-World Examples
Kubernetes App
Task: Create Redis caching app
Route: applications-create-app
Confidence: 93%
Result: Success in 10 minutes
Architecture Redesign
Task: Migrate to microservices
Route: research-plan-implement
Confidence: 100%
Result: Success in 3 hours
Typo Fix
Task: Fix typo in README
Route: quick-edit
Confidence: 100%
Result: Success in 30 seconds
Related Factors
| Factor | Relationship |
|---|---|
| III. Focused Agents | Router selects which single-responsibility agent |
| IV. Continuous Validation | Routing accuracy is a validation metric |
| V. Measure Everything | Measure routing decisions and outcomes |
| IX. Mine Patterns | Routing patterns extracted from successful routes |
| X. Small Iterations | Routing accuracy drives improvement backlog |