How it works

Corrections in. Judgment out.

Every edit you make teaches the brain. The graduation pipeline promotes durable lessons into rules, and clusters of rules into meta-rules you can export and share.

```python
from gradata import Gradata

brain = Gradata(profile="writing")

draft = llm.generate(prompt)
final = human_edit(draft)

# Every edit is a lesson. Severity is measured
# via edit distance; rules graduate automatically.
brain.correct(draft=draft, final=final, task="reply")

# Next time, matching rules inject into the prompt.
next_draft = llm.generate(prompt, context=brain.context_for("reply"))
```

Python SDK. AGPL-3.0. Works with any model.
01 INSTINCT · confidence ≥ 0.40

The first time you correct something, it's logged as an event with severity and edit-distance metadata.
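
One plausible shape for such an event, sketched below: severity as a normalized edit distance between the draft and the human's final version. The `CorrectionEvent` class and `severity` function are illustrative assumptions, not the Gradata API.

```python
# Hypothetical sketch of a logged correction event. Names and fields
# are assumptions; only the idea (severity = edit distance) is from the text.
from dataclasses import dataclass, field
import difflib
import time

def severity(draft: str, final: str) -> float:
    """Normalized edit distance: 0.0 = untouched, 1.0 = fully rewritten."""
    return 1.0 - difflib.SequenceMatcher(a=draft, b=final).ratio()

@dataclass
class CorrectionEvent:
    task: str
    draft: str
    final: str
    timestamp: float = field(default_factory=time.time)

    @property
    def severity(self) -> float:
        return severity(self.draft, self.final)

event = CorrectionEvent(task="reply", draft="Thx!", final="Thank you.")
```

A light edit yields a severity near 0; a full rewrite pushes it toward 1, which matters at the next stage.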

02 PATTERN · confidence ≥ 0.60

Repeated corrections on the same shape promote the lesson. Severity-weighted survival boosts confidence.
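
A minimal sketch of what severity-weighted promotion could look like: each repeated correction moves the lesson's confidence toward 1.0, scaled by how severe the edit was. The `promote` update rule is a guess at the mechanism, not Gradata's actual formula; only the thresholds come from this page.

```python
# Illustrative confidence update. Thresholds are from the page;
# the update rule itself is an assumption for demonstration.
INSTINCT, PATTERN, RULE = 0.40, 0.60, 0.90

def promote(confidence: float, sev: float, rate: float = 0.5) -> float:
    """Move confidence toward 1.0, weighted by edit severity."""
    return confidence + (1.0 - confidence) * rate * sev

conf = INSTINCT
for sev in (0.8, 0.7, 0.9):   # three corrections on the same shape
    conf = promote(conf, sev)
```

Severe, repeated edits cross the PATTERN threshold quickly; trivial edits (severity near 0) barely move it.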

03 RULE · confidence ≥ 0.90

Durable lessons graduate to rules and get injected into matching tasks (max 10 per session, scope-matched).
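
The selection step described above might look like this sketch: filter by scope and the RULE threshold, then cap at 10 per session. The `Rule` fields and `select_rules` helper are illustrative, not the shipped API.

```python
# A plausible stage-3 selection step. Field names are assumptions;
# the 0.90 threshold and the 10-rule cap come from the page.
from dataclasses import dataclass

RULE_THRESHOLD = 0.90
MAX_PER_SESSION = 10

@dataclass
class Rule:
    text: str
    scope: str
    confidence: float

def select_rules(rules, task):
    matching = [r for r in rules
                if r.scope == task and r.confidence >= RULE_THRESHOLD]
    matching.sort(key=lambda r: r.confidence, reverse=True)
    return matching[:MAX_PER_SESSION]

rules = [Rule(f"rule {i}", "reply", 0.90 + i / 1000) for i in range(15)]
rules.append(Rule("off-scope", "summarize", 0.99))
selected = select_rules(rules, "reply")
```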

04 META-RULE · 3+ graduated rules cluster

Rules that share structure collapse into meta-rules — the compressed principles behind your judgment.
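
One way to picture the clustering: group graduated rules by a structural signature and collapse any group of three or more. The crude first-word signature below is a stand-in for whatever structural matching Gradata actually uses.

```python
# Sketch of stage 4. The signature function is a deliberately crude
# assumption; only the "3+ rules cluster" rule is from the page.
from collections import defaultdict

META_MIN_CLUSTER = 3

def signature(rule_text: str) -> str:
    """Toy structural key: the rule's leading verb."""
    return rule_text.split()[0].lower()

def cluster_meta_rules(rules):
    groups = defaultdict(list)
    for r in rules:
        groups[signature(r)].append(r)
    return {sig: members for sig, members in groups.items()
            if len(members) >= META_MIN_CLUSTER}

graduated = [
    "Avoid passive voice in openings",
    "Avoid exclamation marks in sign-offs",
    "Avoid hedging twice in one sentence",
    "Prefer short subject lines",
]
meta = cluster_meta_rules(graduated)
```

Here three "Avoid …" rules collapse into one cluster; the lone "Prefer …" rule stays a plain rule.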

Injection, not retraining

Matching rules are injected as structured context at prompt-time — no fine-tuning, no model upload, works across Claude, GPT, Gemini, or local models. Scope-matched per task. Primacy/recency positioning. Max 10 per session.
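
A sketch of what primacy/recency positioning could mean in practice: the strongest rules alternate between the front and the back of the injected block, where models attend best, with weaker rules in the middle. This interleaving and the `inject` wrapper are guesses at the mechanism, not the shipped logic.

```python
# Hypothetical prompt-time injection. "Primacy/recency" is interpreted
# here as strongest-first/strongest-last ordering; an assumption.
def position_rules(rules_by_confidence):
    """Alternate the strongest rules between the front and the back."""
    front, back = [], []
    for i, rule in enumerate(rules_by_confidence):  # strongest first
        (front if i % 2 == 0 else back).append(rule)
    return front + back[::-1]

def inject(prompt, rules):
    block = "\n".join(f"- {r}" for r in position_rules(rules))
    return f"Follow these learned rules:\n{block}\n\n{prompt}"
```

Because this happens at prompt assembly, the same rules travel unchanged across Claude, GPT, Gemini, or a local model.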