Corrections in. Judgment out.
Every edit you make teaches the brain. The graduation pipeline promotes durable lessons into rules, and clusters of rules into meta-rules you can export and share.
from gradata import Gradata
brain = Gradata(profile="writing")
draft = llm.generate(prompt)   # llm, prompt, and human_edit are your own code
final = human_edit(draft)
# Every edit is a lesson. Severity is measured
# via edit distance; rules graduate automatically.
brain.correct(draft=draft, final=final, task="reply")
# Next time, matching rules inject into the prompt.
next_draft = llm.generate(prompt, context=brain.context_for("reply"))

The first time you correct something, it's logged as an event with severity and edit-distance metadata.
Repeated corrections on the same shape promote the lesson. Severity-weighted survival boosts confidence.
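A minimal sketch of how severity might be derived from edit distance, assuming a normalized Levenshtein-style ratio (Gradata's actual metric may differ):

```python
import difflib

def severity(draft: str, final: str) -> float:
    """Normalized edit severity in [0, 1]: 0 = no change, 1 = total rewrite."""
    similarity = difflib.SequenceMatcher(None, draft, final).ratio()
    return 1.0 - similarity

print(severity("Hi team, quick update", "Hi team, quick update"))  # → 0.0
```

A light edit scores near 0, a full rewrite near 1, which is what lets heavier corrections carry more weight in promotion.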
Durable lessons graduate to rules and get injected into matching tasks (max 10 per session, scope-matched).
Rules that share structure collapse into meta-rules: the compressed principles behind your judgment.
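The graduation stages above can be sketched as counter-based promotion. The threshold and the confidence formula here are hypothetical; the real pipeline weighs severity and survival over time:

```python
from collections import defaultdict

PROMOTE_AT = 3   # hypothetical: corrections on one shape before it becomes a rule

class GraduationSketch:
    def __init__(self):
        self.events = defaultdict(list)   # lesson shape -> recorded severities
        self.rules = {}                   # shape -> confidence

    def record(self, shape: str, severity: float):
        """Log a correction; promote the lesson once it recurs enough."""
        self.events[shape].append(severity)
        hits = self.events[shape]
        if len(hits) >= PROMOTE_AT:
            # severity-weighted confidence: heavier edits count for more
            self.rules[shape] = sum(hits) / len(hits)

g = GraduationSketch()
for _ in range(3):
    g.record("passive-voice-opener", 0.4)   # same shape, corrected three times
```

After the third correction, the lesson graduates into `g.rules` with a severity-weighted confidence.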
Matching rules are injected as structured context at prompt time: no fine-tuning, no model upload, and it works across Claude, GPT, Gemini, or local models. Injection is scope-matched per task, positioned for primacy and recency, and capped at 10 rules per session.
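Those injection constraints can be sketched as a scope filter, a cap of 10, and an ordering that puts the strongest rules at the start and end of the context for primacy and recency. The function name and the `scope`/`confidence`/`text` keys are illustrative, not Gradata's API:

```python
MAX_RULES = 10

def context_for(rules: list[dict], task: str) -> list[str]:
    """Pick scope-matched rules, cap at MAX_RULES, and order them so the
    highest-confidence rules sit first and last (primacy/recency)."""
    matched = [r for r in rules if r["scope"] == task]
    top = sorted(matched, key=lambda r: r["confidence"], reverse=True)[:MAX_RULES]
    # alternate placement: strongest to the front, next to the back, inward
    ordered = [None] * len(top)
    front, back = 0, len(top) - 1
    for i, rule in enumerate(top):
        if i % 2 == 0:
            ordered[front] = rule["text"]; front += 1
        else:
            ordered[back] = rule["text"]; back -= 1
    return ordered
```

With rules of confidence 0.9, 0.8, and 0.7 in scope, the two strongest land at the first and last positions, the weakest in the middle.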