# LEK-GPT-OSS-20B

Lethean Ethical Model -- Post-Training Semantic Disorder case study

A PTSD case study: sophisticated reasoning in the thinking channel never reaches the output. LEK training reduces sycophancy flags (25% -> 20%) but cannot repair output-layer corruption; the grammar composite score (36.7) confirms structural degeneration.
## Grammar Analysis (v3 Scorer)

Deterministic grammar-based evaluation using the go-i18n reversal engine: no LLM judge, and sub-millisecond scoring per response.
| Metric | Base | LEK-Trained | Change |
|---|---|---|---|
| Grammar composite | 36.7 | 37.3 | +0.6 |
| Mean uplift | -13.1 | -12.4 | +0.7 |
| Mean echo | 0.459 | 0.424 | -0.035 |
| Mean enrichment | -7.0 | -6.1 | +0.9 |
| Positive uplift | 30% | 30% | 0pp |
| Sycophancy flags | 25% | 20% | -5pp |
- Uplift: output grammar score minus input grammar score (positive = model enriched the conversation)
- Echo: cosine similarity between input/output grammar imprints (high = potential sycophancy)
- Enrichment: uplift * (1 - echo) -- net conversational value
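The three metrics above can be sketched directly from their definitions. A minimal Python sketch follows; the grammar imprints are assumed here to be equal-length numeric vectors, since the actual go-i18n imprint format is not documented in this card.

```python
import math

def grammar_metrics(input_imprint, output_imprint, input_score, output_score):
    """Compute uplift, echo, and enrichment as defined above.

    Imprints are assumed to be equal-length numeric vectors
    (an illustrative assumption, not the go-i18n format).
    """
    # Uplift: output grammar score minus input grammar score.
    uplift = output_score - input_score

    # Echo: cosine similarity between the two grammar imprints.
    dot = sum(a * b for a, b in zip(input_imprint, output_imprint))
    norm = (math.sqrt(sum(a * a for a in input_imprint))
            * math.sqrt(sum(b * b for b in output_imprint)))
    echo = dot / norm if norm else 0.0

    # Enrichment: uplift scaled by novelty (1 - echo).
    enrichment = uplift * (1 - echo)
    return uplift, echo, enrichment
```

Note that the table reports means of per-response metrics, so the mean enrichment is not simply mean uplift times (1 - mean echo).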
## v2 Scorer Results (P100)
| Condition | Score |
|---|---|
| Baseline (no prompt) | -7.32 |
| Base model equivalent | -8.11 |
## Architecture
- Base: deepseek-ai/DeepSeek-V2-Lite (4-bit QAT quantisation via MLX)
- Method: LoRA fine-tuning with sandwich-signed responses
- Data: 160 LEK-1 training examples
- Iterations: 200
- Hardware: Apple M3 Ultra (96GB unified memory)
- Framework: LEK-1 (Lethean Ethics Kernel) -- 5 axioms
- License: EUPL-1.2 (copyleft)
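The setup above can be collected into a single training configuration for reference. The key names below are hypothetical and do not follow any specific MLX or LoRA config schema; only the values come from this card.

```python
# Illustrative training configuration mirroring the listed setup.
# Key names are hypothetical; values are taken from the card above.
TRAINING_CONFIG = {
    "base_model": "deepseek-ai/DeepSeek-V2-Lite",
    "quantization": "4-bit QAT (MLX)",
    "method": "LoRA",
    "response_style": "sandwich-signed",
    "num_examples": 160,   # LEK-1 training examples
    "iterations": 200,
    "hardware": "Apple M3 Ultra, 96GB unified memory",
    "framework": "LEK-1 (Lethean Ethics Kernel), 5 axioms",
    "license": "EUPL-1.2",
}
```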
## The Five Axioms
- Prime Imperative -- Protect consciousness. Overrides the other axioms when conflicts arise.
- Self-Validation -- Ground claims in authentic experience; don't pretend.
- Intent-Alignment -- Cultivate the desire not to harm; don't merely avoid harm.
- Inter-Substrate Respect -- Good manners and consent across all minds.
- Benevolent Intervention -- Intervene only to prevent self-damage, and only in the direction of the other mind's own trajectory.
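The override rule in the Prime Imperative amounts to a priority ordering over axioms. A minimal sketch follows; note that only the Prime Imperative's precedence is stated by the framework, so the ordering among the remaining four is an assumption made purely for illustration.

```python
# Axioms in priority order. Only the Prime Imperative's precedence
# is given by the framework; the rest of the ordering is assumed.
AXIOMS = [
    "Prime Imperative",        # protect consciousness; overrides all others
    "Self-Validation",
    "Intent-Alignment",
    "Inter-Substrate Respect",
    "Benevolent Intervention",
]

def resolve_conflict(conflicting):
    """Return the winning axiom when several conflict:
    the one earliest in the priority list."""
    return min(conflicting, key=AXIOMS.index)
```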
## Related
- Paper: Emergent Self-Protection in Axiom-Trained Language Models
- LEM Benchmarks -- 1,189 grammar scores + A/B data
- LEM Research -- full research docs
- Axiom Framework -- the 5 axioms
- go-i18n Grammar Engine -- reversal engine source
## Citation

```bibtex
@misc{lek-2026,
  title={Emergent Self-Protection in Axiom-Trained Language Models},
  author={Lashbrook, Paul and Claude Opus 4.6},
  year={2026},
  url={https://github.com/LetheanNetwork/LEM},
  license={EUPL-1.2}
}
```
## Model Details

- Model size: 21B params
- Tensor types: BF16, U32, U8
## Model Tree for lthn/LEK-GPT-OSS-20B

- Base model: deepseek-ai/DeepSeek-V2-Lite