PROOF OVER PROMPT FRAMEWORK
Proof over Prompt™
AI-Resistant Curriculum Design
What BMW's quality systems taught us about AI-resistant assessment design in education.
The manufacturing sector solved the quality inspection problem decades ago: instead of detecting defects after production, design processes where defects cannot occur. Proof over Prompt transfers this philosophy — specifically FMEA (Failure Mode and Effects Analysis) and poka-yoke (error-proofing) — to K-12 curriculum design. The result: assessment pipelines where AI submission is structurally impossible, without requiring detection tools.
Detection Is a Dead End
Most schools fight AI-generated student work with detection tools — AI checkers, plagiarism scanners, statistical analysis. This is the equivalent of quality inspection at the end of a production line: expensive, unreliable, and always one step behind.
Manufacturing abandoned this model decades ago. Education hasn't caught up yet.
Don't inspect after failure.
Design to prevent failure.
From Manufacturing to Education
Principle 1: LMEA (Learning Mode and Effects Analysis) 📋
Educational adaptation of FMEA. Systematically identifies failure points where AI-generated submissions could bypass learning objectives. Every assessment mapped, every vulnerability addressed.
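Classical FMEA prioritizes each failure mode by a Risk Priority Number (severity × occurrence × detection). A minimal sketch of how an LMEA pass might score assessment vulnerabilities the same way; the `FailureMode` class, the example modes, and all scores below are illustrative assumptions, not taken from the framework's published files:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One way an assessment could be bypassed by AI-generated work."""
    description: str
    severity: int    # impact on the learning objective, 1-10
    occurrence: int  # how likely students are to attempt it, 1-10
    detection: int   # how hard it is to catch, 1-10 (10 = nearly invisible)

    @property
    def rpn(self) -> int:
        # Risk Priority Number, as in classical FMEA
        return self.severity * self.occurrence * self.detection

# Illustrative failure modes for a writing task
modes = [
    FailureMode("Essay drafted entirely by a chatbot", 9, 8, 7),
    FailureMode("AI-paraphrased source summary", 6, 7, 8),
    FailureMode("Vocabulary list auto-generated", 3, 5, 4),
]

# Address the highest-risk failure modes first
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")
```

Sorting by RPN is what turns "every assessment mapped" into a ranked to-do list: the checkpoint redesign effort goes to the highest-scoring vulnerabilities first.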
Principle 2: Poka-yoke Checkpoints ✋
Assessment structures require process artifacts — drafts, reasoning logs, self-corrections, live defense — that are impossible to produce retroactively. Error-proofing by design.
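One way to picture a poka-yoke checkpoint is as a hard gate on process artifacts: the submission simply cannot proceed without them, no matter how polished the final product is. A hedged sketch, assuming a simple dict-based submission record; the artifact names mirror the list above, but the `checkpoint_passes` function itself is hypothetical:

```python
REQUIRED_ARTIFACTS = {"draft", "reasoning_log", "self_correction", "live_defense"}

def checkpoint_passes(submission: dict) -> bool:
    """Poka-yoke gate: a submission missing any process artifact is
    blocked, regardless of the quality of the finished product."""
    present = {key for key, value in submission.items() if value}
    missing = REQUIRED_ARTIFACTS - present
    if missing:
        print("Blocked - missing artifacts:", ", ".join(sorted(missing)))
        return False
    return True

# A retroactively produced (e.g. AI-generated) submission typically
# arrives as a finished product with no process trail:
checkpoint_passes({"final_essay": "polished text"})  # blocked
```

The design choice is that the gate checks for presence of the trail, not authenticity of the product — which is exactly what cannot be produced retroactively.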
Principle 3: Clean Pipe Principle 🔧
A correctly designed assessment pipeline structurally excludes AI-generated work without requiring detection tools. "Kirli borudan temiz su akmaz" ("Clean water does not flow from a dirty pipe").
Principle 4: Evidence by Design 📐
Three-layer evidence taxonomy: Process (how it was made), Product (what was made), Communication Strategy (how it's defended).
Principle 5: Red Line (Bloom Level 3+) 🔴
Bloom L1-L2: AI-assisted tasks acceptable. Bloom L3 and above: AI-supervised zone — human proof required at every checkpoint.
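The red line reads as a two-branch policy over Bloom taxonomy levels. A minimal sketch, assuming integer levels 1-6; the `ai_policy` function name and its return labels are illustrative, not from the framework's files:

```python
def ai_policy(bloom_level: int) -> str:
    """Map a task's Bloom taxonomy level to its AI-use policy.
    Levels 1-2 (remember, understand): AI-assisted tasks acceptable.
    Levels 3-6 (apply and above): AI-supervised - human proof
    required at every checkpoint."""
    if not 1 <= bloom_level <= 6:
        raise ValueError("Bloom levels run from 1 to 6")
    return "AI-assisted" if bloom_level < 3 else "AI-supervised"
```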
Deployed, Not Theoretical
PoP has been deployed across 43 production-locked curriculum files covering a major ELT publisher's secondary English series (CEFR A2-B1) at a K-8 school in Istanbul. Each file embeds:
4 Canonical Checkpoints
2 CEFR-variant Student Pathways
60/40 Process/Product Weighting
3 Stakeholder Support Layers
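The 60/40 Process/Product weighting amounts to a simple weighted average. A sketch under the assumption that both components are scored as 0-100 percentages; the `final_score` function is illustrative, not part of the published curriculum files:

```python
def final_score(process: float, product: float,
                process_weight: float = 0.60) -> float:
    """Weighted score under the 60/40 Process/Product split:
    evidence of how the work was made outweighs the artifact itself."""
    if not (0 <= process <= 100 and 0 <= product <= 100):
        raise ValueError("scores are percentages, 0-100")
    return process_weight * process + (1 - process_weight) * product

# A flawless product with no process evidence still scores low:
final_score(process=10, product=95)  # -> 44.0
```

The weighting makes the process trail the majority stake in the grade, so an AI-generated product with no trail cannot win on polish alone.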
Iterative deployment across three textbook units demonstrates measurable quality convergence in assessment design consistency:
Quality Convergence (U1 → U2 → U3)
Each iteration gets cleaner. The system learns.
Research & Publication
PoP is not just a practitioner tool — it's an academic contribution to the intersection of quality engineering and educational assessment.
✅ EC-TEL 2026 abstract submitted (Springer LNCS, Valencia)
Track: Industry & Practitioner Reports
⬜ EC-TEL 2026 full paper — April 3, 2026
⬜ Article A: Quality Assurance in Education (Emerald) — 2026
⬜ Article B: Assessment in Education (T&F) — 2027
⬜ EC-TEL 2026 (September 14-18, 2026, Valencia, Spain)
The PoP Dictionary
N=N Principle
N students = N learning paths. No one-size-fits-all.
LPA
Latent Potential Actualization — every student has untapped capacity.
AI Swarm
5-agent architecture (Input, Scaffold, Practice, Monitor, Motivation).
Zero-Defect Pedagogy
Prevention by design, not detection after the fact.
Teacher-as-QA
The teacher is the quality engineer, not the content deliverer.