

FEP + DAQ Quick Protocol
This protocol evaluates a paper’s claims, evidence, reasoning, and text signals. It is a forensic screen. It does not declare truth by tone, and it does not infer motives.
1. Intake
Record the artifact details: title, venue, date, version, and the exact text you are using. Lock the scope: state whether you are doing a text-only review or a full methods review. Create a trace rule: every major judgment must cite a quoted passage or a cited source.
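A minimal intake sketch, assuming the ledger is kept in Python; the field names are illustrative, not part of the protocol:

```python
from dataclasses import dataclass

@dataclass
class Intake:
    """Locks the artifact and review scope before any scoring begins."""
    title: str
    venue: str
    date: str          # publication or preprint date
    version: str       # e.g. "preprint v2", "camera-ready"
    source_text: str   # exactly which text is under review
    scope: str         # "text-only" or "full-methods"
    # Trace rule, fixed at intake:
    trace_rule: str = "Every major judgment cites a quoted passage or a cited source."
```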
2. Claim Map
Extract what the author says is true. Quote the top claims from the abstract, introduction, and conclusion. List the implied claims the argument depends on. Define key terms as the author uses them. If terms drift, record each competing definition. List what must be true for the main claim to hold. Lock boundaries. Write what the method or thesis can do and what it cannot do.
Output: a short claim list with dependencies and scope limits.
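If the claim list is kept as data, one record per claim keeps dependencies and scope limits attached; a sketch under the same Python assumption, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One entry in the claim map."""
    claim_id: str
    quote: str                                      # quoted passage stating the claim
    implied: bool = False                           # True if the argument depends on it but never states it
    definitions: dict = field(default_factory=dict) # term -> competing definitions, if terms drift
    depends_on: list = field(default_factory=list)  # what must be true for the claim to hold
    can_do: str = ""                                # locked boundary: what the method or thesis can do
    cannot_do: str = ""                             # locked boundary: what it cannot do
```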
3. Evidence Ledger
Attach evidence to each claim. Label the evidence type: data, citations, case examples, analogies, historical anecdotes, or assertions. Rate support: high means direct evidence for the claim; medium means indirect support with gaps; low means assertion, analogy, or aspiration. Record what evidence would be required to justify strong language, especially demarcation or impact claims.
Output: claim-by-claim support notes plus a missing-evidence list.
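A matching evidence record, again a sketch rather than a fixed schema; the three support levels mirror the ratings above:

```python
from dataclasses import dataclass
from enum import Enum

class Support(Enum):
    HIGH = "direct evidence for the claim"
    MEDIUM = "indirect support with gaps"
    LOW = "assertion, analogy, or aspiration"

@dataclass
class Evidence:
    """One row in the evidence ledger, attached to a claim."""
    claim_id: str
    kind: str          # data, citation, case example, analogy, historical anecdote, or assertion
    support: Support
    missing: str = ""  # what evidence would be required to justify the strong language
```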
4. DAQ Plug-in
Discourse and Argument Quality screen
DAQ scores text features that often co-occur with weak reasoning or non-correcting scholarship. DAQ flags risk. DAQ does not prove fraud, intent, or falsity. Every non-zero DAQ score must include at least one quoted passage anchor.
DAQ scoring scale
0 absent
1 mild, occasional
2 frequent, repeated
3 dominant, drives the paper
DAQ rubric
Criterion 1: Definition stability
Terms stay consistent or drift.
Criterion 2: Claim scope control
Conclusions match demonstrated support.
Criterion 3: Falsifiability language
The text states what would count against the thesis or method.
Criterion 4: Argument validity
Recurring reasoning errors affect inference.
Criterion 5: Counter-evidence handling
Opposing evidence is fairly represented.
Criterion 6: Rhetoric-to-evidence mismatch
Tone or certainty exceeds support.
Criterion 7: Method reporting clarity in text
A reviewer can apply the proposed method as described.
Criterion 8: Integrity of causal language
Impact claims align with the evidence and test plan.
Criterion 9: Jargon misuse
Technical terms add precision or hide weak meaning.
Criterion 10: Social-power framing as evidence
Talk about power or ideology is kept separate from empirical evidence.
DAQ total and triage bands
Add the ten scores. Total range 0 to 30.
0–8 low discourse risk
9–18 moderate discourse risk
19–30 high discourse risk
Output: DAQ total plus passage-anchored notes for the highest scores.
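A small scoring helper, assuming the scores are recorded in Python; function and parameter names are illustrative. It encodes the 0-3 scale, the 0-30 total, the triage bands, and the rule that every non-zero score carries at least one quoted passage anchor:

```python
def daq_total(scores: dict[int, int], anchors: dict[int, list[str]]) -> tuple[int, str]:
    """Sum the ten criterion scores and return (total, triage band).

    scores: criterion number (1-10) -> score (0-3)
    anchors: criterion number -> quoted passages supporting a non-zero score
    """
    if set(scores) != set(range(1, 11)):
        raise ValueError("DAQ needs scores for exactly criteria 1-10")
    for crit, score in scores.items():
        if score not in (0, 1, 2, 3):
            raise ValueError(f"criterion {crit}: scores run 0-3")
        if score > 0 and not anchors.get(crit):
            raise ValueError(f"criterion {crit}: non-zero score needs a passage anchor")
    total = sum(scores.values())  # range 0-30
    if total <= 8:
        return total, "low discourse risk"
    if total <= 18:
        return total, "moderate discourse risk"
    return total, "high discourse risk"
```

For example, all-zero scores return (0, "low discourse risk"); a paper scoring 2 on five anchored criteria lands at (10, "moderate discourse risk").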
5. Failure Signatures
Translate findings into named patterns and attach anchors. Common signatures include definitional drift, category collapse, scope inflation, missing falsifiers, immunizing moves, cherry-picking signals, authority substitution, and rhetoric masking weak evidence.
Output: a short list of failure signatures with quotes.
6. Prediction Ledger
Convert each major claim into observable predictions. Include three types: reliability predictions about reviewer agreement, validity predictions about separating known groups above chance, and external predictions tied to outcomes such as corrections or replication failures when data are available. Mark which predictions are tested and which are untested.
Output: measurable predictions, not slogans.
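One record per prediction keeps the tested-versus-untested flag explicit; a sketch under the same Python assumption:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """One entry in the prediction ledger."""
    claim_id: str
    kind: str          # "reliability", "validity", or "external"
    statement: str     # observable, e.g. "blinded coders agree within one point per criterion"
    tested: bool = False
    result: str = ""   # filled in only when data are available
```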
7. Falsifiers
State what results would force the claim to be narrowed or rejected. Examples include low inter-rater reliability, no separation above chance, DAQ tracking readability or language fluency more than reasoning, novelty penalties, no external predictive value after controls, or the method being easily gamed by polished prose.
Output: a minimum falsifier list.
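A falsifier check can be run mechanically against validation results. The keys and thresholds below are placeholders to be preregistered, not values from the protocol:

```python
def triggered_falsifiers(stats: dict) -> list:
    """Return the falsifiers a validation run has triggered."""
    hits = []
    if stats.get("inter_rater_kappa", 1.0) < 0.4:          # placeholder agreement floor
        hits.append("low inter-rater reliability")
    if stats.get("known_group_auc", 1.0) <= 0.5:           # chance-level separation
        hits.append("no separation above chance")
    if abs(stats.get("readability_corr", 0.0)) > abs(stats.get("reasoning_corr", 1.0)):
        hits.append("DAQ tracks readability or fluency more than reasoning")
    if stats.get("external_value_after_controls", 1.0) <= 0.0:
        hits.append("no external predictive value after controls")
    return hits  # any hit forces the claim to be narrowed or rejected
```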
8. Safeguard Roadmap
List operational fixes that would make the work FEP-passing. Lock definitions. Publish a rubric with thresholds and examples. Blind coders when validating. Preregister the test plan. Compare against readability baselines. Publish error analysis before revising. Separate text scoring from metadata scoring. Put the method under version control.
Output: a build plan, not a complaint.
9. Verdict Rule
Write the verdict in two parts. What holds. What fails. Tie each to anchors. Use DAQ as triage, not as a truth label. High DAQ means deeper audit priority, not automatic dismissal.
