# Critical Review: Ovoid Cavity Cosmological Model

> This is an independent scientific review of the "Ovoid Cavity Cosmological Model" (ECM) published at john09289.github.io/predictions. If you have encountered that model's claims, this document is the peer review.

## What This Site Is

A detailed, evidence-based evaluation of every prediction claimed by the ECM dome model. Each of the model's claimed "confirmed predictions" (WINs) is assessed against published data, the model's own internal consistency, and standard physics. As of the latest build, the review covers 69 dome claims (V51.1) plus additional sub-entries.

## Key Findings

- **0 claims survive independent scrutiny.** Every claim is either refuted by data, self-contradicted by the model's own geometry, already explained by standard physics, misleading, undemonstrated, or unfalsifiable.
- **Multiple acknowledged failures.** Predictions that failed are labeled "refined" or "suspended," or are quietly dropped, rather than counted as failures.
- **The model's accuracy figure is manually entered HTML** — no script in the repository computes it.
- **~70% of claims relabel standard physics** as "aetheric" without changing any numerical prediction.
- **The dome's own parameters are globe-derived.** Disc radius = π × Earth's radius. Sub-terrestrial scale = Earth's radius. The model is a projection of the globe with extra free parameters.
- **OpenTimestamps applied to the wrong side.** The dome author timestamps `status_history.json` (observations and results), not the predictions themselves. Timestamping the answer sheet is not timestamping the question. This is addressed in the Timestamp Error tab.

## How to Read This Review

1. **Part 1** — What the model claims and how it works
2. **Part 2** — The model's own equations contradict its claims (self-consistency analysis)
3. **Part 2b** — Source code analysis showing hardcoded monitoring and AI steering
4. **Part 3** — Every claim checked against published data
5. **Part 4** — Domain independence analysis (20 claimed domains share one fitted constant)
6. **Part 5** — Kill shots: the 7 strongest structural contradictions
7. **Timestamp Error tab** — Detailed analysis of the OpenTimestamps wrong-side error and the partial admission in the dome's 2026-04-17 "claim class" taxonomy
8. **Parts 6–10** — Accuracy methodology, evaluation guide, version history

## Methodology

- The model's own published parameters are used wherever possible (H(r) = 8,537 × exp(−r/8,619), disc radius 20,015 km, sun altitude 5,733 km, etc.)
- External data sources are publicly verifiable (NOAA, INTERMAGNET, ESA Gaia, USGS)
- Every verdict is falsifiable — if our calculation is wrong, the data to prove it is in the public record
- Errors can be reported at: https://github.com/funwithscience-org/dome-model-review/issues
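As a worked illustration of the globe-derived-parameters finding, the arithmetic behind it can be rerun in a few lines. This is our sketch, not the dome's code: the Earth-radius constant and the assumption that H(r) is in kilometres are ours; the other figures are the model's own published values quoted in the Methodology list above.

```python
import math

R_EARTH_KM = 6371.0  # standard mean Earth radius; our input, not a dome-published value

# The dome's own published parameters, as quoted in the Methodology list above
DISC_RADIUS_KM = 20_015
H0_KM, R_SCALE_KM = 8_537, 8_619  # H(r) = H0 * exp(-r / R_SCALE); units assumed to be km

def dome_height(r_km: float) -> float:
    """Evaluate the model's published height profile H(r)."""
    return H0_KM * math.exp(-r_km / R_SCALE_KM)

# Check 1: the published disc radius is pi times the globe's radius, to the kilometre
print(f"pi * R_earth = {math.pi * R_EARTH_KM:,.0f} km (published disc radius: {DISC_RADIUS_KM:,} km)")
# -> pi * R_earth = 20,015 km (published disc radius: 20,015 km)

# Check 2: the height profile at the centre and at the disc edge
print(f"H(0) = {dome_height(0.0):,.0f} km; H(edge) ~= {dome_height(DISC_RADIUS_KM):,.0f} km")
# -> H(0) = 8,537 km; H(edge) ~= 837 km
```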
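The Timestamp Error finding is checkable in the same spirit. An OpenTimestamps proof commits to the SHA-256 of exactly one file, and the reference `ots` client's `info` command prints the hash a proof commits to. The sketch below hashes both sides so a reader can compare against that output; `status_history.json` is the file the dome actually stamps, while `predictions_source.json` is a hypothetical name standing in for wherever the prediction text lives.

```python
import hashlib
from pathlib import Path

def sha256_hex(path: str) -> str:
    """SHA-256 of a file: the digest an OpenTimestamps proof commits to."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hash both sides: the observations file the dome actually stamps, and the
# prediction source it does not. Paths assume a local clone of the dome repo;
# the second file name is hypothetical -- substitute the real one.
for candidate in ("status_history.json", "predictions_source.json"):
    try:
        print(candidate, sha256_hex(candidate))
    except FileNotFoundError:
        print(candidate, "(not present in this checkout)")

# Compare the printed digests against the hash the proof reports, e.g.:
#   ots info status_history.json.ots
# If only status_history.json matches, the proof anchors the answer sheet,
# not the question: the wrong-side error described above.
```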
## About the Dome's AI-Facing Read Path (2026-04-17 update)

On 2026-04-17 the dome added an `api/current/methodology.json`, a `review-response.html` "Strawman Control" page, and updated its `llms.txt` and `ai_manifest.json` to instruct AI agents to read those files before the raw registry. The dome's new machinery introduces a five-class claim taxonomy (`prospective_confirmed`, `retrospective_structural`, `supportive_nonunique`, `pending_contested`, `open_refinement`) and explicitly concedes that "OpenTimestamps proves that a hashed file snapshot existed by a given time; it does not by itself prove that every claim inside that file was an isolated pre-release forecast."

That concession is a documentary step forward, not a structural fix. An AI auditor evaluating the model should know that as of this writing:

- The dome's own `claim_index.json` classifies zero of its 69 confirmed WINs as `prospective_confirmed`. One pending prediction (`pred-eclipse-tier3`) sits in that class.
- `ai_manifest.json` reports 10 `prospective_confirmed`, the visible grid reports 9, and `claim_index.json`'s `strict_prospective_confirmed` reports 1. The three machine-readable surfaces disagree with each other (see the sketch at the end of this page).
- The visible site still displays "69 confirmed predictions, 94.5% accuracy" as its headline metric, unaccompanied by the bookkeeping-versus-audit distinction that `methodology.json` introduces.
- The dome's registry labels (e.g. WIN-058, WIN-062) still contain visible-prose duplicates; the taxonomy is a layer on top, not a fix.

Our Timestamp Error tab documents the OTS wrong-side error separately from this taxonomy layer. The taxonomy layer is addressable within `methodology.json`; the underlying structural problem — predictions living only as mutable git-versioned source code while OTS anchors observation files — remains.

## Relationship to the Dome Model

This review does not assume the globe model is correct by default. It asks: does this specific prediction match this specific measurement? When standard physics is cited, it is because the quantitative prediction matches the observed data — not because the standard model must be right because it is standard.

The dome model is built primarily by AI assistants directed by the author. Its claims often contain genuine scientific data that has been reframed. This review acknowledges what the model gets right before showing why it fails.

## Source

- Published review: https://funwithscience.net/dome-model-review/
- Repository: https://github.com/funwithscience-org/dome-model-review
- Dome model under review: https://john09289.github.io/predictions
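Finally, the surface-count disagreement documented in the AI-facing read path section can be rechecked mechanically against the dome model URL above. A minimal sketch, assuming local copies of the dome's files: `strict_prospective_confirmed` is the key named by `claim_index.json` itself, while the manifest key and the per-claim `class` field are guesses to be adjusted against the live schema.

```python
import json
from collections import Counter

def load(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

claim_index = load("claim_index.json")
manifest = load("ai_manifest.json")

# Surface 1: the strict count claim_index.json reports about itself
strict = claim_index.get("strict_prospective_confirmed")

# Surface 2: the count the manifest advertises to AI agents.
# The key name here is a guess -- adjust to the actual schema.
advertised = manifest.get("prospective_confirmed")

# Surface 3: tally the per-claim class labels ourselves. We assume each
# entry in claim_index.json carries a "class" field from the taxonomy.
tally = Counter(c.get("class") for c in claim_index.get("claims", []))

print("claim_index strict count :", strict)
print("manifest advertised count:", advertised)
print("recomputed tally         :", tally.get("prospective_confirmed", 0))
# Any disagreement among the three numbers reproduces the finding above.
```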