SFT_Paper_Verification-First_Protocol_v0.2_DRAFT
SFT-theory-and-runners - GITHUB
1. Comment to the author
1. F. Queral Rallo, SFT_Paper_Verification-First_Protocol_v0.2_DRAFT, doi:10.5281/zenodo.19233268.
2. Comment to the author
1. F. Queral Rallo, SFT_Paper_Verification-First_Protocol_v0.2_DRAFT, doi:10.5281/zenodo.19233268.
This is not a conventional physics paper, but a meta-document on how to verify a speculative theory without trusting the author.
The author proposes a "Structural Field Theory" (SFT). It suggests that everything (matter, forces) emerges from a single scalar field $S$.
Basically, it is an attempt to unify physics from a "structural medium".
The Verification-First Strategy:
The author knows that this theory is very speculative and difficult to validate.
Instead of saying "look, I predict X", he says: "Don't believe me. Here's a software protocol (Python scripts, JSON validators, SHA-256 hashes). If a third party runs a real simulation of this $S$ field, my software will automatically check if the results resemble reality (such as the value of the fine-structure constant $\alpha$ or the orbit of Venus)."
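For orientation: the number such a pipeline must ultimately reproduce is the ordinary definition $\alpha = e^2/(4\pi\varepsilon_0\hbar c) \approx 1/137.036$. A quick sanity check with CODATA values (my own illustration, not part of the protocol):

```python
import math

# CODATA 2018 SI values (e and c are exact by definition).
e    = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

# alpha = e^2 / (4*pi*eps0*hbar*c) ~ 1/137.036
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)
```

Any SFT mapping that emits starred analogues $(q^*, \epsilon^*, \hbar^*)$ would, presumably, be scored against this target.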
The 3 Key Components of the Document
The archive is divided into three main sections:
1. The Main Paper (SFT Protocol):
Defines the rules of the game. Explains the verification chain: SCAN (raw data) $\rightarrow$ REGION (filter) $\rightarrow$ REPORT (PASS/FAIL verdict). It is very strict about reproducibility (SHA-256 hashes for everything, fixed JSON schemas).
2. Appendix A (The Alpha-Out Pipeline):
A concrete recipe for calculating the fine-structure constant ($\alpha \approx 1/137$) from simulations of the $S$ field, without using the actual value of $\alpha$ as input.
It is the "Holy Grail" that the author wants a third party to validate.
3. Doc 10.1 (Alpha Elasticity Suite):
A drill. It is a synthetic test (fake computer-generated data) to check that the verification scripts work and that they do not cheat (anti-leakage).
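The SCAN $\rightarrow$ REGION $\rightarrow$ REPORT chain of component 1, with its hash-chained provenance, can be sketched roughly as follows. This is a minimal mock-up; all function and field names here are mine, not the protocol's:

```python
import hashlib
import json

def sha256_of(obj):
    """Canonical SHA-256 of a JSON-serializable object (sorted keys, compact separators)."""
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def scan(raw_samples):
    """SCAN stage: wrap raw data and stamp it with its own hash."""
    return {"stage": "SCAN", "data": raw_samples, "hash": sha256_of(raw_samples)}

def region(scan_out, lo, hi):
    """REGION stage: filter the scan, chaining the upstream hash for provenance."""
    kept = [x for x in scan_out["data"] if lo <= x <= hi]
    return {"stage": "REGION", "data": kept, "parent": scan_out["hash"],
            "hash": sha256_of(kept)}

def report(region_out, gate):
    """REPORT stage: apply a deterministic gate and emit a PASS/FAIL verdict."""
    verdict = "PASS" if gate(region_out["data"]) else "FAIL"
    return {"stage": "REPORT", "verdict": verdict, "parent": region_out["hash"]}

# Toy run: the gate passes if at least 3 samples survive the region filter.
s = scan([0.1, 0.5, 0.9, 1.7, 2.2])
r = region(s, 0.0, 1.0)
print(report(r, gate=lambda d: len(d) >= 3)["verdict"])  # PASS
```

The point of the hash chain is that a consumer can re-verify each stage's `parent` hash independently, without trusting the producer's verdict.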
The Good (Methodologically Speaking)
If we judge this as an exercise in Open and Reproducible Computational Science, it is very well done and is unusually rigorous for individual speculative work:
· Anti-Trust: The insistence on third-party validation (Consumer/Producer roles), cryptographic hashes, and immutable gates matches high-assurance software-auditing standards.
· Calibration/Prediction Separation: This is a good practice to avoid overfitting. It says, "If I calibrate using $\alpha$, I can't use that same data to predict $\alpha$."
· Failure Management: A FAIL is not bad; it is a diagnosis. The protocol tells you exactly which gate failed (e.g. "the energy was not conserved enough"). This is invaluable for debugging complex simulations.
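A minimal sketch of what such an anti-leakage gate could look like, combining the calibration/prediction separation with a hard failure (the dataset names and structure are hypothetical, not taken from the repository):

```python
def check_no_leakage(calibration_keys, prediction_inputs):
    """Refuse to predict a quantity using any dataset that was used to calibrate it.

    Both arguments are sets of dataset identifiers; the names below are invented.
    """
    leaked = calibration_keys & prediction_inputs
    if leaked:
        raise ValueError(f"leakage: {sorted(leaked)} used for both calibration and prediction")
    return True

calib  = {"codata_alpha", "venus_ephemeris"}        # what the model was tuned on
inputs = {"sft_field_snapshots", "lattice_spacing"}  # what the prediction may see

print(check_no_leakage(calib, inputs))  # True: disjoint sets, no leakage

try:
    check_no_leakage(calib, inputs | {"codata_alpha"})
except ValueError as err:
    print("blocked:", err)
```

The synthetic drill in Doc 10.1 is, in essence, a test that a gate of this kind actually fires when it should.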
The Red Flags (Physics and Pragmatism)
Here's where you need to be extremely cautious:
1. Lack of a Real Solver (The Great Absence): The paper insistently repeats the tags [CI] (continuous/synthetic integration) and [REAL-ready]; almost nothing is marked [REAL-verified]. The author has built an incredibly sophisticated verification machine, but does not yet have the engine (the actual physical solver) that should power it.
· Analogy: it is a state-of-the-art wind tunnel with laser sensors and perfect analysis software, but the prototype aircraft to put inside it has not been built yet.
2. Complexity vs. Physical Clarity: The paper devotes ~50 pages to talking about how to verify and maybe 2 pages to the actual equation of motion of $S$ ($\partial_t^2 S - c^2 \nabla^2 S + \partial V/\partial S = 0$).
It's a classical nonlinear wave equation. The jump from that simple equation to "matter," "electromagnetism," and "spin-1/2" is a massive mathematical leap of faith that is not demonstrated here, only stated as "emergent phenomena."
3. The $\alpha$ Mapping Problem: The calculation of $\alpha$ depends on extracting $q^*, \epsilon^*, \hbar^*$ from the $S$ field.
The interpretation of a scalar field deformation as an "electric charge" is not trivial and is probably the weakest and most speculative point of the theory. The protocol treats it as a curve fitting problem (solve_q_eps.py), but the underlying physics to justify that fit is not validated.
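To make the "curve-fitting problem" concrete, here is a deliberately simplified stand-in for the kind of fit `solve_q_eps.py` might perform. The model shape, names, and data are my assumptions, not the actual script:

```python
# Hypothetical stand-in: given field-deformation samples y_i measured at probe
# points x_i, fit an effective charge q in a 1/r-like model y = q * f(x) by
# closed-form linear least squares: q_hat = sum(f_i * y_i) / sum(f_i**2).

def fit_effective_charge(xs, ys, f):
    num = sum(f(x) * y for x, y in zip(xs, ys))
    den = sum(f(x) ** 2 for x in xs)
    return num / den

xs = [1.0, 2.0, 4.0, 8.0]
q_true = 3.0
ys = [q_true / x for x in xs]                       # noiseless synthetic data
q_hat = fit_effective_charge(xs, ys, f=lambda x: 1.0 / x)
print(q_hat)  # 3.0 (exact on noiseless data)
```

The fit itself is trivial; the unvalidated step is the claim that the fitted $q$ *is* an electric charge, which no amount of curve fitting can establish.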
Conclusion: What to say about this?
If I were a physicist reviewing this for a journal, my verdict would be:
· Immediate Rejection as a Physics Paper.
It does not present new empirical evidence or solve a known open problem with rigorous calculation. It is a "meta-paper" or an "infrastructure proposal". It does not pass the basic filter of physical novelty.
If I were a data scientist or Verification and Validation (V&V) expert:
· Methodological applause.
It's a fascinating example of how to apply Continuous Integration (CI/CD), Schema Validation, and Deterministic Gating to speculative science.
It's a novel way to publish a "software-falsifiable" hypothesis. The rigor of the protocol is admirable.
My personal opinion:
· The paper is premature. It constructs an A+ verification framework for a theory that currently sits at plausibility level D.
The priority should be to simplify the theory and publish a single REAL-verified runner (e.g. nonlinear wave scattering of the $S$ field) that exhibits non-trivial behavior, rather than shipping fifteen REAL-ready packets that are still empty of real physics.
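For scale, such a minimal runner could start as small as a 1-D leapfrog integrator for the quoted equation of motion, paired with the energy-conservation gate the protocol already defines. This sketch assumes a quartic potential $V(S) = \lambda S^4/4$, which is my choice, not the paper's:

```python
import math

# 1-D leapfrog (kick-drift-kick) for  d^2S/dt^2 = c^2 d^2S/dx^2 - dV/dS,
# with assumed potential V(S) = lam*S**4/4, so dV/dS = lam*S**3.
N, L, c, lam = 200, 20.0, 1.0, 0.5
dx = L / N
dt = 0.4 * dx / c                                     # CFL-stable time step
S = [math.exp(-((i * dx - L / 2) ** 2)) for i in range(N)]  # Gaussian pulse
V = [0.0] * N                                         # dS/dt

def laplacian(S, i):
    # Second-order centered difference with periodic boundaries.
    return (S[(i + 1) % N] - 2 * S[i] + S[(i - 1) % N]) / dx**2

def energy(S, V):
    """Discrete total energy: kinetic + gradient + potential."""
    kin  = sum(0.5 * v * v for v in V)
    grad = sum(0.5 * c**2 * ((S[(i + 1) % N] - S[i]) / dx) ** 2 for i in range(N))
    pot  = sum(0.25 * lam * s**4 for s in S)
    return (kin + grad + pot) * dx

E0 = energy(S, V)
for _ in range(500):
    V = [v + 0.5 * dt * (c**2 * laplacian(S, i) - lam * S[i] ** 3) for i, v in enumerate(V)]
    S = [s + dt * v for s, v in zip(S, V)]
    V = [v + 0.5 * dt * (c**2 * laplacian(S, i) - lam * S[i] ** 3) for i, v in enumerate(V)]

drift = abs(energy(S, V) - E0) / E0
print(f"relative energy drift after 500 steps: {drift:.2e}")
```

Feeding the drift through an energy gate, and publishing the resulting SCAN/REGION/REPORT artifacts, would already be one honest REAL data point.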
This is a monument to the bureaucracy of verification.
It's impressive as a software engineering exercise in scientific integrity, but it doesn't contain any verified new physics.
It's like a 50-page construction contract for a house of which we have only a napkin sketch of the front door.
3. Comment to the author
Queral Rallo, F. (2026). SFT_Paper_Verification-First_Protocol_v0.2_DRAFT. Zenodo. https://doi.org/10.5281/zenodo.19233268
After reviewing GitHub and Zenodo
The project is more serious than it seemed based solely on reading the paper.
Several concrete data points:
133 commits in the repository (GitHub), indicating sustained development, not a one-off deposit.
Published on Zenodo on February 15, 2026, version v4 (SFT_Structural_Field_Theory_v4), with 54 MB of content (doi). Version 4 implies at least four iterations of the complete corpus.
327 total views and 4 downloads (doi) on Zenodo—still a very small audience, but the project exists within the open science ecosystem.
Archived in Software Heritage (doi), which gives it real permanence as a scientific software artifact.
The README describes an ordered verification path: EOC/CI → physical demo → External Review Pack → Alpha-Elasticity → Limitations (github), which is a reasonable pedagogical structure for external reviewers.
The core problem remains the same: 0 stars, 0 forks, 0 issues, 0 pull requests. No one external has run the runners, reported results, or initiated a technical discussion. The protocol is designed for third parties, but there are no third parties yet.
My verdict: The paper no longer strikes me as an isolated preprint—it's the documentation of a scientific software project with a real history. That's a different category altogether and deserves to be evaluated within that context.
Strengths: The maturity of version 4, with 133 commits, suggests that the author has iterated on revisions, fixes, and real improvements, not just deposited text.
The fact that it's on Zenodo with a DOI, archived in Software Heritage, and licensed under the MIT license meets all the standards of reproducible open science.
The repository description mentions leptons, quarks, CKM, gravity as gradients, and α-topology—the scope is broader than what the v0.1 paper suggested.
What I still miss:
No visible external contributors. The entire commit history is from the same author.
The REAL-ready modules still don't have any published results.
Without an active community, the verification protocol has not yet undergone its true test: having an external party execute it and report back.
Verdict
For the paper: rejection, but with a clear recommendation to resubmit to a computational-physics or open-science journal (not a physics-theory journal), focusing on the protocol. The theory needs further development before it can be submitted to a mainstream physics journal.
Regarding the project as a whole:
It's a serious, well-structured, independent project with real infrastructure. What it lacks isn't internal technical quality, but external traction.
A single group running the tests, confirming or refuting a REAL module, and publishing the artifacts would radically change its standing.
The question I would ask the author directly is: have you contacted any computational physics groups to perform an independent REAL run? That's the only thing missing to make the leap.