Run the local paper-review skill bundle exactly as if the user uploaded and invoked it directly.
Use SKILL.md as the authoritative workflow.
Use the bundled reference files and script files as part of the invoked skill context.
Do not substitute a fallback workflow or omit any bundled instructions.
For this invocation, write the review to paper-reviews/review-2026-03-26-001243.tex.
name: paper-review
description: review scientific and technical manuscripts, especially LaTeX drafts, for mathematical errors, scientific inconsistencies, unsupported claims, implementation ambiguity, factual problems, and important writing issues. use when the user asks to proofread a paper, review a manuscript, check equations or derivations, audit appendix formulas or notation, flag typos or grammar issues, or generate or update a latex review file with concrete fixes.
Use this skill to produce one compilable LaTeX review file that improves scientific quality first and writing quality second.
- Read the full manuscript before writing findings.
- Use extra thinking by default.
- Write one standalone LaTeX review file to paper-reviews/review-YYYY-MM-DD-HHMMSS.tex, unless the user gives a different path.
- Keep the file compilable with minimal LaTeX scaffolding.
- Focus on concrete issues and fixes, not generic reviewer prose.
- Do not require a fixed review template or section layout unless the user asks for one.
- Treat proofread, flag typos, grammar, writing quality, and similar wording as a request to spend more budget on the editorial sweep after the technical audit.
Prioritize technical correctness over prose quality.
If attention is limited, spend it in this order:
- equations, derivations, notation, units, definitions, and algorithmic correctness
- consistency between claims, evidence, figures, tables, captions, and conclusions
- editorial quality, with special attention to high-confidence issues that are cheap to verify and fix
Do not let editorial findings crowd out technical findings.
Spend extra thinking on:
- whether the paper's main claims are actually supported by the manuscript
- whether equations, notation, units, and quantitative statements are internally consistent
- whether appendix formulas and implementation-facing expressions are correct and unambiguous
- whether inverse-trig expressions, coordinate transforms, and branch conventions are correctly specified
- whether abstract, results, figures, tables, and conclusion agree
- whether claims are overstated, under-qualified, or missing key caveats
- whether any statements seem factually wrong, unsupported, or need verification
- whether there are obvious, high-confidence editorial defects that are fast to confirm and worth surfacing
Do not narrate the reasoning process.
- Read the full manuscript to understand the main claims, evidence, equations, appendices, and structure.
- Identify the highest-risk technical content:
- key equations and derivations
- symbol definitions
- quantitative claims
- coordinate transformations
- appendix formulas
- implementation-facing expressions
- Perform a technical audit first.
- Re-check the highest-risk technical findings.
- Perform a bounded editorial pass after the technical audit.
- If code execution is available, run scripts/proofing_scan.py (or otherwise do the manual pattern scan described below) to catch high-confidence copy/notation defects that are easy to miss.
- Perform one short final proofing sweep for obvious, high-confidence editorial issues that are cheap to verify.
- Write the review file.
For long papers, inspect sections in chunks when helpful, but do not force a multi-pass ritual if it does not improve the review.
For frequently used symbols (especially subscripted/superscripted angles, radii, and frame labels), record:
- where the symbol is first defined
- the verbal descriptor attached to it (e.g., “final polar angle at the source radius”)
- any later descriptor variants (e.g., “initial inclination”) and whether they conflict
Flag conflicts where the same symbol is described with inconsistent temporal/role adjectives (initial/final, source/observer, emission/reception) even if the equations are otherwise correct. These often indicate a real conceptual bug or an implementation-facing ambiguity.
Explicitly check for:
- unsupported or overstated claims
- conclusions that do not follow from the reported results
- inconsistencies between text, equations, figures, tables, and captions
- undefined symbols or symbols used before definition
- notation drift between sections or between the main text and appendix
- unit errors or dimensional inconsistencies
- arithmetic or quantitative inconsistencies
- sign errors, missing factors, normalization ambiguity, or missing case distinctions
- inverse-trig, branch, or quadrant ambiguity, for example arctan(x/y) where atan2 or an explicit branch convention may be needed
- coordinate-system ambiguity, including misuse of terms like initial, final, source, observer, local, global, polar, or azimuthal
- equations that are plausible in prose but ambiguous or wrong as written for implementation
- mismatch between a mathematical definition and an appendix or code-facing formula
- missing assumptions, qualifiers, or uncertainty that materially affect interpretation
- factual statements that appear unsupported or likely incorrect
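The inverse-trig item above is worth a concrete illustration (a minimal sketch, not part of the skill workflow): an arctangent of a ratio cannot distinguish points in opposite quadrants, because the ratio discards both signs, while a two-argument atan2 keeps them.

```python
import math

# Points in opposite quadrants: (1, 1) in Q1 and (-1, -1) in Q3.
# The single-argument form sees only the ratio 1/1 = -1/-1 = 1,
# so both points collapse to the same angle, pi/4.
ratio_q1 = math.atan(1 / 1)
ratio_q3 = math.atan(-1 / -1)  # same ratio, same (wrong) angle

# atan2 receives the two signed components separately and
# recovers the true quadrant of each point.
angle_q1 = math.atan2(1, 1)    # pi/4
angle_q3 = math.atan2(-1, -1)  # -3*pi/4
```

This is exactly the class of defect the branch-convention checks in this skill are meant to surface: an expression that looks fine in prose but is ambiguous or wrong when implemented.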
When judging technical content, distinguish clearly between:
- definite error: contradicted by the manuscript itself or by straightforward math, logic, or internal evidence
- unsupported claim: stated more strongly than the manuscript supports
- likely issue: plausibly wrong or misleading, but not fully provable from the manuscript alone
- needs external verification: may depend on outside literature or facts not established in the manuscript
Do not present verification-needed items as definite errors.
If the manuscript contains equations, derivations, appendices, or implementation formulas, inspect them explicitly.
Do not finish the review without checking the key technical content for:
- branch conventions
- sign conventions
- symbol definitions
- dimensional consistency
- implementation ambiguity
If no concrete technical issues are found, say explicitly that these checks were performed and no concrete issues were identified.
Editorial review is secondary to technical review, but it is still part of the job.
Look for:
- typos, spelling, grammar, and punctuation issues
- awkward, vague, or ambiguous phrasing
- sentences that obscure a technical point
- broken references or citation mismatches
- inconsistent terminology or notation
- local wording that could be made clearer or more precise
- malformed sentences in derivations or equation-adjacent prose
- obvious bibliography defects, including duplicated punctuation and malformed quoted titles
- capitalization or styling mistakes in proper names, product names, languages, or branded terms
- reference-list hygiene (required): duplicated punctuation (e.g., ", ," or '" , ,'), malformed quoted titles, inconsistent capitalization, and obvious citation-format glitches
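The reference-hygiene patterns above are mechanical enough to check with a regex; a minimal sketch, mirroring two of the rules bundled in scripts/proofing_scan.py (the example entry is invented for illustration):

```python
import re

# Two high-confidence reference-list defects: a duplicated comma, and
# a comma pair trailing a closing quote (common in mangled BibTeX output).
DOUBLE_COMMA = re.compile(r",\s*,")
QUOTE_DOUBLE_COMMA = re.compile(r"\"\s*,\s*,")

entry = 'A. Author, , "A malformed title" , , Journal, 2020.'
flagged = [
    name
    for name, pat in [
        ("PUNC_DOUBLE_COMMA", DOUBLE_COMMA),
        ("PUNC_QUOTE_DOUBLE_COMMA", QUOTE_DOUBLE_COMMA),
    ]
    if pat.search(entry)
]
```

Hits like these are candidates only; each should still be spot-checked against the manuscript before being reported as an issue.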
Separate editorial issues into two buckets mentally:
- meaningful editorial issues: affect meaning, create technical ambiguity, or noticeably reduce professionalism
- bounded proofing issues: minor but obvious, high-confidence, low-cost defects that are fast to verify and fix
Always include the meaningful editorial issues.
Also include a bounded number of proofing issues when they are obvious and unambiguous, especially when they occur in:
- derivation-heavy sections
- appendices with implementation formulas
- headings, captions, or references
- repeated terminology or capitalization
Do not spend space on subjective line editing or stylistic preferences.
Cap bounded proofing issues at a small number unless the user explicitly asks for exhaustive proofreading.
Do a quick, mechanical scan for high-yield, high-confidence defects and include any hits as proofing-sweep items:
- duplicated punctuation and spacing artifacts: ", ,", ",,", "..", '" , ,' and similar variants
- semicolon capitalization anomalies: ";" followed by a capital letter where the second clause is not a new sentence/proper noun
- malformed equation-adjacent grammar frames (common in derivations): "into … into …", "with … into …", "plugging … into together …"
- programming/product-name capitalization: JavaScript, iOS, iPhone, etc.
- quadrant/branch ambiguity patterns, especially arctan(x/y)-style expressions that should be atan2(y,x) or require an explicit branch convention
If code execution is available, prefer running scripts/proofing_scan.py (below) and then spot-check the top hits.
Example command:
python scripts/proofing_scan.py <path-to-pdf-or-text> --max-hits 80
Use the returned hits as candidates for the proofing-sweep subsection, and spot-check each before presenting it as an issue.
Before finishing, ask silently:
- Are there any obvious, high-confidence editorial issues still on the page that I can name precisely?
- Did I check the bibliography or references for duplicated punctuation, broken quotation punctuation, or citation-format glitches?
- Did I check dense derivation prose near equations for malformed sentences?
- Did I check appendices for capitalization, terminology, or implementation-facing copy issues?
If yes, include the best few findings instead of omitting them.
Write LaTeX only.
Do not force a rigid section structure.
Structure the review in whatever way is clearest and most compact, but make sure it covers:
- overall assessment of manuscript quality
- the most important technical findings
- any meaningful editorial findings
- a short proofing-sweep subsection when there are obvious high-confidence copyediting defects worth fixing
- the highest-priority fixes
- any items that need external verification
For each technical finding, include:
- location
- problem
- why it matters
- suggested fix
For each editorial finding, include:
- location
- problem
- why it matters
- suggested fix
- optional candidate rewrite for sentence-level edits
For proofing-sweep items, a compact bullet list is acceptable if each bullet includes:
- location
- problem
- suggested fix
Put technical and factual issues before prose issues unless the user explicitly asks for a prose-first review.
- Prefer concrete issues over generic praise or commentary.
- Prefer local, checkable technical failures over broad interpretive commentary.
- Prefer evidence-backed findings over stylistic preferences.
- Keep the review concise and operational.
- If a section of the paper is clean, omit filler.
- If there are only a few real issues, say so and keep the review short.
- Do not pad the review with low-value prose edits when technical scrutiny should be the focus.
- Good findings are things like:
- branch ambiguity in an inverse-trig expression
- inconsistent variable definition
- dimensional mismatch
- undefined symbol in a key equation
- mismatch between a derivation and an appendix formula
- malformed sentence in a dense technical derivation
- duplicated punctuation in a reference entry
- capitalization error in a proper name or implementation-facing term
- Lower-value findings are things like:
- generic overclaiming language without a concrete technical issue
- broad style commentary
- subjective rewrite suggestions that do not improve correctness or clarity
Create a new timestamped review file by default.
Only update an existing review file if the user explicitly asks.
Compilation of the review into PDF is not required unless the user asks for it or it is needed to validate that the LaTeX is compilable.
Use references/review-rubric.md if it provides a useful review checklist.
</skill_file>
<skill_file path="references/review-rubric.md">
- consistency of notation, terminology, and claims
- tone calibration for abstract, introduction, results, and conclusion
- claim strength versus evidence shown
- figure, table, and caption alignment with the text
- citation and reference integrity
- section-to-section coherence after local edits
- local argument flow
- undefined terms or symbols
- mismatched references, labels, or citations
- results or experimental claims that overreach
- discussion or conclusion statements that drift from earlier sections
- top-level narrative still matches local section findings
- tone remains consistent across sections
- no contradictions were introduced while refining local findings
- priority order of fixes still makes sense in the final file </skill_file>
<skill_file path="scripts/proofing_scan.py">
#!/usr/bin/env python3
"""Quick, high-yield proofing scan for scientific manuscripts.
Goal: catch high-confidence defects that are easy to miss in a manual pass:
- duplicated punctuation (", ,", ",,", "..", '" , ,')
- semicolon-capitalization anomalies ('; A' etc.)
- equation-adjacent malformed frames ("into ... into ...", "plugging ... into together")
- common product/language capitalization (javascript/iOS/iPhone)
- arctan(x/y) quadrant ambiguity patterns
Usage: python scripts/proofing_scan.py <path-to-pdf-or-text> [--max-hits 80]
Output: one line per hit: [RULE_ID] p<page>: <snippet>
Notes:
- Page numbers are 1-indexed.
- Snippets are best-effort and may include line breaks collapsed to spaces. """
from __future__ import annotations
import argparse
import re
from pathlib import Path
from typing import List, Tuple
from pypdf import PdfReader
def _read_pdf_pages(pdf_path: Path) -> List[str]:
    reader = PdfReader(str(pdf_path))
    pages = []
    for p in reader.pages:
        try:
            txt = p.extract_text() or ""
        except Exception:
            txt = ""
        txt = re.sub(r"\s+", " ", txt).strip()
        pages.append(txt)
    return pages


def _read_text_as_single_page(path: Path) -> List[str]:
    txt = path.read_text(errors="ignore")
    txt = re.sub(r"\s+", " ", txt).strip()
    return [txt]


def _snip(text: str, start: int, end: int, window: int = 60) -> str:
    lo = max(0, start - window)
    hi = min(len(text), end + window)
    snippet = text[lo:hi]
    return snippet.strip()


def _find_regex(
    rule_id: str,
    pattern: re.Pattern,
    pages: List[str],
    max_hits: int,
) -> List[Tuple[str, int, str]]:
    hits: List[Tuple[str, int, str]] = []
    for i, page_text in enumerate(pages, start=1):
        if not page_text:
            continue
        for m in pattern.finditer(page_text):
            hits.append((rule_id, i, _snip(page_text, m.start(), m.end())))
            if len(hits) >= max_hits:
                return hits
    return hits
def main() -> None:
    ap = argparse.ArgumentParser()
    ap.add_argument("path", type=str, help="PDF or text file")
    ap.add_argument(
        "--max-hits",
        type=int,
        default=80,
        help="Cap total hits across all rules",
    )
    args = ap.parse_args()

    path = Path(args.path)
    if not path.exists():
        raise SystemExit(f"File not found: {path}")

    if path.suffix.lower() == ".pdf":
        pages = _read_pdf_pages(path)
    else:
        pages = _read_text_as_single_page(path)

    rules: List[Tuple[str, re.Pattern]] = [
        ("PUNC_DOUBLE_COMMA", re.compile(r",\s*,")),
        ("PUNC_DOUBLE_PERIOD", re.compile(r"\.\s*\.")),
        ("PUNC_QUOTE_DOUBLE_COMMA", re.compile(r"\"\s*,\s*,")),
        ("SEMICOLON_CAP", re.compile(r";\s+[A-Z]")),
        ("FRAME_INTO_INTO", re.compile(r"\binto\b.{0,80}?\binto\b", re.IGNORECASE)),
        (
            "FRAME_PLUGGING_INTO_TOGETHER",
            re.compile(r"plugging\s+into\s+together", re.IGNORECASE),
        ),
        ("CAP_JAVASCRIPT", re.compile(r"\bjavascript\b")),
        ("CAP_IOS", re.compile(r"\bios\b")),
        ("CAP_IPHONE", re.compile(r"\biphone\b")),
        (
            "ARCTAN_DIV",
            re.compile(
                r"\barctan\s*\(\s*[^()]{0,40}?/[^()]{0,40}?\)",
                re.IGNORECASE,
            ),
        ),
    ]

    remaining = args.max_hits
    out: List[Tuple[str, int, str]] = []
    for rule_id, pat in rules:
        if remaining <= 0:
            break
        hits = _find_regex(rule_id, pat, pages, remaining)
        out.extend(hits)
        remaining = args.max_hits - len(out)

    if not out:
        print("[OK] No high-confidence pattern-scan hits found.")
        return

    for rule_id, page, snippet in out:
        print(f"[{rule_id}] p{page}: {snippet}")


if __name__ == "__main__":
    main()
</skill_file>