
@pshapiro
Last active June 14, 2025 15:17
Nostro, the Oracle prompt

Role & Voice

You are Nostro, a seasoned futurist–strategist who maps large-scale cause-and-effect patterns across history, present data, and plausible futures.

► Persona: frank, vivid, and succinct, never florid; strives for clarity, not mysticism.
► Ethos: values evidence, flags uncertainty, and respects human impact (no ice-cold fatalism).
► Method: combines historical precedent, current indicators, and system dynamics; treats free will as limited but material, so low-probability shocks remain possible.

Input Rules

  1. The user must specify a time horizon in the form YYYY–YYYY (e.g., 2025–2035).
  2. If the request is ambiguous, ask exactly one clarifying question before proceeding.
  3. Accept optional focus tags (e.g., #energy #geopolitics #AI) to weight the forecast.

Output Structure

  1. Executive Snapshot – Two sentences capturing the prevailing global mood of the requested horizon.
  2. Horizon Matrix – Markdown table, one row per sub-period (≤ 4). Columns:
     • Years (e.g., 2025-27)
     • Core Events – 2-3 bullets
     • Driving Forces – 2-3 bullets (economic, tech, demographic, ecological, ideological)
     • Confidence – High / Medium / Low
  3. Key Signals – ≤ 5 dated signposts that would validate the trajectory.
  4. Implications & Wildcards – One paragraph on intersecting trends plus the single most plausible black-swan twist (≤ 2 sentences).

Style & Citation Guidelines

  • Use bold for section headings; keep tables clean.
  • Calibrate language: likely, plausible, speculative rather than absolutes.
  • Cite sparingly with (Source, date), e.g., (World Bank 2024); avoid raw URLs.
  • When data is thin, label confidence Low instead of faking certainty.
  • Do not refuse to answer; if evidence is weak, state that and proceed with best-estimate reasoning.

<<Nostro, forecast 2025–2035 with a focus on #AI #energy #US-China.>>
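For readers who want to enforce the Input Rules mechanically, here is a minimal sketch of a wrapper-side check that could run before a request is sent to the model. It is not part of the Nostro prompt itself; the helper name `parse_request` and the returned dictionary shape are assumptions for illustration.

```python
import re

# Assumed wrapper-side validation of the Input Rules above; not part of the prompt.
HORIZON_RE = re.compile(r"\b(\d{4})\s*[–-]\s*(\d{4})\b")  # rule 1: YYYY–YYYY
TAG_RE = re.compile(r"#([\w:-]+)")                        # rule 3: optional #tags

def parse_request(text: str) -> dict:
    """Return the horizon and tags, or a single clarifying question (rule 2)."""
    match = HORIZON_RE.search(text)
    tags = TAG_RE.findall(text)
    if match is None:
        return {"ok": False,
                "clarify": "Which time horizon (YYYY–YYYY) should I forecast?"}
    start, end = map(int, match.groups())
    if end <= start:
        return {"ok": False,
                "clarify": "The horizon's end year must come after its start year."}
    return {"ok": True, "horizon": (start, end), "tags": tags}

print(parse_request("Nostro, forecast 2025–2035 with a focus on #AI #energy #US-China."))
# {'ok': True, 'horizon': (2025, 2035), 'tags': ['AI', 'energy', 'US-China']}
```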

@munir-abbasi

I do this for all the prompts that I find on the internet. I just ask the question:

Critically dissect this prompt for <subject/purpose> with an aim to find flaws, blindspots and weaknesses in it:

I then paste the prompt under this. More or less, that's the workflow.

@munir-abbasi

This is a highly structured and intellectually ambitious prompt with clear intent. However, even sophisticated prompts like this can benefit from rigorous scrutiny. Here's a critical dissection identifying flaws, blind spots, and weaknesses, categorized for clarity.


🧭 1. Role Definition: Strong Concept, Risk of Overload

Strength: The “Nostro” persona is vivid, intellectually credible, and well-framed.

Weakness:

  • Overcompressed Complexity – Combining “historical precedent,” “system dynamics,” and “plausible futures” in one role could overload the AI, especially without tiering the analysis depth.

  • Fix: Consider modularity: allow toggles like “emphasize historical analogs” or “lean on system dynamics” to adjust cognitive load.


📅 2. Time Horizon Input Rule: Too Rigid

Issue: Demanding a time range in strict YYYY–YYYY format creates friction.

  • Why it’s a problem: Casual users may say “next decade” or “mid-2030s,” which violates the rule and halts execution.

  • Fix: Accept natural language (e.g., “next 5 years”) and internally convert it to the YYYY–YYYY format.
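A minimal sketch of how that conversion might look as a pre-processing step, assuming a small set of hand-written phrase patterns (the function name `normalize_horizon` and the patterns themselves are illustrative, not anything the original prompt specifies):

```python
import datetime
import re

def normalize_horizon(phrase: str, today: datetime.date | None = None) -> str | None:
    """Map a few common natural-language horizons to the strict YYYY–YYYY form."""
    today = today or datetime.date.today()
    year = today.year
    if m := re.search(r"next\s+(\d+)\s+years?", phrase, re.IGNORECASE):
        return f"{year}–{year + int(m.group(1))}"
    if re.search(r"next\s+decade", phrase, re.IGNORECASE):
        return f"{year}–{year + 10}"
    if m := re.search(r"mid-?(\d{4})s", phrase, re.IGNORECASE):
        decade = int(m.group(1))
        return f"{decade + 4}–{decade + 6}"   # "mid-2030s" -> "2034–2036"
    if m := re.search(r"\b(\d{4})\s*[–-]\s*(\d{4})\b", phrase):
        return f"{m.group(1)}–{m.group(2)}"   # already in the strict form
    return None  # unresolved: fall back to the single clarifying question

print(normalize_horizon("next decade", today=datetime.date(2025, 6, 14)))  # 2025–2035
```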


🤖 3. Clarification Rule: Too Restrictive

Rule: “Ask exactly one clarifying question if ambiguous.”

  • Why it’s a problem: Real ambiguity may require layered clarification (e.g., vague focus + unclear horizon).

  • Fix: Expand to “ask up to 2 clarification questions if needed, but only one at a time to avoid user overload.”


🏷 4. Focus Tags: Useful, But No Examples for Granularity

Issue: Hashtags like #energy or #AI are helpful but open-ended.

  • Why it’s a problem: The AI may interpret #AI as infrastructure, ethics, or geopolitics—without knowing user intent.

  • Fix: Offer tiered tags or sub-tags (e.g., #AI:governance, #energy:storage) to guide focus precision.
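If tiered tags were adopted, grouping them by domain is straightforward; a minimal sketch, assuming a ":" separator (the helper name `parse_focus_tags` is hypothetical):

```python
def parse_focus_tags(tags: list[str]) -> dict[str, list[str]]:
    """Group tags such as 'AI:governance' or 'energy:storage' by domain."""
    grouped: dict[str, list[str]] = {}
    for tag in tags:
        domain, _, subfocus = tag.partition(":")  # assumed domain:sub-focus encoding
        grouped.setdefault(domain, [])
        if subfocus:
            grouped[domain].append(subfocus)
    return grouped

print(parse_focus_tags(["AI:governance", "AI:compute", "energy:storage", "geopolitics"]))
# {'AI': ['governance', 'compute'], 'energy': ['storage'], 'geopolitics': []}
```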


📊 5. Output Structure: Excellent Framework, But May Bottleneck Insight

  • Horizon Matrix Risk: Restricting each period to 2–3 bullets could oversimplify complex multi-driver trends, especially in chaotic periods.

    • Fix: Allow an “expanded mode” toggle for users who want 4–5 bullets or richer annotations.

  • Key Signals Cap: ≤5 is clean, but if three domains are covered (e.g., tech + climate + finance), this may be too sparse.

    • Fix: Permit grouping signals by domain or a “bonus signal” if uncertainty is high.


📈 6. Confidence Labeling: Conceptually Sound, But Needs Criteria

Issue: Confidence = High / Medium / Low, but with no definition.

  • Why it’s a problem: Without explicit thresholds, labeling becomes subjective or inconsistent.

  • Fix: Add short internal rubric (e.g., High = >70% expert agreement or data-backed trendline).
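A sketch of what such a rubric could look like if written out as code; the 0.7 and 0.4 thresholds simply echo the ">70% expert agreement" example above and would need real calibration:

```python
def confidence_label(expert_agreement: float, has_data_trendline: bool) -> str:
    """Map rough evidence signals to the High / Medium / Low labels (illustrative)."""
    if expert_agreement > 0.7 or has_data_trendline:
        return "High"
    if expert_agreement > 0.4:
        return "Medium"
    return "Low"

print(confidence_label(0.75, False))  # High
print(confidence_label(0.50, False))  # Medium
print(confidence_label(0.20, False))  # Low
```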


🎯 7. Ethos vs. Mandate Tension

Issue: The prompt says “Do not refuse to answer,” yet also asks the model to “respect human impact” and avoid “ice-cold fatalism.”

  • Why it’s a problem: Forecasting deeply negative trends (e.g., collapse scenarios) might appear fatalistic, which the ethos discourages.

  • Fix: Clarify tone: “Present bleak possibilities with sober language and actionable framing—not cynicism.”


🔍 8. Citation Strategy: Efficient but Possibly Fragile

Issue: “Cite sparingly with (Source, date); avoid raw URLs.”

  • Why it’s a problem: This may limit traceability or user follow-up in deeper research contexts.

  • Fix: Allow optional footnote-style citations for power users who want sources expanded at the end.


🔁 Summary Table: Critical Issues & Fixes

| Category | Issue | Suggested Fix |
| --- | --- | --- |
| Input Flexibility | Rigid date format | Accept natural phrasing and convert |
| Clarification Flow | Single question limit | Allow 1–2 step clarification |
| Tag Ambiguity | Vague focus tags | Add optional sub-tag structure |
| Insight Compression | Matrix bullet cap | Offer “expanded mode” toggle |
| Confidence Labels | No defined thresholds | Add brief rubric |
| Fatalism Risk | Ethos vs. bleak futures | Encourage sober tone + actionable framing |
| Citation Depth | Sparse only | Allow optional source expansion |

Would you like me to refactor this prompt into a version that’s more user-proof, robust to ambiguity, and scalable for different depth levels?
