AI Agent Activities

Prompt Chaining Activity Diagram

  • Note: This is very experimental but working, and it is subject to change as better methods are developed. TODO: research an MCP server to burn fewer tokens; currently the context files sent to the LLM service consume a LOT of tokens.
```mermaid
flowchart TD
  start([Start CLI]) --> askDesc[Prompt description]
  askDesc --> askName[Prompt bog file name]
  askName --> loadCtx[Load context files]
  loadCtx --> attempt{{Attempt == 1?}}

  attempt -- Yes --> gen[LLM generate script]
  attempt -- No  --> fix[LLM fix script - prev code + traceback]

  gen --> write[Write script to temp folder]
  fix --> write

  write --> run[Run script with -o output dir]
  run --> success{Run ok AND file created?}

  success -- Yes --> done[Print success\nPrint stats\nExit]
  success -- No  --> cap[Capture stderr tail as traceback]
  cap --> retry{Attempt < max iters?}

  retry -- Yes --> incr[Increment attempt\nStore code and error]
  incr --> attempt

  retry -- No --> fail[Print failure\nPrint stats\nExit]
```
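
The loop in the diagram can be sketched in a few dozen lines of Python. This is only a minimal illustration of the generate → run → capture-traceback → fix cycle; the names here (`call_llm`, `run_chain`, `MAX_ITERS`, the `candidate.py` file name, and the way the `-o` output directory is checked) are assumptions for the example, not the actual CLI's code.

```python
# Minimal sketch of the prompt-chaining retry loop shown above.
# All identifiers (call_llm, run_chain, MAX_ITERS, candidate.py) are
# illustrative assumptions, not the gist author's actual implementation.
import subprocess
import sys
import tempfile
from pathlib import Path

MAX_ITERS = 3          # assumed retry budget ("max iters" in the diagram)
TRACEBACK_TAIL = 2000  # keep only the tail of stderr for the fix prompt


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM service the CLI actually calls."""
    raise NotImplementedError


def run_chain(description: str, bog_name: str, context: str, out_dir: Path) -> bool:
    prev_code, prev_error = "", ""
    for attempt in range(1, MAX_ITERS + 1):
        if attempt == 1:
            # Attempt 1: generate a fresh script from the description + context files.
            prompt = f"{context}\n\nWrite a script that: {description}"
        else:
            # Later attempts: ask the LLM to fix the previous code given the traceback.
            prompt = (
                f"{context}\n\nThe script below failed.\n\n{prev_code}\n\n"
                f"Traceback:\n{prev_error}\n\nReturn a corrected script."
            )

        code = call_llm(prompt)

        # Write the candidate script to a temp folder and run it with -o <output dir>.
        script = Path(tempfile.mkdtemp()) / "candidate.py"
        script.write_text(code)
        result = subprocess.run(
            [sys.executable, str(script), "-o", str(out_dir)],
            capture_output=True, text=True,
        )

        # Success = run ok AND the expected output file was created.
        if result.returncode == 0 and (out_dir / bog_name).exists():
            print(f"Success on attempt {attempt}")
            return True

        # Capture the stderr tail as the traceback for the next fix prompt.
        prev_code, prev_error = code, result.stderr[-TRACEBACK_TAIL:]

    print("Failed after max iterations")
    return False
```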