ReadmeAI

I tried several VS Code extensions (README Generator AI, README Generator.AI, ReadMeAI) and none of them worked; in the end the GitHub project readme-ai did.

Step 1

Mainland China mirror: https://gitcode.com/gh_mirrors/re/readme-ai/overview?utm_source=csdn_github_accelerator (original repo: https://github.com/eli64s/readme-ai.git)

Install with pipx as described in the project's instructions.
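
With pipx already available, the install is a one-liner (the package name readmeai matches the pipx venv path used below):

```powershell
# install readme-ai into its own isolated pipx environment
pipx install readmeai
```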

Step 2

The OpenAI API is too expensive, readme-ai does not natively support a custom API URL, and there is no --api_key CLI option, so the configuration file has to be edited.

The API was chosen from https://llmpricecheck.com/ as the highest-quality yet cheap option: meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo (https://deepinfra.com/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo/api?example=openai-http).
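
Before wiring it into readme-ai, it can be worth checking that the key and the OpenAI-compatible endpoint work on their own. A minimal sketch in PowerShell, assuming $Env:OPENAI_API_KEY has already been set as described in the api_key step below:

```powershell
# minimal request against DeepInfra's OpenAI-compatible chat completions endpoint
$body = @{
    model    = "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"
    messages = @(@{ role = "user"; content = "Say hello." })
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://api.deepinfra.com/v1/openai/chat/completions" `
    -Headers @{ Authorization = "Bearer $Env:OPENAI_API_KEY" } `
    -ContentType "application/json" `
    -Body $body
```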

The following needs to be configured.

Change the URL

c:\Users\gudezhi\pipx\venvs\readmeai\Lib\site-packages\readmeai\config\settings\config.toml

```toml
# change the following lines
[llm]
base_url = "https://api.deepinfra.com/v1/openai/chat/completions"
host_name = "https://api.deepinfra.com/"
path = "v1/openai/chat/completions"
```
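
The path above is specific to this machine; if pipx put the venv somewhere else, one way to locate the file is to ask the venv's own Python (a sketch, assuming pipx's default Windows layout):

```powershell
# print the location of readmeai's bundled config.toml
& "$HOME\pipx\venvs\readmeai\Scripts\python.exe" -c "import readmeai, pathlib; print(pathlib.Path(readmeai.__file__).parent / 'config' / 'settings' / 'config.toml')"
```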

model

The model is passed on the command line with --model.

api_key

The API key is set through an environment variable:

```powershell
$Env:OPENAI_API_KEY = "xxx"
```

Step 3

In the root directory of the target project (or pointing --repository at a GitHub URL directly):

```powershell
readmeai --repository . --api openai --model meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
```
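
The same flags also work against a remote repository instead of a local checkout (the URL below is just the readme-ai repo itself, as an illustration):

```powershell
# generate a README for a remote GitHub repo instead of the local directory
readmeai --repository https://github.com/eli64s/readme-ai --api openai --model meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
```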

A few errors were printed at the end, but the README file was generated anyway.
