A script that translates an input string using a llama.cpp server.
Usage:
python LLM_Translator.py
[-h]
[-i input file path] [-o output file path] [-p prompt file path] [-d dictionary file path]
[-u llama.cpp server URL] [--quiet] [--no-outfile] [--include-input] [--min-context-count minimum number of context turns to keep]
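The request/response round trip behind such a translator can be sketched as follows. This is a minimal, hypothetical sketch, not the actual implementation of LLM_Translator.py: the helper names and the `{text}` placeholder convention are assumptions, and only the llama.cpp server's `/completion` endpoint and its `prompt`/`n_predict`/`temperature` fields are taken from the real API.

```python
import json
import urllib.request

def build_request(text: str, prompt_template: str, url: str = "http://127.0.0.1:8080"):
    """Build a /completion request for the llama.cpp server.

    prompt_template is assumed to contain a {text} placeholder for the
    string to translate (a simplification of the real prompt file).
    """
    payload = {
        "prompt": prompt_template.format(text=text),
        "n_predict": 512,       # upper bound on generated tokens
        "temperature": 0.0,     # deterministic output suits translation
    }
    return url + "/completion", json.dumps(payload).encode("utf-8")

def translate(text: str, prompt_template: str, url: str = "http://127.0.0.1:8080") -> str:
    """POST the prompt to the server and return the generated text."""
    endpoint, body = build_request(text, prompt_template, url)
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"].strip()
```

A dictionary file, as in the real script, would be folded into the prompt template before calling `translate`.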
<#
This script customizes the PowerShell command line to detect when an entered command is not found.
If a command is not recognized, it automatically rewrites the line to call an LLM assistant for help.
To run this script, execute it in the console as follows:
. .\copilot_shell.ps1
import aiohttp
import asyncio
import json
import mediawiki
import re
import datetime
"""
A sample implementation of an agent that has a reasoning LLM search Wikipedia before answering the user.
When a search command is generated inside the <think> tag, inference is interrupted at that point, function calling is performed to search Wikipedia, the search results are embedded into the <think> tag, and inference resumes.
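The interrupt-and-search loop described above hinges on spotting a search command while the `<think>` block is still open. A minimal sketch, assuming a hypothetical command syntax `search("...")` (the actual command format used by the script is not shown here):

```python
import re

# Hypothetical pattern: the model emits search("query") inside <think>.
SEARCH_RE = re.compile(r'search\("([^"]+)"\)')

def find_search_command(partial_output: str):
    """Return the pending Wikipedia query, or None.

    Only text inside a still-open <think> block is scanned, mirroring
    the description above: generation is interrupted as soon as a
    search command appears during reasoning, before the answer begins.
    """
    think_start = partial_output.rfind("<think>")
    if think_start == -1:
        return None                      # no reasoning block yet
    think_text = partial_output[think_start:]
    if "</think>" in think_text:
        return None                      # reasoning already finished
    m = SEARCH_RE.search(think_text)
    return m.group(1) if m else None
```

On a hit, the driver would stop generation, call the Wikipedia search, splice the results into the `<think>` text, and resume inference from there.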
function Filter-Duplication
{
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline)]
        [object[]]
        $InputObject
    )
    begin
    {
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
models = {
    "stablelm": "stabilityai/japanese-stablelm-base-gamma-7b",
    "chatntq": "NTQAI/chatntq-ja-7b-v1.0",
    "mistral": "mistralai/Mistral-7B-v0.1",
    "starling": "Nexusflow/Starling-LM-7B-beta",
    "antler": "Elizezen/Antler-7B",
}
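As a usage sketch for the alias table above (the helper name `resolve_model` is an assumption, not part of the snippet), the short names can be validated before the repo id is handed to `AutoModelForCausalLM.from_pretrained`:

```python
models = {
    "stablelm": "stabilityai/japanese-stablelm-base-gamma-7b",
    "chatntq": "NTQAI/chatntq-ja-7b-v1.0",
    "mistral": "mistralai/Mistral-7B-v0.1",
    "starling": "Nexusflow/Starling-LM-7B-beta",
    "antler": "Elizezen/Antler-7B",
}

def resolve_model(name: str) -> str:
    """Map a short alias to its Hugging Face repo id.

    Unknown aliases raise immediately with the valid choices listed,
    instead of failing later inside from_pretrained with a less
    helpful download error.
    """
    try:
        return models[name]
    except KeyError:
        raise ValueError(
            f"unknown model {name!r}; choose one of {', '.join(models)}"
        ) from None
```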
How to avoid PSAMSIMethodInvocationLogging. The previously introduced approach dynamically generated a cmdlet to execute the method via reflection. The approach here instead defines a type for the method invocation and triggers the reflective call by casting a Hashtable to that type. In the example, with Windows Defender real-time protection enabled, a process that took about 35 seconds completed in roughly 1 second with this approach.
from transformers import GPTJForCausalLM, AlbertTokenizer
import torch
model = 'AIBunCho/japanese-novel-gpt-j-6b'
tokenizer = AlbertTokenizer.from_pretrained(model, keep_accents=True, remove_space=False)
model = GPTJForCausalLM.from_pretrained(
    model,
    load_in_4bit = True,
    torch_dtype = torch.bfloat16,
    device_map = 'auto')
Method execution in PowerShell 7.3 and later has been slowed down by PSAMSIMethodInvocationLogging. Just before each method call, this feature invokes AMSI's logging method with the information and arguments of the method about to be executed. Originally an experimental feature, it has since been promoted to an official one.
The slowdown has been partially addressed, but with Windows Defender's real-time protection enabled, executing methods inside a loop is still extremely slow, especially when the arguments are large. PowerShell/PowerShell#19431
(Measure-Command{$a='x';$b='a'*1000;foreach($i in (1..1000000)){$y=$a.Contains($b)}}).TotalSeconds
function Test-All
{
    [CmdletBinding()]
    param(
        [scriptblock]
        $Predicate,
        [Parameter(ValueFromPipeline)]
        [PSObject]
        $InputObject
    )
# based on StableLM chat
# https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
import time
import numpy as np
from torch.nn import functional as F
import os