
@mutaguchi
mutaguchi / 00_copilot_shell.ps1
Last active April 16, 2025 02:42
copilot_shell.ps1
<#
This script customizes the PowerShell command line to detect when an entered command is not found.
If a command is not recognized, it automatically rewrites the line to call an LLM assistant for help.
To run this script, execute it in the console as follows:
. .\copilot_shell.ps1
#>
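Not the gist's actual implementation: the sketch below shows one mechanism PowerShell offers for detecting an unresolved command, the CommandNotFoundAction hook. Invoke-LLMAssistant is a hypothetical placeholder for whatever forwards the request to the assistant.

$ExecutionContext.InvokeCommand.CommandNotFoundAction = {
    param($CommandName, $EventArgs)
    # Instead of raising "command not found", substitute a script block of our own.
    $EventArgs.CommandScriptBlock = {
        Write-Host "Command '$CommandName' not found; asking the assistant..."
        # Invoke-LLMAssistant -Query $CommandName   # hypothetical helper, not part of the gist
    }.GetNewClosure()
    $EventArgs.StopSearch = $true
}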
@mutaguchi
mutaguchi / 00_agentic_reasoning.py
Last active March 16, 2025 11:46
Agentic Reasoning
import aiohttp
import asyncio
import json
import mediawiki
import re
import datetime
"""
Sample implementation of an agent that has a reasoning LLM search Wikipedia before answering the user.
When a search command is generated inside the <think> tag, inference is interrupted at that point, function calling is performed to search Wikipedia, the search results are embedded into the <think> tag, and inference then resumes.
"""
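Not the gist's actual code: a minimal synchronous sketch of the control flow described above. generate_until() is a hypothetical callable that runs the reasoning LLM and returns the text generated up to one of the given stop strings; the Wikipedia lookup uses the mediawiki package the gist imports.

import re
from mediawiki import MediaWiki  # pymediawiki, as imported above

wiki = MediaWiki()

def answer(question, generate_until):
    # generate_until(prompt, stop) is assumed to stop just before emitting a stop string.
    prompt = f"<think>\nQuestion: {question}\n"
    while True:
        chunk = generate_until(prompt, stop=["</search>", "</think>"])
        prompt += chunk
        match = re.search(r"<search>([^<]*)$", chunk)
        if match is None:
            break  # no pending search command: reasoning is finished
        # Interrupt inference, run the search, and splice the results back
        # into the <think> block so the model can keep reasoning over them.
        query = match.group(1).strip()
        titles = wiki.search(query)[:3]
        snippets = "\n".join(wiki.page(t).summary[:300] for t in titles)
        prompt += f"</search>\n<result>\n{snippets}\n</result>\n"
    # Close the reasoning block and let the model produce the user-facing answer.
    return generate_until(prompt + "</think>\n", stop=[])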
@mutaguchi
mutaguchi / 00_LLM_translator.md
Last active May 28, 2024 23:30
LLM_translator.py

A script that translates input text using a llama.cpp server.

Usage

python LLM_Translator.py
[-h]
[-i input file path] [-o output file path] [-p prompt file path] [-d dictionary file path]
[-u llama.cpp server URL] [--quiet] [--no-outfile] [--include-input] [--min-context-count minimum number of turns to keep]
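For example, assuming the llama.cpp server is listening on its default port (the file names here are only illustrative):

python LLM_Translator.py -i source.txt -o translated_source.txt -p prompt.txt -u http://127.0.0.1:8080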
function Filter-Duplication
{
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline)]
        [object[]]
        $InputObject
    )
    # Assumed completion (the original body is not shown in the preview):
    # pass through only objects that have not been seen earlier in the pipeline.
    begin   { $seen = [System.Collections.Generic.HashSet[object]]::new() }
    process { foreach ($item in $InputObject) { if ($seen.Add($item)) { $item } } }
}
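With the assumed completion above, duplicates are removed while the original order is kept:

1, 2, 2, 3, 1 | Filter-Duplication   # 1 2 3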
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
models = {
"stablelm": "stabilityai/japanese-stablelm-base-gamma-7b",
"chatntq": "NTQAI/chatntq-ja-7b-v1.0",
"mistral": "mistralai/Mistral-7B-v0.1",
"starling": "Nexusflow/Starling-LM-7B-beta",
"antler": "Elizezen/Antler-7B",
}

How to avoid PSAMSIMethodInvocationLogging. In the previously introduced approach, a cmdlet was generated dynamically to invoke the target method through reflection. In this approach, a type is defined for the method invocation and a Hashtable is cast to that type; the type then invokes the method through reflection. In the example, with Windows Defender real-time protection enabled, an operation that previously took up to 35 seconds completes in about 1 second with this method.
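Not the gist's exact code, but a rough sketch of the idea under that description: a compiled helper type performs the reflective call, so PowerShell itself only casts a Hashtable and reads a property.

# Sketch only; the property names and the Result-getter design are assumptions.
Add-Type -TypeDefinition @"
using System;
public class ReflectionInvoker
{
    public object Target { get; set; }
    public string Method { get; set; }
    public object[] Arguments { get; set; }
    // The reflective invocation happens inside compiled code, not in PowerShell.
    public object Result
    {
        get
        {
            var types = Array.ConvertAll(Arguments ?? new object[0], a => a.GetType());
            return Target.GetType().GetMethod(Method, types).Invoke(Target, Arguments);
        }
    }
}
"@

# Casting the Hashtable to the type fills its properties; reading Result runs the call.
$b = 'a' * 1000
([ReflectionInvoker]@{ Target = 'x'; Method = 'Contains'; Arguments = @(, $b) }).Result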

from transformers import GPTJForCausalLM, AlbertTokenizer
import torch
model = 'AIBunCho/japanese-novel-gpt-j-6b'
tokenizer = AlbertTokenizer.from_pretrained(model, keep_accents=True, remove_space=False)
model = GPTJForCausalLM.from_pretrained(
model,
load_in_4bit = True,
torch_dtype = torch.bfloat16,
device_map = 'auto')

Mitigating the Impact of PSAMSIMethodInvocationLogging on PowerShell Performance: An Exploration of the Invoke-Method Cmdlet

Method execution in PowerShell 7.3 and later is slower because of PSAMSIMethodInvocationLogging. Just before a method is invoked, this feature calls AMSI's logging method with the method information and the arguments about to be used. It started as an experimental feature but has since been promoted to an official one.

The slowdown has been partially addressed, but with Windows Defender real-time protection enabled, method execution inside a loop is still extremely slow, especially when the arguments are large. PowerShell/PowerShell#19431
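# Repro for the issue above: one million Contains() calls with a 1,000-character argument.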

(Measure-Command{$a='x';$b='a'*1000;foreach($i in (1..1000000)){$y=$a.Contains($b)}}).TotalSeconds
@mutaguchi
mutaguchi / Test_All_Any.ps1
Created May 30, 2023 09:59
Test-All, Test-Any
function Test-All
{
    [CmdletBinding()]
    param(
        [scriptblock]
        $Predicate,
        [Parameter(ValueFromPipeline)]
        [PSObject]
        $InputObject
    )
    # Assumed completion (the preview stops at the param block):
    # return $true only if the predicate evaluates to $true for every pipeline item.
    begin   { $all = $true }
    process { if (-not ($InputObject | ForEach-Object $Predicate)) { $all = $false } }
    end     { $all }
}
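Example usage with the assumed completion above:

1, 2, 3 | Test-All -Predicate { $_ -gt 0 }    # True
1, -2, 3 | Test-All -Predicate { $_ -gt 0 }   # False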
# based on StableLM chat
# https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
import time
import numpy as np
from torch.nn import functional as F
import os