# myabs(x::Float64) = sign(x) * x
import Base.:-
import Base.:+
# convert a 0-based index to Julia's 1-based index
from0basedIdxToJulia(idx::Int) = idx + 1
@anarazel
anarazel / latency.md
Last active February 12, 2025 14:00
SSD durable write latency

Collecting latency information for durable QD 1 writes, measured using

fio --directory /mnt/t2/fio/ \
  --runtime 3 --time_based \
  --output-format json \
  --overwrite 1 --size=8MB --buffered 0 --bs=4096 --rw=write \
  --name write-dsync --wait_for_previous --sync=dsync \
  --name write-fdatasync --wait_for_previous --fdatasync=1 \
  --name write-nondurable --wait_for_previous \
 | jq '.jobs[] | [.jobname, .write.iops]'
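
To turn those IOPS numbers into an approximate latency, a small Python filter can stand in for the jq step. It reads only the .jobs[].jobname and .write.iops fields that the jq expression above already uses, and relies on the fact that at queue depth 1 the mean write latency is roughly 1/IOPS (the script name is arbitrary):

# iops_to_latency.py: read fio's JSON output on stdin and print each job's
# IOPS plus the implied mean latency (valid at queue depth 1, where
# latency is roughly the inverse of IOPS).
import json
import sys

for job in json.load(sys.stdin)["jobs"]:
    name = job["jobname"]
    iops = job["write"]["iops"]
    lat_us = 1e6 / iops if iops else float("inf")
    print(f"{name}: {iops:.0f} IOPS ~ {lat_us:.1f} us per write")

Pipe the fio invocation above into it in place of the jq step.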

This article covers the mathematical details that come up when building a space simulation game with realistic physics. The focus is on physics of various kinds, not on graphics or networking, because those topics are well covered elsewhere.

Example code is given in C#, because the language is high-level, easy to understand, and fast enough to copy and paste.

How do you compute the thrust of multiple thrusters to fulfill the goal of accelerating in a given direction or increasing the rotational velocity?

This is a mathematical optimization problem, related to operations research. One good method is https://en.wikipedia.org/wiki/Linear_programming , more specifically the simplex algorithm.
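
As a minimal sketch of the idea (not the article's C# code): "produce a given net force with minimal total thrust" can be written as a linear program and handed to an off-the-shelf solver. The thruster directions, limits, and target force below are made-up example values, and scipy.optimize.linprog is used here in place of a hand-rolled simplex.

# Minimal sketch (assumed values throughout): allocate thrust across fixed
# thrusters so the net force matches a target, using scipy.optimize.linprog
# (the HiGHS backend, which includes a dual simplex implementation).
import numpy as np
from scipy.optimize import linprog

# Unit directions in which each thruster pushes the ship (body frame, 2D here).
directions = np.array([
    [ 1.0, 0.0],   # thruster 0: +x
    [-1.0, 0.0],   # thruster 1: -x
    [ 0.0, 1.0],   # thruster 2: +y
    [ 0.7, 0.7],   # thruster 3: diagonal
])
max_thrust = np.array([10.0, 10.0, 8.0, 5.0])   # per-thruster limits (N)
target_force = np.array([6.0, 4.0])             # desired net force (N)

# Variables: thrust magnitudes t_i. Objective: minimize total thrust (fuel).
# Constraints: sum_i t_i * d_i == target_force, 0 <= t_i <= max_thrust_i.
c = np.ones(len(max_thrust))
res = linprog(c,
              A_eq=directions.T, b_eq=target_force,
              bounds=[(0.0, m) for m in max_thrust],
              method="highs")

if res.success:
    print("thrust per thruster:", res.x)
else:
    print("target force is not reachable with these thrusters")

Rotational goals work the same way: add equality rows for the net torque each thruster produces about the center of mass.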

@patham9
patham9 / airis.py
Last active December 27, 2023 17:28
Berick Cook's AIRIS, replicated by Patrick Hammer from Cook's video and discussions with him.
from collections import deque
from copy import deepcopy
import sys
import time
import random
# THE WORLD
world = """
oooooooooooo
o o f o
@kevinkirkup
kevinkirkup / AI_ML_Reference.md
Last active April 24, 2024 16:26
AI/ML Reference
@patham9
patham9 / catbluesky.metta.rkt
Last active September 28, 2023 04:34
Simple NARS in MeTTa (with a selected NAL1-5 subset and exhaustive-until-depth multistep inference)
;; Stdlib extension
(= (max $1 $2) (if (> $1 $2) $1 $2))
(= (min $1 $2) (if (< $1 $2) $1 $2))
(= (TupleConcat $Ev1 $Ev2) (collapse (superpose ((superpose $Ev1) (superpose $Ev2)))))
;; Truth functions
(= (Truth_c2w $c) (/ $c (- 1 $c)))  ;; confidence -> evidential weight
(= (Truth_w2c $w) (/ $w (+ $w 1)))  ;; evidential weight -> confidence
(= (Truth_Deduction ($f1 $c1) ($f2 $c2)) ((* $f1 $f2) (* (* $f1 $f2) (* $c1 $c2))))  ;; f = f1*f2, c = f1*f2*c1*c2
(= (Truth_Abduction ($f1 $c1) ($f2 $c2)) ($f2 (Truth_w2c (* (* $f1 $c1) $c2))))  ;; f = f2, c from weight f1*c1*c2
@veekaybee
veekaybee / normcore-llm.md
Last active April 19, 2025 17:24
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

training data to finetune an LM

purpose of the LM:

  • input is typically a goal and the LM has to figure out commands on how to realize the goal (just like AutoGPT).

source of the data:

  • some of the data was generated with StarCoder with specific prompts
@pdtgct
pdtgct / convert_hf_llama_to_ggml.md
Created April 28, 2023 15:27
Convert HF to GGML

The LLaMA model weights may be converted from Hugging Face PyTorch format back to GGML in two steps:

  1. download from decapoda-research/llama-7b-hf and save as PyTorch .pth
  2. use the ggerganov/llama.cpp script convert-pth-to-ggml.py to convert from PyTorch .pth to GGML

This process results in a GGML model with float16 (fp16) precision.
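
A hedged Python sketch of the flow: the snapshot_download call is the real huggingface_hub API, while the re-export to .pth and the exact convert-pth-to-ggml.py invocation are deliberately left as comments, since they vary between llama.cpp versions.

# Sketch only: fetch the Hugging Face checkpoint; the .pth re-export and the
# exact convert-pth-to-ggml.py invocation are left as comments because they
# depend on the llama.cpp version in use.
from huggingface_hub import snapshot_download

# Step 1 (download part): returns the local path of the cached snapshot.
hf_dir = snapshot_download(repo_id="decapoda-research/llama-7b-hf")
print("Hugging Face weights downloaded to", hf_dir)

# Step 1 (re-export): save the weights as PyTorch .pth checkpoints, as above.
# Step 2: run ggerganov/llama.cpp's convert-pth-to-ggml.py on that directory;
# check the script's usage in the llama.cpp repository for the exact arguments.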

Prerequisite

@patham9
patham9 / GPTNARS.py
Last active April 12, 2023 18:21
GPTNARS
"""
* The MIT License
*
* Copyright 2023 The OpenNARS authors.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is