
@VictorTaelin
VictorTaelin / truly_optimal_evaluation_with_unordered_superpositions.md
Last active December 25, 2024 09:58
Truly Optimal Evaluation with Unordered Superpositions

Truly Optimal Evaluation with Unordered Superpositions

In this post, I'll address two subjects:

  1. How to solve HVM's quadratic slowdown compared to GHC in some cases

  2. Why that is relevant to logic programming, unification, and program search

Optimal Evaluators aren't Optimal

@VictorTaelin
VictorTaelin / hvm3_atomic_linker.md
Last active March 22, 2025 00:56
HVM3's Optimal Polarized Atomic Linker

HVM3's Optimal Atomic Linker (with Polarization)

Atomic linking is at the heart of HVM's implementation: it is what allows threads to collaborate towards massive parallelism. All major HVM versions started with a better atomic linker. From slow, buggy locks (HVM1), to AtomicCAS (HVM1.5), to AtomicSwap (HVM2), the algorithm became simpler and faster over the years.
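The essence of the trick is that a single atomic exchange both publishes a value and tells the writing thread what it found, so no locks are needed. Below is a minimal Haskell sketch of that "swap, then act on what came back" pattern; it is a toy illustration, not HVM3's actual linker, and the Term type, slot layout, and printed messages are all made up for the example.

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef')

-- A slot either holds an unfilled variable or a concrete node payload.
-- (Toy representation; a real runtime packs this into machine words.)
data Term = Var | Node String

-- Atomically swap a new value into a slot, returning whatever was there.
atomicSwap :: IORef Term -> Term -> IO Term
atomicSwap slot new = atomicModifyIORef' slot (\old -> (new, old))

-- One linking step: store `val` into `slot`. If the old content was a Var,
-- the substitution is complete; if another Node was already there, the two
-- values must go on to interact (here we merely report the collision).
linkStep :: IORef Term -> Term -> IO ()
linkStep slot val = do
  old <- atomicSwap slot val
  case old of
    Var    -> putStrLn "linked: slot was empty"
    Node n -> putStrLn ("collision: must interact with " ++ n)

main :: IO ()
main = do
  slot <- newIORef Var
  linkStep slot (Node "CON")  -- first link fills the empty slot
  linkStep slot (Node "DUP")  -- second link finds a node already there
```

In the real runtime the collision case is roughly where new interactions get queued; here it is only reported.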

In the initial HVM3 implementation, I noticed that one of the cases in the atomic linker never happened. After some reasoning, I now understand why, and

@VictorTaelin
VictorTaelin / materials.md
Last active February 15, 2025 18:15
materials

Company:

Theory:

@VictorTaelin
VictorTaelin / tt.hs
Created August 9, 2024 22:46
yet another...
import Control.Monad (forM_)            -- monadic iteration
import Data.Char (chr, ord)             -- Char <-> Int conversions
import Debug.Trace                      -- trace-based debugging
import Prelude hiding (LT, GT, EQ)      -- avoid clashing with Ordering's constructors
import System.Environment (getArgs)     -- command-line arguments
import System.Exit (exitFailure)        -- abort with a failure code
import Text.Parsec ((<|>))              -- parser choice combinator
import qualified Data.Map.Strict as M   -- strict maps
import qualified Text.Parsec as P       -- parser combinators
@VictorTaelin
VictorTaelin / dps_sup_nodes.md
Last active April 20, 2025 14:33
Accelerating Discrete Program Search with SUP Nodes

Fast Discrete Program Search 2

I am investigating how to use Bend (a parallel language) to accelerate Symbolic AI; in particular, Discrete Program Search. Basically, think of it as an alternative to LLMs, GPTs, and NNs that is also capable of generating code, but by entirely different means. This kind of approach has never been scaled with mass compute before - it wasn't possible! - but Bend changes this. So, my idea was to do it and see where it goes.

Now, while I was implementing some candidate algorithms on Bend, I realized that, rather than mass parallelism, I could use an entirely different mechanism to speed things up: SUP Nodes. This is a feature that Bend inherited from its underlying model ("Interaction Combinators") which, in simple terms, lets us combine multiple functions into a single superposed one and apply them all to an argument "at the same time". It allows us to call N functions at a fraction of the expected cost. Or, put differently: why parallelize when we can share?
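To give a rough intuition of what a superposition "is" (while stressing that this plain-Haskell toy captures only the branching semantics, not the structural sharing that makes real SUP Nodes cheap), one can model a superposed function as a list of alternatives applied to one shared argument; the Sup type and names below are invented for the example.

```haskell
-- Toy model: a "superposition" as a list of alternatives. Applying a
-- superposed function applies every alternative to the same argument,
-- which is evaluated once and shared lazily among them.
type Sup a = [a]

applySup :: Sup (a -> b) -> a -> Sup b
applySup fs x = map ($ x) fs

main :: IO ()
main = do
  let candidates = [(+ 1), (* 2), subtract 3] :: Sup (Int -> Int)
      shared     = sum [1 .. 1000 :: Int]  -- computed once, seen by all candidates
  print (applySup candidates shared)
```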

A

@VictorTaelin
VictorTaelin / hoc_historical_overview.md
Last active April 6, 2025 07:05
Higher Order Company: Complete Historical Overview - WIP

Higher-Order Company: Complete Historical Overview

This document is a complete historical overview of the Higher Order Company. If you want to learn anything about our background, a good way to do so is to feed this Gist into an AI (like Sonnet-3.5) and ask it any question!

My Search for a Perfect Language

It all started around 2015. I was an ambitious 21-year-old CS student who, somehow, had been programming for the last 10 years, and I had a clear goal:

I want to become the greatest programmer alive

@Zafnok
Zafnok / LEARN.md
Last active April 18, 2025 23:12
Understanding how HVM works

Understanding how HVM works

Introduction

HVM takes the ideas of Interaction Combinators and combines them with ideas from Type Systems, Functional Programming, and Compilers, turning Yves Lafont's work into a highly parallelized runtime.

Resources

Start with the HVM whitepaper and the HOW page of the repo; if that makes sense to you, great! You're probably too smart for everything that follows. Otherwise, read the next sections to develop the knowledge you're missing.

Intro to Lambda Calculus

Much of the Interaction Combinators literature is written for an audience presumed to be familiar with the lambda calculus, so it is good to have some background here before jumping into the Interaction Combinators section.

  • [Programming Languages](htt
@VictorTaelin
VictorTaelin / sat.md
Last active December 7, 2024 20:59
Simple SAT Solver via superpositions

Solving SAT via interaction net superpositions

I've recently been amazed, if not mind-blown, by how a very simple, "one-line" SAT solver on Interaction Nets can outperform brute force by orders of magnitude, by exploiting "superposed booleans" and optimal evaluation of λ-expressions. In this brief note, I'll provide some background for you to understand how this works, and then I'll present some simple code you can run on your own computer to observe and replicate this effect. Note this is a new observation, so I know little about how this algorithm behaves asymptotically, but I find it quite
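For reference, here is what the brute-force baseline looks like when "superposed booleans" are modeled naively as plain lists in Haskell. The 3-variable formula is a made-up example, and the list monad reproduces only the enumeration, not the sharing that optimal evaluation of superpositions provides.

```haskell
import Control.Monad (replicateM)

-- A hypothetical 3-variable formula, just to have something to solve:
-- (x1 OR x2) AND (NOT x2 OR x3) AND (NOT x1)
formula :: [Bool] -> Bool
formula [x1, x2, x3] = (x1 || x2) && (not x2 || x3) && not x1
formula _            = False

-- "Superposed booleans" modeled as the list of both possibilities per
-- variable; the list monad enumerates every combination.
solve :: Int -> [[Bool]]
solve n = [assignment | assignment <- replicateM n [False, True], formula assignment]

main :: IO ()
main = print (solve 3)  -- prints [[False,True,True]]
```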

@noghartt
noghartt / cc.md
Last active June 8, 2024 06:34
Resources to learn more about Computer Science and related stuff

Examples

[@import stdlib.http.v1];

(x => y => (x_plus_y: x + y) => );

((A, x))
(f: () -> (A: *, x: A)) =>
  (A, x) = (f ());