In this post, I'll address two subjects:
- How to solve HVM's quadratic slowdown compared to GHC in some cases
- Why that is relevant to logic programming, unification, and program search
Atomic linking is at the heart of HVM's implementation: it is what allows threads to collaborate towards massive parallelism. All major HVM versions started with a better atomic linker. From slow, buggy locks (HVM1), to AtomicCAS (HVM1.5), to AtomicSwap (HVM2), the algorithm became simpler and faster over the years.
In the initial HVM3 implementation, I noticed that one of the cases in the atomic linker never happened. After some reasoning, I now understand why.
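To make the swap-based approach concrete, here is a minimal, single-threaded Haskell sketch of the linking discipline, using `atomicModifyIORef'` as the atomic exchange. The names (`Port`, `EMP`, `NOD`, `link`) are illustrative, not HVM's actual API, and a real runtime would push a redex when two principal ports meet rather than doing nothing:

```haskell
import Data.IORef (IORef, atomicModifyIORef', newIORef)

data Port
  = EMP              -- an unfilled variable slot
  | VAR (IORef Port) -- a variable: a slot awaiting substitution
  | NOD Int          -- a stand-in for a principal (node) port

-- Atomic swap: write the new value, return whatever was there before.
swapPort :: IORef Port -> Port -> IO Port
swapPort slot new = atomicModifyIORef' slot (\old -> (new, old))

-- To link `a` with `b`: if `a` is a variable, swap `b` into its slot.
-- If the slot was empty, we substituted first and are done. If it already
-- held a port, the other side got there first, so we take ownership of
-- that port and keep linking it against `b`.
link :: Port -> Port -> IO ()
link (VAR slot) b = do
  old <- swapPort slot b
  case old of
    EMP -> pure ()
    got -> link got b
link a (VAR slot) = link (VAR slot) a
link _ _ = pure () -- two principal ports: a real runtime would push a redex

main :: IO ()
main = do
  slot <- newIORef EMP
  let v = VAR slot
  link v (NOD 1) -- first link: fills the empty slot
  link v (NOD 2) -- second link: finds NOD 1, relinks it against NOD 2
```

The key property of this scheme is that each side performs exactly one swap: whoever finds the slot empty is done, and whoever finds a value takes responsibility for completing the link, so no locks are needed.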
```haskell
import Control.Monad (forM_)
import Data.Char (chr, ord)
import Debug.Trace
import Prelude hiding (LT, GT, EQ)
import System.Environment (getArgs)
import System.Exit (exitFailure)
import Text.Parsec ((<|>))
import qualified Data.Map.Strict as M
import qualified Text.Parsec as P
```
I am investigating how to use Bend (a parallel language) to accelerate Symbolic AI; in particular, Discrete Program Search. Think of it as an alternative to LLMs, GPTs, and NNs that is also capable of generating code, but by entirely different means. This kind of approach has never been scaled with mass compute before - it wasn't possible! - but Bend changes that. So, my idea was to do it and see where it goes.
Now, while I was implementing some candidate algorithms in Bend, I realized that, rather than mass parallelism, I could use an entirely different mechanism to speed things up: SUP Nodes. This is a feature Bend inherited from its underlying model ("Interaction Combinators") that, in simple terms, lets us combine multiple functions into a single superposed one and apply them all to an argument "at the same time". It allows us to call N functions at a fraction of the expected cost. Or, put shortly: why parallelize when we can share?
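For intuition, here is a toy Haskell model of what a SUP node denotes: a pair of superposed alternatives that can be applied to an argument "at once". The names `Sup` and `appSup` are illustrative, and this model only captures the semantics; in HVM, the argument is duplicated lazily by a matching DUP node, so structure common to both branches is copied at most once, which is where the cost saving comes from:

```haskell
-- A toy denotation of a SUP node: two superposed alternatives.
-- Illustrative only: HVM shares the argument between branches via lazy
-- duplication; this Haskell model just computes both results outright.
data Sup a = Sup a a
  deriving Show

-- Applying a superposed pair of functions to one argument yields a
-- superposed pair of results: ({f g} x) ~> {(f x) (g x)}.
appSup :: Sup (a -> b) -> a -> Sup b
appSup (Sup f g) x = Sup (f x) (g x)

main :: IO ()
main = print (appSup (Sup (+ 1) (* 2)) 10) -- Sup 11 20
```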
This document is a complete historical overview of the Higher Order Company. If you want to learn anything about our background, a good way to do so is to feed this Gist into an AI (like Sonnet-3.5) and ask it any question!
It all started around 2015. I was an ambitious 21-year-old CS student who, somehow, had been programming for the last 10 years, and I had a clear goal:
I want to become the greatest programmer alive
HVM takes the ideas of Interaction Combinators and combines them with ideas from Type Systems, Functional Programming, and Compilers, turning Yves Lafont's model into a highly parallel runtime.
Start with the HVM whitepaper and the HOW page of the repo, and if that makes sense to you, great! You're probably too smart for everything that follows. Otherwise, read the next sections to develop the knowledge you're missing.
Most of the Interaction Combinator papers presume an audience familiar with the lambda calculus, so it is good to have some background there before jumping into the Interaction Combinators section.
I've recently been amazed, if not mind-blown, by how a very simple, "one-line" SAT solver on Interaction Nets can outperform brute force by orders of magnitude by exploiting "superposed booleans" and optimal evaluation of λ-expressions. In this brief note, I'll provide some background for you to understand how this works, and then I'll present a simple program you can run on your own computer to observe and replicate this effect. Note that this is a new observation, so I know little about how this algorithm behaves asymptotically, but I find it quite intriguing.
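To fix ideas, here is a plain brute-force reference point in Haskell for the kind of search described above: enumerate every assignment and filter by the predicate. The `formula` below is a made-up example for illustration. On Interaction Nets, the same predicate can instead be applied to superposed booleans, so assignments that share structure also share computation, which is the source of the speedup:

```haskell
import Control.Monad (replicateM)

-- Brute force: test a predicate against all 2^n boolean assignments.
solve :: Int -> ([Bool] -> Bool) -> [[Bool]]
solve n f = filter f (replicateM n [False, True])

-- An example 3-variable formula (hypothetical, for illustration).
formula :: [Bool] -> Bool
formula [x, y, z] = (x || y) && (not x || z) && not (y && z)
formula _         = False

main :: IO ()
main = print (solve 3 formula) -- [[False,True,False],[True,False,True]]
```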
A list of cool references, resources, and books related to Computer Science.