PLT textbooks often use Greek and Latin letters to denote grammatical notions such as terminals, non-terminals, and other such concepts. For a beginner (like me!) this can be rather overwhelming, so I created this cheatsheet as a reference for curious people and learners alike.
package lexer

import (
	"fmt"
	"patina-lang/internal/token"
	"strings"
	"unicode/utf8"
)

const EOF rune = -1
(* Core Structure *)
HelpFile = FirstLine, { Section | FreeContent }, [ Modeline ] ;
FirstLine = TagDef, Tab, Description, Newline ;
Section = Heading, SeparatorLine, { Block } ;
FreeContent = { Block } ;

(* Headings and Separators *)
Heading = TextLine ;
SeparatorLine = "=", "=", "=", { "=" }, Newline ; (* Min 3 '=' *)
ColumnHeading = Text, " ~", Newline ;
#define PHI 0x585f14dULL

static inline uint32_t
_skiena_hash32 (const uint8_t *msg, size_t msg_len)
{
  /* UCHAR_MAX + 1 entries so every byte value 0..255 can index the table */
  static uint16_t rand_map[UCHAR_MAX + 1] = {
    0x144, 0x1ab, 0x13c, 0x028, 0x1c3, 0x107, 0x193, 0x174, 0x00c, 0x160,
    0x142, 0x0fe, 0x01f, 0x1b0, 0x198, 0x160, 0x10f, 0x185, 0x015, 0x051,
    0x057, 0x138, 0x17e, 0x199, 0x1f5, 0x01e, 0x1f2, 0x01f, 0x174, 0x0b7,
    0x085, 0x0a5, 0x200, 0x14d, 0x188, 0x168, 0x1e4, 0x1ef, 0x15c, 0x14e,
" Superscript | |
iabbrev <buffer> ^0 ⁰ | |
iabbrev <buffer> ^1 ¹ | |
iabbrev <buffer> ^2 ² | |
iabbrev <buffer> ^3 ³ | |
iabbrev <buffer> ^4 ⁴ | |
iabbrev <buffer> ^5 ⁵ | |
iabbrev <buffer> ^6 ⁶ | |
iabbrev <buffer> ^7 ⁷ | |
iabbrev <buffer> ^8 ⁸ |
clemore_virtdev.c is the Linux virtual keyboard driver for my key remapper, Clemore. I compile it with this Makefile:

obj-m += clemore_virtdev.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
I made this function to be able to render manpages with bat -lman --pager=most, but compiled with groff instead of mandoc(1).

It's very easy to prepare mandoc(1)-compiled manpages for bat(1): all you have to do is filter them through col -xb. But groff emits ESC[...-style sequences (aka ANSI escapes), so it's a bit harder. My Fish function deansify takes a file either via STDIN or --file/-f and de-ANSI-fies it to STDOUT. Pretty simple. Now you can do:

zcat (man -w <manpage>) | groff -man -Tascii | deansify | bat -lman --pager=most

(I highly recommend using most(1) as your default pager!)
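The post doesn't show deansify itself, but the core of such a filter can be sketched as a single GNU sed substitution (a hypothetical sketch; the real function may handle more escape forms and the --file/-f option):

```shell
# Hypothetical de-ANSI-fier core: strip CSI color/style sequences.
# \x1b is the ESC byte (GNU sed escape syntax).
printf 'plain \033[1mbold\033[0m text\n' | sed -E 's/\x1b\[[0-9;]*m//g'
```

This prints `plain bold text`, with the bold on/off sequences removed.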
Progress:
- Added a regex parse function.
This project is [currently] called Provisio. It's a table-driven LL(1) parser generator, targeting C, written in Perl. It also has a built-in lexer generator; that's what I'm focusing on first.

I'm a lil bit burned out because I've been working on it non-stop for several weeks. The DFA/NFA facilities have been fully written. There's even a DFA minimizer!

I don't wanna use Thompson's construction for the regexes. It's so 1969! The regex syntax is mostly ERE-compliant: it lacks the collation classes (like [[=ll=]]), but it has character classes like [[:alpha:]]. It also has trailing context (foo/bar) and flags.
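For concreteness, the POSIX character classes mentioned above behave like this in any ERE engine (shown here with grep -E as an illustration, not Provisio's own syntax):

```shell
# [[:alpha:]] is a POSIX character class matching letters only,
# so the trailing digits are excluded from the match below.
printf 'abc123\n' | grep -E -o '[[:alpha:]]+'
```

This prints `abc`.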
Hey. I have three aliases in Fish for listing Nerd Fonts; the list is the result of lsnerdfonts-nofilter. The aliases are:

# I realize this ain't 'good' by any means
alias lsnerdfonts-nofilter="fc-list | grep NerdFont | cut -d'/' -f6 | cut -d':' -f1 | cut -d'-' -f1"
alias lsnerdfonts-all="fc-list | grep NerdFont | sed -E 's/NerdFont(Mono|Propo)//' | uniq | cut -d'/' -f6 | cut -d':' -f1 | cut -d'-' -f1"
alias lsnerdfonts-propo="fc-list | grep NerdFontPropo | sed -E 's/NerdFontPropo//' | uniq | cut -d'/' -f6 | cut -d':' -f1 | cut -d'-' -f1"
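To show what that cut chain actually extracts, here it is run on a single fc-list-style line (sample input, since real output depends on the fonts installed and where fc-list finds them):

```shell
# A sample fc-list line piped through the same filter chain as the aliases:
# keep the 6th '/'-separated field (the filename), then drop everything
# after the first ':' and the '-Style.ttf' suffix.
printf '%s\n' '/usr/share/fonts/TTF/JetBrainsMonoNerdFont-Regular.ttf: JetBrainsMono Nerd Font:style=Regular' \
  | cut -d'/' -f6 | cut -d':' -f1 | cut -d'-' -f1
```

This prints `JetBrainsMonoNerdFont`. Note the -f6 assumes fonts live five directories deep, which is why the alias admits it ain't 'good'.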
def lcs_diff(seq1, seq2):
    m, n = len(seq1), len(seq2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    # Build the DP table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seq1[i - 1] == seq2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Standard LCS recurrence: take the better of skipping
                # one element from either sequence
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp