{
  type: "log",
  args: [
    {
      type: "string",
      value: "something interesting"
    }
  ],
  executionContextId: 1
}

hi, i'm daniel. i'm a 16-year-old high school senior. in my free time, i hack billion-dollar companies and build cool stuff.
about a month ago, a couple of friends and i found critical vulnerabilities in Mintlify, an AI documentation platform used by some of the top companies in the world.
i found a critical cross-site scripting vulnerability that, if abused, would let an attacker inject malicious scripts into the documentation of numerous companies and steal credentials from users who open a single link.
(go read my friends' writeups (after this one))
how to hack discord, vercel, and more with one easy trick (eva)
Redacted by Counsel: A supply chain postmortem (MDL)
tl;dr: If you just want to know the method, skip to the How to section
Clangd is a state-of-the-art C/C++ LSP that can be used with every popular text editor, like Neovim, Emacs or VS Code. Even CLion uses clangd under the hood. Unfortunately, clangd requires compile_commands.json to work, and the easiest way to painlessly generate it is to use CMake.
For simple projects you can try Bear - it captures compile commands and generates compile_commands.json - although I could never make it work in big projects with custom or complicated build systems.
But what if I told you that you can quickly hack your way around that and generate compile_commands.json for any project, no matter how complicated? I have used this method at work for years, originally because I used CLion, which supported only CMake projects - but now I use it successfully with clangd and Neovim.
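(For context - this is not the trick itself - the file clangd consumes is just Clang's JSON Compilation Database: an array with one entry per translation unit. A minimal hand-written example, with made-up paths:)

[
  {
    "directory": "/home/me/project",
    "command": "cc -Iinclude -c src/main.c -o build/main.o",
    "file": "src/main.c"
  }
]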
// stb_c_lexer.h 0.01 -- public domain Sean Barrett 2013
// lexer for making little C-like languages with recursive-descent parsers
//
// This file provides both the interface and the implementation.
// To instantiate the implementation,
//      #define STB_C_LEXER_IMPLEMENTATION
// in *ONE* source file, before #including this file.
//
// The default configuration is fairly close to a C lexer, although
// suffixes on integer constants are not handled (you can override this).
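(In other words, the single-header pattern described above boils down to this - the file name is made up, and every other translation unit simply includes the header without the #define:)

/* lexer_impl.c - the ONE source file that instantiates the implementation */
#define STB_C_LEXER_IMPLEMENTATION
#include "stb_c_lexer.h"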
this will deadlock the process if we run it in the same process as JS
###################################################################
Writing C software without the standard library
Linux Edition
###################################################################

There are many tutorials on the web that explain how to build a
simple hello world in C without the libc on AMD64, but most of them
stop there.
I will provide a more complete explanation that will allow you to
build yourself a little framework to write more complex programs.
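(Not from the article - just to set the scene, here is a minimal sketch of the kind of program such tutorials start from: a hello world on AMD64 Linux that talks to the kernel directly through the write and exit system calls. The build command is my assumption, something like: gcc -nostdlib -static -o hello hello.c)

/* hello.c - hello world without the libc */
static long syscall3(long n, long a, long b, long c)
{
    long ret;
    /* AMD64 Linux syscall ABI: number in rax, args in rdi, rsi, rdx;
       the syscall instruction clobbers rcx and r11 */
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a), "S"(b), "d"(c)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void)
{
    static const char msg[] = "Hello, world!\n";
    syscall3(1, 1, (long)msg, sizeof msg - 1);  /* write(1, msg, len) */
    syscall3(60, 0, 0, 0);                      /* exit(0)            */
}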
#!/usr/bin/env python3
import socket
import struct
import fcntl

# Ask /dev/vsock for this machine's vsock context ID (CID).
with open("/dev/vsock", "rb") as fd:
    # The ioctl needs a 4-byte buffer to hold the returned unsigned int.
    r = fcntl.ioctl(fd, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, b"\0" * 4)
    cid = struct.unpack("I", r)[0]
    print("Local CID: {}".format(cid))
This aims to be factual information about the size of large language models. None of this document was written by AI. I do not include any information from leaks or rumors. The focus of this document is on base models (the raw text continuation engines, not 'helpful chatbot/assistants'). This is a view, from a few years ago to today, of one very tiny fraction of the larger LLM story that is happening.
- GPT-2,-medium,-large,-xl (2019): 137M, 380M, 812M, 1.61B. Source: openai-community/gpt2. Trained on the unreleased WebText dataset, said to be 40GB of Internet text - I estimate that to be roughly 10B tokens. You can see a list of the websites that went into that dataset here: domains.txt.
- GPT-3 aka davinci, davinci-002 (2020): 175B parameters. There is a good breakdown of how those parameters are 'spent' here [How d
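(Rough sanity check, not from that breakdown, using the architecture reported in the GPT-3 paper - 96 layers, d_model = 12288, vocab ~ 50257: each transformer layer carries about 12 * d_model^2 weights (4 * d_model^2 in attention, 8 * d_model^2 in the MLP), so 12 * 96 * 12288^2 ~ 173.9B, plus roughly 0.6B of token and position embeddings, which lands close to the 175B headline figure.)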
# $ apt install rdiff
# $ rdiff --help
# Usage: rdiff [OPTIONS] signature [BASIS [SIGNATURE]]
#              [OPTIONS] delta SIGNATURE [NEWFILE [DELTA]]
#              [OPTIONS] patch BASIS [DELTA [NEWFILE]]
# Options:
#   -v, --verbose    Trace internal processing
#   -V, --version    Show program version
#   -?, --help       Show this help message
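# A typical round-trip with those three subcommands, following the usage
# synopsis above (file names are made up):
# $ rdiff signature old_file old_file.sig
# $ rdiff delta old_file.sig new_file new_vs_old.delta
# $ rdiff patch old_file new_vs_old.delta new_file.reconstructed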
import { Stats } from './lib/bench.js'
import { SQL } from "bun";
const pool_size = 4
/*
CREATE TABLE Test (
  id integer NOT NULL,
  PRIMARY KEY (id)
);