Qt Creator
You take your .cxx and .hxx files (or .c/.h, or .cpp/.hpp/.h) and compile them into .o files, a.k.a. object files. Then all the object files get linked together into an executable.
https://pedrotech.co/blog/google-docs-clone-tutorial/
Maps are insertion-ordered dicts in JavaScript: iterating a Map yields entries in the order they were inserted.
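A quick sketch of that ordering guarantee (the variable and key names are my own):

```typescript
// A Map preserves insertion order when iterating keys, values, or entries.
const m = new Map<string, number>();
m.set("banana", 2);
m.set("apple", 1);
m.set("cherry", 3);

// Keys come back in insertion order, not sorted order.
console.log([...m.keys()]); // ["banana", "apple", "cherry"]
```

Unlike plain objects (which have their own quirks around integer-like keys), a Map iterates strictly in insertion order and allows any value type as a key.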
Meetings have some inherent behaviors. If it is administrative, to run an organization or some other recurring program or auxiliary,
Quick AI analysis of this funny history:
Original Post by Benjie Holson on June 10, 2025
https://generalrobots.substack.com/p/a-brief-incomplete-and-mostly-wrong
Quick AI analysis of this funny history:
Original Post by James Iry on Thursday, May 7, 2009
https://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html
SourceTree and VSCode were both giving me issues: I couldn't browse my Git repo without them marking all my files as "changed".
This looks like it is caused by keeping my code in WSL under Ubuntu instead of on the Windows side. Maybe there is some other setting that is off, but for now I am pushing back towards open source awesomeness.
A fork of GitAhead, it is Qt-based and pretty easy to install with Flatpak.
As far as X11 support or a display server for WSL2 goes, Windows 11 ships with WSLg, which runs a Wayland compositor (plus XWayland for X11 apps) out of the box.
Well today I went down a rabbit hole of trying to install a faster version of Buzz on Windows.
buzz_whispercpp, a whisper.cpp-based build of Buzz, has a GitHub repo that looked pretty good.
https://github.com/richardburleigh/buzz_whispercpp
The repo only ships the Python source files, and to get the GUI you need to install everything with Poetry.
https://python-poetry.org/docs/#installing-with-the-official-installer
The few routes I've looked at recently are:
96 GB VRAM via 4x Tesla P40 graphics cards (24 GB each) in a server-grade motherboard + 500 GB of regular RAM. Comes out to about $4,000, especially once you add water cooling.
128 GB of unified RAM in a Mac M4 Ultra setup, $10,000. Slower at inference than the 96 GB of VRAM from older-generation graphics cards, and very expensive.
32 GB VRAM on a 5090 (or whatever is available). Pricing in the US is $3,000 or more. Fastest at inference, but not as much room to hold the larger models.
glhf.chat: run almost any available open-source model and pay per usage, roughly $0.01 to $0.10 per prompt/answer. I signed up during the beta, and they gave me $10 in credit after they started requiring payment. In the long run I don't love this option, because I want to run an agent, local coding, local RAG, or fine-tuning and pay only for electricity.
I took on a larger project recently. I am converting a Corian countertop/sink + island + desk in a kitchen area over to a hardwood live-edge countertop.
Laminate - a bunch of boards/planks glued side by side, in this case.
Butcher block - typically smaller pieces of wood glued together with the end grain facing up and down. In this build, the planks are the full thickness of the countertop and run long-grain down its length.