This is a gist on how to get StreamDiffusion running on a Mac (using PyTorch's MPS backend).
git clone https://github.com/cumulo-autumn/StreamDiffusion.git
cd StreamDiffusion
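With the repo cloned, the usual next step is to set up a Python environment and install the package. This is a plausible sketch rather than the gist's exact commands; the venv name is my own, and I'm assuming the CUDA-only TensorRT extras are skipped on a Mac:

python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
# the standard macOS torch wheels ship with MPS support
pip install torch torchvision
# install StreamDiffusion itself from the cloned repo
pip install -e .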
import base64
import functools
import operator
from typing import Annotated, Any, Dict, List, Sequence

from dto.chat import *
from dto.graph import *
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import BaseTool
XZ Backdoor symbol deobfuscation. Updated as I make progress.
This is a living document. Everything in it is written in good faith and believed to be accurate, but, as I said, we don't yet know everything about what's going on.
Update: I've disabled comments as of 2025-01-26 so that subscribers don't keep getting notifications a year on whenever someone suggests a correction. Folks are still free to email corrections, of course.
The default ChatGPT fonts, according to OpenAI's brand guidelines, are proprietary and require an appropriate font license depending on your use case.
They can be purchased here: https://klim.co.nz/buy/soehne/
The fonts in question are (9 total):
- Söhne (Buch Kursiv, Buch, Halbfett Kursiv, Halbfett, Kräftig Kursiv, Kräftig, Mono Buch Kursiv, Mono Buch, Mono Halbfett)
If you have purchased a license, you can use the commented-out @font-face
declarations in ./client/src/styles.css to include them in your project.
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            HStack {
                Spacer()
                // Power button pinned to the trailing edge
                Button { } label: {
                    Image(systemName: "power")
                        .resizable()
                        .aspectRatio(contentMode: .fill)
                        .padding()
                }
            }
        }
    }
}
/// <summary>
/// Extract Main CPU firmware from ICOM IC-R8600 firmware bundle (1.01-1.35 USA and non-USA versions)
/// non-USA versions:
/// https://www.icomjapan.com/support/firmware_driver/?product=IC-R8600(EUR)&frm_type=Firmware&old=true
/// USA versions:
/// https://www.icomjapan.com/support/firmware_driver/?product=IC-R8600&frm_type=Firmware&old=true
/// </summary>
/// <param name="bundle">Firmware bundle</param>
/// <returns>Unpacked data</returns>
static byte[] MainCpuFirmwareExtract(byte[] bundle)
// model: gpt-4-vision-preview
const input = 'can you help me land this skateboarding trick?'
const frames = [
  // Frames should be a list of image URLs or bytes
]
const messages = [
  {
    role: 'user',
    content: [
      { type: 'text', text: input },
      ...frames.map((url) => ({ type: 'image_url', image_url: { url } })),
    ],
  },
]
The problem with large language models is that you can't normally run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta's LLaMA on a single computer without a dedicated GPU.
There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights, sketched below.
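A minimal sketch of those steps, assuming the early-2023 llama.cpp tooling (the convert and quantize commands have since been renamed, and the 7B model path here is a placeholder):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# convert the downloaded PyTorch weights to ggml format
python3 convert-pth-to-ggml.py models/7B/ 1
# quantize to 4 bits so the model fits in laptop RAM
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
# run inference
./main -m ./models/7B/ggml-model-q4_0.bin -p "The first man on the moon was" -n 128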
#define _WIN32_WINNT 0x0502
#define WINVER 0x0502
#include <windows.h>
#include <errhandlingapi.h>
#include <process.h>
#include "beacon.h"

// Beacon Object Files carry no import table, so Win32 and CRT functions
// are declared with the MODULE$Function convention and resolved by the
// Beacon loader at run time:
WINBASEAPI PVOID WINAPI KERNEL32$AddVectoredExceptionHandler(ULONG First, PVECTORED_EXCEPTION_HANDLER Handler);
DECLSPEC_IMPORT uintptr_t __cdecl MSVCRT$_beginthreadex(void *_Security, unsigned _StackSize, _beginthreadex_proc_type _StartAddress, void *_ArgList, unsigned _InitFlag, unsigned *_ThrdAddr);
DECLSPEC_IMPORT void __cdecl MSVCRT$_endthreadex(unsigned _Retval);