curl --request POST \
  --url https://api.openai.com/v1/chat/completions \
  --header 'Accept: */*' \
  --header 'Authorization: Bearer <API KEY>' \
  --header 'Content-Type: application/json' \
  --header 'User-Agent: httpyac' \
  --data '{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ]
}'
import openai from "openai";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import fs from "fs";
import path from "path";
import intros from "./intros.json";

const config = {
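Given these imports, the likely pattern is structured output: a zod schema is converted to JSON Schema, passed to the API as a `response_format`, and the raw reply is validated back through the same schema. Below is a minimal sketch of that pattern; the `Intro` schema, prompt, and field names are invented for illustration (the real shapes would live in `config` and `intros.json`):

```typescript
import OpenAI from "openai";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical schema for one generated intro.
const Intro = z.object({
  title: z.string(),
  paragraph: z.string(),
});

const client = new OpenAI(); // picks up OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write one concise intro paragraph." }],
  // Constrain the reply to the schema by converting zod -> JSON Schema.
  response_format: {
    type: "json_schema",
    json_schema: { name: "intro", schema: zodToJsonSchema(Intro) },
  },
});

// Validate the model's JSON through the same zod schema it was generated against.
const intro = Intro.parse(JSON.parse(completion.choices[0].message.content ?? "{}"));
console.log(intro.title);
```

Recent versions of the OpenAI SDK also ship a `zodResponseFormat` helper that wraps this conversion, but the manual `zod-to-json-schema` route keeps the schema reusable outside the API call.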
{
  "head": {
    "prompts": [
      {
        "raw": "Write one concise paragraph about the company that created you",
        "label": "{{subject}}",
        "provider": "meta-llama/llama-3.3-70b-instruct",
        "metrics": {
          "score": 5,
          "testPassCount": 5,
import torch
import numpy as np
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM
import gc

# Load the distilled checkpoint; swap in the commented line to inspect
# the base Qwen model instead.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kz919/QwQ-0.5B-Distilled-SFT")

fig = plt.figure(figsize=(15, 10))
I can help you create a darkroom timer using the M5Stack Cardputer and a relay unit. This will allow you to control an enlarger or other darkroom equipment. Here's a basic implementation:
#include <M5Cardputer.h>

// Pin definitions
const int RELAY_PIN = 38;     // Adjust this according to your relay connection
const int DEFAULT_TIME = 10;  // Default time in seconds

// Global variables
int remainingSeconds = DEFAULT_TIME;  // countdown value shown on screen
bool timerRunning = false;            // true while the relay is energized
This community has been a great part of my life for the past two years, so as 2024 comes to a close, I wanted to feed my nostalgia a bit. Let me take you back to the most notable things that happened here this year.
This isn't a log of model releases or research; rather, it's the things that were discussed and upvoted by the people here. So anything notable that's missing is, in its own way, also an indication of what was going on. I hope it'll also show the amount of progress and development that happened in just a single year, and make you even more excited for what's to come in 2025.
The year started with the excitement about Phi-2 (443 upvotes, by u/steph_pop). Phi-2 feels like ancient history these days; it's also fascinating that we end 2024 with Phi-4. Just one week later, people discovered that it apparently [was trained on the software engineer's diary](https://reddit.com/r/LocalLLaMA/comments/1
#!/bin/bash

# TASK=padbench
# TASK=bbh_256_slim
TASK=mmlu_256_slim

# Common
# h bench tasks ./scripts/bench/padbench.yaml
h bench tasks ./scripts/bench/$TASK.yaml
h config set bench.parallel 4
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Harbor Bench</title>
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
  <style>
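The page pulls Chart.js in from the jsDelivr CDN, so the truncated markup presumably ends in a canvas that the bench scores are drawn into. Here's a minimal sketch of that wiring; the `<canvas id="chart">` element and the hand-rolled `results` array are assumptions, with the real page presumably reading from the bench report JSON (the "head" / "prompts" / "metrics" structure shown earlier):

```typescript
// `Chart` is provided as a global by the chart.js <script> tag in <head>.
declare const Chart: any;

// Hypothetical results; the real page would derive these from the report JSON.
const results = [
  { label: "meta-llama/llama-3.3-70b-instruct", score: 5 },
  { label: "another-model", score: 3 },
];

new Chart(document.getElementById("chart") as HTMLCanvasElement, {
  type: "bar",
  data: {
    labels: results.map((r) => r.label),
    datasets: [{ label: "score", data: results.map((r) => r.score) }],
  },
  options: { scales: { y: { beginAtZero: true } } },
});
```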
- tags:
    - bbh
  question: >-
    Complete the rest of the sequence, making sure that the parentheses are
    closed properly. Input: { < { { [ ] } } { < [ { { < > } } [ ( ) ( ) ] [ [ [
    [ ( { < ( < ( [ ] ) > ) > } ) ] ] ] ] ] ( ) ( [ ] { } ) > } > [ { ( ( ) ) }
    ]
  criteria:
    correctness: 'The answer is }'
- tags: