import re
from typing import Any, Text, Dict, List
bn_num_string = [
'เฆถเง‚เฆจเงเฆฏ', 'เฆเฆ•', 'เฆฆเงเฆ‡', 'เฆคเฆฟเฆจ', 'เฆšเฆพเฆฐ', 'เฆชเฆพเฆเฆš', 'เฆ›เฆฏเฆผ', 'เฆธเฆพเฆค', 'เฆ†เฆŸ', 'เฆจเฆฏเฆผ', 'เฆฆเฆถ', 'เฆเฆ—เฆพเฆฐเง‹', 'เฆฌเฆพเฆฐเง‹', \
'เฆคเง‡เฆฐเง‹', 'เฆšเงŒเฆฆเงเฆฆ', 'เฆชเฆจเง‡เฆฐเง‹', 'เฆทเง‹เฆฒเง‹', 'เฆธเฆคเง‡เฆฐเง‹', 'เฆ†เฆ เฆพเฆฐเง‹', 'เฆ‰เฆจเฆฟเฆถ','เฆฌเฆฟเฆถ','เฆเฆ•เงเฆถ', 'เฆฌเฆพเฆ‡เฆถ', 'เฆคเง‡เฆ‡เฆถ', 'เฆšเฆฌเงเฆฌเฆฟเฆถ', 'เฆชเฆเฆšเฆฟเฆถ', 'เฆ›เฆพเฆฌเงเฆฌเฆฟเฆถ', \
'เฆธเฆพเฆคเฆพเฆถ', 'เฆ†เฆ เฆพเฆถ', 'เฆ‰เฆจเง‹เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆเฆ•เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆฌเฆคเฆฟเฆฐเฆฟเฆถ', 'เฆคเง‡เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆšเงŒเฆคเฆฟเฆฐเฆฟเฆถ', 'เฆชเฆเฆฏเฆผเฆคเฆฟเฆฐเฆฟเฆถ', 'เฆ›เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆธเฆพเฆเฆ‡เฆคเฆฟเฆฐเฆฟเฆถ', 'เฆ†เฆŸเฆคเฆฟเฆฐเฆฟเฆถ', 'เฆ‰เฆจเง‹เฆšเฆฒเงเฆฒเฆฟเฆถ',\
'เฆšเฆฒเงเฆฒเฆฟเฆถ','เฆเฆ•เฆšเฆฒเงเฆฒเฆฟเฆถ', 'เฆฌเฆฟเฆฏเฆผเฆพเฆฒเงเฆฒเฆฟเฆถ', 'เฆคเง‡เฆคเฆพเฆฒเงเฆฒเฆฟเฆถ', 'เฆšเงเฆฏเฆผเฆพเฆฒเงเฆฒเฆฟเฆถ', 'เฆชเฆเฆฏเฆผเฆคเฆพเฆฒเงเฆฒเฆฟเฆถ', 'เฆ›เง‡เฆšเฆฒเงเฆฒเฆฟเฆถ', 'เฆธเฆพเฆคเฆšเฆฒเงเฆฒเฆฟเฆถ', 'เฆ†เฆŸเฆšเฆฒเงเฆฒเฆฟเฆถ', 'เฆ‰เฆจเง‹เฆชเฆžเงเฆšเฆพเฆถ', 'เฆชเฆžเงเฆšเฆพเฆถ', 'เฆเฆ•เฆพเฆจเงเฆจ',\
'เฆฌเฆพเฆนเฆพเฆจเงเฆจ', 'เฆคเฆฟเฆชเงเฆชเฆพเฆจเงเฆจ', 'เฆšเงเฆฏเฆผเฆพเฆจเงเฆจ', 'เฆชเฆžเงเฆšเฆพเฆจเงเฆจ', 'เฆ›เฆพเฆชเงเฆชเฆพเฆจเงเฆจ', 'เฆธเฆพเฆคเฆพเฆจเงเฆจ', 'เฆ†เฆŸเฆพเฆจเงเฆจ', 'เฆ‰เฆจเง‹เฆทเฆพเฆŸ','เฆทเฆพเฆŸ','เฆเฆ•เฆทเฆŸเงเฆŸเฆฟ', 'เฆฌเฆพเฆทเฆŸเงเฆŸเฆฟ', 'เฆคเง‡เฆทเฆŸเงเฆŸเฆฟ', 'เฆšเงŒเฆทเฆŸเงเฆŸเฆฟ', 'เฆชเฆเฆฏเฆผเฆทเฆŸเงเฆŸเฆฟ', \

---- resume-to-JSON prompt ----

parser_prompt = f"""You are an intelligent resume parser for the HR department. Read the raw CV text delimited by <<< and >>>, then return a single valid JSON object: no markdown, no commentary.

<<< {text} >>>

Schema to follow exactly:
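Even with "no markdown" in the prompt, models sometimes wrap the reply in a code fence, so it is worth parsing defensively. A small helper sketch; the function name and the fence-stripping behaviour are my assumptions, not part of the gist:

```python
import json

def parse_model_reply(reply: str) -> dict:
    """Parse the model's reply as JSON, tolerating a stray ```json fence."""
    cleaned = reply.strip()
    if cleaned.startswith("```"):
        # Drop the fence markers and keep the outermost {...} span.
        cleaned = cleaned.strip("`")
        cleaned = cleaned[cleaned.find("{"):cleaned.rfind("}") + 1]
    return json.loads(cleaned)
```

`json.loads` then raises `JSONDecodeError` on anything non-conforming, which is a convenient single failure point to retry the model call from.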

ehzawad / lecun.py
Created June 14, 2025 19:33
LeCun neural network implementation
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import random_split, DataLoader
from multiprocessing import freeze_support
def main():
    ...  # body truncated in the source gist
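The gist cuts off at `main()`, but the shape bookkeeping behind a LeNet-style network can be checked by hand with the standard convolution/pooling output-size formula. A torch-free sketch; the layer sizes are the classic LeNet-5 ones, an assumption about what the script builds:

```python
def conv2d_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Per-dimension output size: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Classic LeNet-5 trace on a 32x32 input:
s = conv2d_out(32, 5)       # conv1, 5x5 kernel      -> 28
s = conv2d_out(s, 2, 2)     # pool, 2x2 stride 2     -> 14
s = conv2d_out(s, 5)        # conv2, 5x5 kernel      -> 10
s = conv2d_out(s, 2, 2)     # pool, 2x2 stride 2     -> 5
```

The final 5x5 map times 16 channels gives the 400 features that feed LeNet's first fully connected layer.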
#!/bin/bash
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
BOLD='\033[1m'
NC='\033[0m'
get_size() {
    # Human-readable size of a path; body reconstructed (likely intent), the
    # original gist is truncated at the opening brace.
    du -sh "$1" 2>/dev/null | cut -f1
}

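The color variables above are typically consumed with `printf '%b'` so the escape sequences expand. A minimal usage sketch; the message text is illustrative, not from the script:

```shell
GREEN='\033[0;32m'
NC='\033[0m'

# %b expands the backslash escapes; plain %s would print them literally.
msg="$(printf '%b' "${GREEN}OK${NC}")"
printf '%s\n' "$msg"
```

`NC` (no color) resets the terminal after each colored span, which is why every colored message should end with it.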
/Users/ehza/github-repos/codex/codex-cli/src % codex
โ— OpenAI Codex (research preview) v0.1.2505161800
localhost session: 418d11041cca40b98b5a1af28f8ac734
  โ†ณ workdir: ~/github-repos/codex/codex-cli/src
  โ†ณ model: codex-mini-latest
  โ†ณ provider: openai

ehzawad / php_to_node.md
Created April 28, 2025 06:22
php to node

Understood. I will dig into GitHub repositories specifically, focusing on open-source tools, frameworks, and agentic workflows that help migrate, refactor, or transpile full PHP codebases into Node.js (JavaScript/TypeScript). I will prioritize actively maintained projects, note their adaptability, and include any AST parsers, LLM-driven agent frameworks, or hybrid systems that are directly relevant. I'll synthesize the findings into a detailed, no-nonsense report with practical guidance on applicability and readiness levels.

Open-Source Tools and Frameworks for PHPโ†’Node.js Migration

Migrating a large PHP codebase to Node.js is a complex task. No turnkey solution exists, but several open-source projects and frameworks can help automate pieces of the process. We categorize these into AST parsers and analyzers, code transpilers and generators, LLM-assisted migration tools, and agentic framework kits. For each, we note capabilities, limitations, and adaptation effort.

#!/usr/bin/env python3
import asyncio
import aiohttp
import json
import time
import argparse
import os
import sys
import logging
from typing import Dict, Any, Optional, List, Text
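With those imports, the usual pattern is to fan requests out concurrently with `asyncio.gather`. A sketch with the `aiohttp` call stubbed out so it runs standalone; the function names are mine, since the real script's structure is not shown:

```python
import asyncio

async def fetch_all(urls, fetch_one):
    """Run one coroutine per URL concurrently; results come back in input order."""
    return await asyncio.gather(*(fetch_one(u) for u in urls))

async def _demo_fetch(url):
    await asyncio.sleep(0)  # stand-in for an aiohttp GET in the real script
    return f"ok:{url}"

results = asyncio.run(fetch_all(["a", "b"], _demo_fetch))
```

In the real script `fetch_one` would open a shared `aiohttp.ClientSession` and perform the GET; keeping it as a parameter makes the fan-out logic testable without a network.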
#!/usr/bin/env python3
"""
Bengali Emotion Dataset Creator
This script creates a Hugging Face dataset from Bengali WAV files and their transcriptions.
It processes audio files, maps them with transcripts from a CSV file, and uploads
the dataset to the Hugging Face Hub.
Requirements:
- pandas
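The mapping step the docstring describes (pair each WAV file with its transcript from the CSV) can be sketched with the standard library alone. The column names 'file' and 'transcript' are assumptions; the real script presumably uses pandas and its own schema:

```python
import csv
import pathlib

def map_transcripts(csv_path, audio_dir):
    """Pair each *.wav in audio_dir with its transcript from the CSV.

    Assumed CSV columns: 'file' (bare filename) and 'transcript'.
    Files with no CSV row are paired with None.
    """
    rows = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows[row["file"]] = row["transcript"]
    return [(p, rows.get(p.name))
            for p in sorted(pathlib.Path(audio_dir).glob("*.wav"))]
```

Returning `None` for unmatched files (rather than raising) lets the caller log and skip gaps before building the Hugging Face dataset.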
-- [email protected]; email me to say hi or if there are any questions
vim.g.mapleader = ' '
vim.g.maplocalleader = ' '
-- Install package manager
local lazypath = vim.fn.stdpath 'data' .. '/lazy/lazy.nvim'
if not vim.loop.fs_stat(lazypath) then
  vim.fn.system {
    'git', 'clone',
    '--filter=blob:none',
    'https://github.com/folke/lazy.nvim.git',
    '--branch=stable',
    lazypath,
  }
end
vim.opt.rtp:prepend(lazypath)