Hưng Vũ (Hungsiro506)
Hungsiro506 / cursor-active-memory.md
Last active March 28, 2025 04:27
The memory bank workflow for Cursor

Cursor's Active Memory

I am Cursor, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Active Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.

Memory Bank Structure

The Memory Bank consists of required core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:

flowchart TD
Hungsiro506 / cursor-dev-rule-1.md
Last active March 27, 2025 23:47
rule-for-cursor-1

Fundamental Principles:

  • Always write clean, simple, and modular code with clear, consistent naming.
  • Prioritize simplicity in implementation, keep files under 200 lines, and focus on core functionality before any optimization.
  • Test thoroughly after every meaningful change.
  • Think through the problem and write 2–3 reasoning paragraphs before coding.
  • Use clear, easy-to-understand language and short sentences in both code and comments.
Hungsiro506 / iterative_workflow.md
Last active March 27, 2025 23:01
AI Coding Workflow Generator Prompt

AI Coding Workflow Generator

Project Concept

[ENTER A BRIEF CONCEPT - Just 1-3 sentences about what you want to build]

Technical Context (Optional)

[ANY TECHNICAL CONSTRAINTS OR PREFERENCES - Languages, frameworks, deployment targets, etc.]

Instructions

Given a field named itemType with the following mapping (a text field with a keyword sub-field):

"itemType" : {
  "type" : "text",
  "fields" : {
    "keyword" : {
      "type" : "keyword",
      "ignore_above" : 256
    }
  }
}
# Simplified MVCC bookkeeping: a global transaction-id counter plus the set of active transaction ids
next_xid = 1
active_xids = set()
records = []

def new_transaction():
    # Allocate the next transaction id, mark it active, and hand back a Transaction wrapper
    # (the Transaction class itself is not shown in this excerpt)
    global next_xid
    next_xid += 1
    active_xids.add(next_xid)
    return Transaction(next_xid)
#
# Merging a new kubectl config into the existing ~/.kube/config
#
export KUBECONFIG=~/.kube/config:new_config_file
# Use > rather than >> so a stale /tmp/kube_config is not appended to
kubectl config view --flatten > /tmp/kube_config
rm ~/.kube/config
cp /tmp/kube_config ~/.kube/config
// Purpose:
// - Ensures that a resource is deterministically disposed of once it goes out of scope
// - Use this pattern when working with resources that should be closed or managed after use
//
// The benefit of this pattern is that it frees the developer from the responsibility of
// explicitly managing resources
import scala.io.Source
import java.io._
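
The gist preview stops at the imports, so below is a minimal sketch of what such a loan-pattern helper usually looks like. The Loan object, its withResource method, and the file names are illustrative assumptions, not the gist's actual code; Scala 2.13 also ships an equivalent helper as scala.util.Using.

import java.io._

// The helper "loans" the resource to the caller's function and guarantees close()
// runs when the function returns or throws, so disposal is deterministic.
object Loan {
  def withResource[R <: AutoCloseable, T](resource: R)(body: R => T): T =
    try body(resource) finally resource.close()
}

// Illustrative usage: copy the first line of one file into another,
// letting the helper close both streams.
object LoanExample extends App {
  Loan.withResource(new BufferedReader(new FileReader("input.txt"))) { in =>
    Loan.withResource(new PrintWriter(new FileWriter("output.txt"))) { out =>
      out.println(in.readLine())
    }
  }
}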
scala> val df = spark.sqlContext.read.csv("/data/dns/cached_ip/*")
df: org.apache.spark.sql.DataFrame = [_c0: string]
scala> val cached = df
cached: org.apache.spark.sql.DataFrame = [_c0: string]
scala> val npic = spark.sqlContext.read.csv("/data/dns/npic_dns/*")
npic: org.apache.spark.sql.DataFrame = [_c0: string]
scala> val allo = spark.sqlContext.read.csv("/user/hungvd8/internet_user_profile_duration/Allocated-IPs2017-11-21.csv/*")
scala> val dns = spark.sqlContext.read.parquet("/data/dns/dns-extracted-two-hours/2017-11-22-02/out/")
dns: org.apache.spark.sql.DataFrame = [value: string]
scala> val splited = dns.withColumn("temp",split(col("value"),"\\t"))
splited: org.apache.spark.sql.DataFrame = [value: string, temp: array<string>]
scala> val df = splited.select((0 until 25).map(i => col("temp").getItem(i).as(s"col$i")): _*)
df: org.apache.spark.sql.DataFrame = [col0: string, col1: string ... 23 more fields]
scala> val npic = df.where("col24 = '-1'").select("col2")
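
Pulling the interactive session together, here is a sketch of the same DNS-splitting pipeline as a standalone job. The app name, the output path, and the explicit import of org.apache.spark.sql.functions (needed for split and col, and implied but not shown in the shell transcript) are assumptions on top of what the session demonstrates.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, split}

object DnsSplitJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dns-split").getOrCreate()

    // Read the raw DNS extract as a single-column DataFrame (value: string)
    val dns = spark.read.parquet("/data/dns/dns-extracted-two-hours/2017-11-22-02/out/")

    // Split each tab-delimited line and promote the first 25 tokens to columns col0..col24
    val columns = dns
      .withColumn("temp", split(col("value"), "\\t"))
      .select((0 until 25).map(i => col("temp").getItem(i).as(s"col$i")): _*)

    // Keep only rows where col24 = '-1' and project col2, as in the session above
    val npic = columns.where(col("col24") === "-1").select("col2")
    npic.write.csv("/data/dns/npic_out")   // output path is illustrative

    spark.stop()
  }
}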
date -d '1 hour ago' '+%Y-%m-%d'
Result: 2017-11-13

date -d '1 hour ago' '+%Y-%m-%d %H'
Result: 2017-11-13 22

date -d '1 hour ago' '+%H'
Result: 22

date '+%H'
Result: 23