Phuc Phung (Foxhound401)

Problem Definition

Many services run on an EKS node group of the spot type, and a large number of pods can end up scheduled onto the same few spot instances. There is a risk that AWS reclaims those instances, and when that happens the following scenario is likely:

Pods residing on the reclaimed instances don't have enough time to react to the instance termination, causing hiccups, latency, or minor downtime until they are rescheduled onto other nodes.

A couple of measures have been applied to counter this:

From the Kubernetes templates standpoint:
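The list of measures is cut off here. As an illustration only (this is an assumption, not the gist's actual approach), one common template-level measure is to spread a deployment's replicas across nodes so that losing one spot instance takes out at most a few pods, and to give pods time to react to the termination notice. All names, labels, and values below are hypothetical:

```yaml
# Sketch: spread replicas across nodes and allow time for graceful shutdown.
# Every name/label/value here is illustrative, not from the original gist.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      # Prefer not to stack all replicas on the same node
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-service
      # Give pods time to react to the spot termination notice
      terminationGracePeriodSeconds: 60
      containers:
        - name: app
          image: my-service:latest   # hypothetical
```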

Steps to migrate Postgres to a new version

I'm trying to migrate Postgres from version 11 to version 14. First, have a look at this documentation.

Logical Replication and Migration Strategies

Logical replication continuously streams replicated changes row by row, including any writes made to the database during the migration, until the replication is stopped.

Back up your data
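The backup step can be done with pg_dump; a minimal sketch, in which the host, user, and database names are placeholders (the commands are wrapped in a function here so the sketch stands alone):

```shell
# Sketch: back up the v11 cluster before migrating to v14.
# Host, user, and database names below are placeholders.
backup_v11() {
  # Custom-format archive that pg_restore can load into the v14 server
  pg_dump --host=old-db-host --username=postgres \
          --format=custom --file=mydb_v11.dump mydb
  # Roles and tablespaces are global objects, not covered by pg_dump
  pg_dumpall --host=old-db-host --username=postgres \
             --globals-only > globals_v11.sql
}
# Run on a machine that can reach the old primary:
# backup_v11
```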

Docker failed to start after update/upgrade to a newer version

Either I did a fresh installation of Docker or updated the current version to a newer one.

Then I started the service: sudo systemctl start docker.service

And got the error:
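The error text itself did not survive in this preview. The usual way to see why the unit failed is to ask systemd and the journal directly; a sketch (wrapped in a function so it stands alone, to be run on the affected host):

```shell
# Sketch: inspect why the docker unit failed to start (run on the affected host).
diagnose_docker() {
  systemctl status docker.service             # unit state plus last log lines
  journalctl -xeu docker.service --no-pager   # full unit log with explanations
  # Running the daemon in the foreground also surfaces the startup error:
  # sudo dockerd --debug
}
# diagnose_docker
```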

@Foxhound401
Foxhound401 / semantic-commit-messages.md
Created December 13, 2021 11:14 β€” forked from joshbuchea/semantic-commit-messages.md
Semantic Commit Messages

See how a minor change to your commit message style can make you a better programmer.

Format: <type>(<scope>): <subject>

<scope> is optional

Example
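The example itself is cut off in this preview. As an illustration, a subject line in this format can be checked with a simple grep; the sample message is made up, and the type list is the conventional feat/fix/docs/style/refactor/test/chore set:

```shell
# Validate a commit subject against <type>(<scope>): <subject>.
# The sample message is hypothetical.
msg="feat(auth): add login flow"
if echo "$msg" | grep -Eq '^(feat|fix|docs|style|refactor|test|chore)(\([a-z-]+\))?: .+'; then
  echo "valid"
else
  echo "invalid"
fi
# prints "valid"
```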

@Foxhound401
Foxhound401 / gist:33431b88c62723304e6395bf58b42c9b
Created October 25, 2021 02:43 β€” forked from jwebcat/gist:5122366
Properly download from github using wget and curl
wget --no-check-certificate --content-disposition https://github.com/joyent/node/tarball/v0.7.1
# --no-check-certificate was necessary for me to have wget not puke about https
curl -LJO https://github.com/joyent/node/tarball/v0.7.1
@Foxhound401
Foxhound401 / .env
Created August 9, 2021 18:18 β€” forked from prappo/.env
env file for Laravel postgresql database
APP_ENV=local
APP_DEBUG=true
APP_KEY=base64:hyHUpukUUigKeEsxpGeTW4UZ+Lg+WAWxcc4/BjlgNtE=
APP_URL=http://localhost
DB_CONNECTION=pgsql
DB_HOST=hostname
DB_PORT=5432
DB_DATABASE=your_database_name
DB_USERNAME=your_database_username
@Foxhound401
Foxhound401 / self-signed-certificate-with-custom-ca.md
Created August 4, 2021 04:21 β€” forked from fntlnz/self-signed-certificate-with-custom-ca.md
Self Signed Certificate with Custom Root CA

Create Root CA (Done once)

Create Root Key

Attention: this is the key used to sign certificate requests. Anyone holding it can sign certificates on your behalf, so keep it in a safe place!

openssl genrsa -des3 -out rootCA.key 4096
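The next step with this root key is normally to self-sign the root certificate itself. A sketch: the subject values are placeholders, -subj fills in the subject non-interactively, and a key created with -des3 above will prompt for its passphrase here. The first line only creates a throwaway key so the sketch stands alone.

```shell
# For this sketch only: reuse rootCA.key if present, else make a throwaway one
[ -f rootCA.key ] || openssl genrsa -out rootCA.key 4096

# Self-sign the root certificate; subject values are placeholders
openssl req -x509 -new -key rootCA.key -sha256 -days 1024 \
  -out rootCA.crt -subj "/C=US/O=Example/CN=Example Root CA"
```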
@Foxhound401
Foxhound401 / clone repo with different email account
Last active September 1, 2021 11:25
Clone repos in Bitbucket with multiple email accounts
#!/bin/bash
####
# Helper for working with multiple Bitbucket accounts
#
# How to use:
# change YOUR_SSH_KEY to your SSH key name
#
# chmod +x gitdf.sh
# ./gitdf.sh git@bitbucket.org:eastplayer/your-repo.git
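The script body is not shown in this preview. A minimal sketch of what such a helper could do, assuming it pins one SSH key via GIT_SSH_COMMAND so git authenticates as the matching Bitbucket account; this is a hypothetical implementation, not the original gist's code:

```shell
#!/bin/bash
# Hypothetical sketch, not the original script: clone using one specific
# SSH key so the matching Bitbucket account is used.
gitdf_clone() {
  local key="$HOME/.ssh/YOUR_SSH_KEY"   # change to your SSH key name
  GIT_SSH_COMMAND="ssh -i $key -o IdentitiesOnly=yes" git clone "$1"
}
# Usage:
# gitdf_clone git@bitbucket.org:eastplayer/your-repo.git
```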
@Foxhound401
Foxhound401 / post-mortem.md
Created December 18, 2020 08:51 β€” forked from joewiz/post-mortem.md
Recovery from nginx "Too many open files" error on Amazon AWS Linux

On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:

2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)

2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...

An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.

  1. * Instead of using su to run ulimit on the nginx account, use ps aux | grep nginx to locate nginx's process IDs, then query each process's file handle limits with cat /proc/pid/limits (where pid is a process ID from ps; sudo may be necessary for the cat command).
  2. Added fs.file-max = 70000 to /etc/sysctl.conf
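The steps above can be sketched as commands; nginx may not be running where you try this, so the loop tolerates an empty PID list, and the root-only step is shown commented out:

```shell
# Step 1: locate nginx's PIDs and inspect each process's open-file limit
for pid in $(pgrep nginx || true); do
  grep 'open files' "/proc/$pid/limits"
done

# The same check works for any process, e.g. the current shell:
grep 'open files' "/proc/$$/limits"

# Step 2 (as root): raise the system-wide ceiling and apply it
#   echo 'fs.file-max = 70000' >> /etc/sysctl.conf
#   sysctl -p
```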
@Foxhound401
Foxhound401 / mongodb_cheat_sheet.md
Created November 8, 2020 13:18 β€” forked from bradtraversy/mongodb_cheat_sheet.md
MongoDB Cheat Sheet

Show All Databases

show dbs

Show Current Database