@marhan
marhan / release.sh
Created June 1, 2020 19:07 — forked from devster/release.sh
Release git tag script
#!/bin/bash
# Script to simplify the release flow.
# 1) Fetch the current release version
# 2) Increase the version (major, minor, patch)
# 3) Add a new git tag
# 4) Push the tag
# Parse command line options.
while getopts ":Mmpd" Option
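The listing cuts off after the getopts line. A minimal sketch of how the four steps above could continue, assuming tags of the form vMAJOR.MINOR.PATCH and treating -d as a dry-run flag (both are assumptions, not the author's original code):

while getopts ":Mmpd" Option; do
  case $Option in
    M) bump=major ;;
    m) bump=minor ;;
    p) bump=patch ;;
    d) dry_run=true ;;   # assumed meaning of -d
  esac
done
version=$(git describe --tags --abbrev=0)            # 1) fetch the current release version
IFS='.' read -r major minor patch <<< "${version#v}"
case $bump in                                        # 2) increase the version
  major) major=$((major + 1)); minor=0; patch=0 ;;
  minor) minor=$((minor + 1)); patch=0 ;;
  patch) patch=$((patch + 1)) ;;
esac
new_tag="v${major}.${minor}.${patch}"
git tag -a "$new_tag" -m "Release $new_tag"          # 3) add a new git tag
[ "$dry_run" != true ] && git push origin "$new_tag" # 4) push the tag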
@marhan
marhan / Dockerfile
Created April 12, 2020 20:22
Dockerfile for building a Gradle-based project and deploying it to Kubernetes with Skaffold
FROM gradle:6.3-jdk11
# Install Docker client
ARG DOCKER_VERSION=19.03.8
RUN curl -fsSL https://download.docker.com/linux/static/stable/`uname -m`/docker-$DOCKER_VERSION.tgz | tar --strip-components=1 -xz -C /usr/local/bin docker/docker
# Install Kubectl
RUN curl -LfsO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
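A possible way to build and sanity-check the resulting image (the tag gradle-skaffold-builder is a placeholder):

docker build -t gradle-skaffold-builder .
# the Docker CLI in the image only reaches a daemon if the host socket is mounted
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock gradle-skaffold-builder docker version
docker run --rm gradle-skaffold-builder kubectl version --client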
# Markus Bash Prompt, inspired by "Sexy Prompt"
if tput setaf 1 &> /dev/null; then
if [[ $(tput colors) -ge 256 ]] 2>/dev/null; then
MAGENTA=$(tput setaf 9)
ORANGE=$(tput setaf 172)
GREEN=$(tput setaf 190)
PURPLE=$(tput setaf 141)
WHITE=$(tput setaf 0)
else
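The listing stops inside the colour detection. A sketch of how the 8-colour fallback and the prompt assembly might continue (the concrete tput indexes and PS1 layout are assumptions):

    MAGENTA=$(tput setaf 5)
    ORANGE=$(tput setaf 4)
    GREEN=$(tput setaf 2)
    PURPLE=$(tput setaf 1)
    WHITE=$(tput setaf 7)
  fi
  BOLD=$(tput bold)
  RESET=$(tput sgr0)
fi
PS1="\[$BOLD$MAGENTA\]\u\[$WHITE\]@\[$ORANGE\]\h\[$WHITE\]:\[$GREEN\]\w\[$WHITE\]\$ \[$RESET\]"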
@marhan
marhan / docker-compose.yml
Created June 6, 2019 19:56
Docker compose file for Apache Zeppelin
version: '3'
services:
  zeppelin:
    build: ./zeppelin/docker
    image: zeppelin:latest
    ports:
      - 10000:8080
    volumes:
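With the compose file in place, Zeppelin can be started and reached like this (the port mapping comes from the file above):

docker-compose up -d zeppelin
open http://localhost:10000      # Zeppelin UI, mapped from container port 8080
docker-compose logs -f zeppelin  # follow the notebook server logs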
@marhan
marhan / read_psql_data.scala
Created June 2, 2019 12:57
Spark Scala read data from PostgreSQL
var account = spark.read
.format("jdbc")
.option("url", "jdbc:postgresql://localhost:5432/zeppelin")
.option("dbtable", "public.account")
.option("user", "zeppelin")
.option("password", "zeppelin")
.option("driver", "org.postgresql.Driver")
.load()
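Reading over JDBC needs the PostgreSQL driver on the Spark classpath. One way to provide it when launching the shell (the driver version is only an example):

spark-shell --packages org.postgresql:postgresql:42.2.5
# inside the shell, the snippet above yields a DataFrame, e.g.:
#   account.printSchema()
#   account.show(10)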
@marhan
marhan / import_psql.scala
Created June 2, 2019 12:55
Spark Scala Import Data into PostgreSQL
account.write
.format("jdbc")
.option("url", "jdbc:postgresql://localhost:5432/zeppelin")
.option("dbtable", "public.account")
.option("user", "zeppelin")
.option("password", "zeppelin")
.option("driver", "org.postgresql.Driver")
.mode("overwrite")
.save()
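Since .mode("overwrite") replaces public.account on every run, the result can be checked directly with psql (connection values mirror the options above):

psql -h localhost -p 5432 -U zeppelin -d zeppelin \
  -c 'SELECT count(*) FROM public.account;'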
@marhan
marhan / csv_import.scala
Created June 2, 2019 12:54
Spark Scala HASPA CSV Import
val filesPath = "/Volumes/Volume/bank-account-files/*.csv"
import org.apache.spark.sql.types.{StructType, StructField, StringType, DecimalType};
val customSchema = StructType( Array( StructField("Buchung", StringType, true),
StructField("Wert", StringType, true),
StructField("Verwendungszweck", StringType, true),
StructField("Betrag", StringType, true) ))
val df = sqlContext.read.format("com.databricks.spark.csv").
  option("header", "true").
  option("delimiter", ";").
  schema(customSchema).
  load(filesPath)   // options above are assumed; the gist listing is truncated after the format() call
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715915264 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
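Two of the suggested checks can be done quickly from a shell, e.g. on a Linux host (commands assumed available):

free -h          # physical RAM and swap currently in use
swapon --show    # swap backing store and its size
# alternatively cap the JVM heap, e.g. java -Xmx512m -jar app.jar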
.DS_Store
.DocumentRevisions-V100
.Spotlight-V100
.TemporaryItems
.Trashes
.com.apple.timemachine.donotpresent
.fseventsd
_gsdata_
@marhan
marhan / backup.sh
Created December 29, 2018 10:26
Shell script to back up data from one volume to another.
#!/bin/bash
# Mirror /Volumes/MyData to /Volumes/Backup/MyData, deleting files that no longer
# exist on the source and skipping the patterns listed in backup-excluded.txt.
rsync -avzi \
  --delete --delete-excluded --force --progress \
  --exclude-from /Volumes/MyData/backup-excluded.txt \
  /Volumes/MyData/ /Volumes/Backup/MyData/
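Because --delete and --delete-excluded remove files on the target, a dry run first is a sensible habit (same command with -n added):

rsync -avzin \
  --delete --delete-excluded --force --progress \
  --exclude-from /Volumes/MyData/backup-excluded.txt \
  /Volumes/MyData/ /Volumes/Backup/MyData/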