@delagoya
Last active March 24, 2025 15:41
Stand up an EC2 instance for Nextflow pipeline development
Description: Create a VS code-server instance with an Amazon CloudFront distribution for development of Nextflow pipelines. Version 4.0.0
Parameters:
VSCodeUser:
Type: String
Description: UserName for VS code-server
Default: developer
GitUserName:
Type: String
Description: Global Git user.name for repo commits
Default: "Anonymous"
GitUserEmail:
Type: String
Description: Global Git user.email for repo commits
Default: [email protected]
TowerAccessToken:
Type: String
Description: Tower access token for using Wave and Seqera Containers
NoEcho: true
InstanceName:
Type: String
Description: VS code-server EC2 instance name
Default: VSCodeServer
InstanceVolumeSize:
Type: Number
Description: VS code-server EC2 instance volume size in GB
Default: 100
InstanceType:
Description: VS code-server EC2 instance type. See https://aws.amazon.com/ec2/instance-types/ for possible values. NOTE - GPU drivers and software are not preinstalled. If you choose an accelerated instance, you will need to configure it post launch.
Type: String
Default: c7g.2xlarge
ConstraintDescription: Must be a valid C or M series, 7th or 8th generation EC2 instance type.
InstanceOperatingSystem:
Description: VS code-server EC2 operating system
Type: String
Default: AmazonLinux-2023
AllowedValues: ['AmazonLinux-2023', 'Ubuntu-22', 'Ubuntu-24']
HomeFolder:
Type: String
Description: Folder to open in VS Code server
Default: /workdir
DevServerBasePath:
Type: String
Description: Base path for the application to be added to Nginx sites-available list
Default: app
DevServerPort:
Type: Number
Description: Port for the DevServer
Default: 8081
RepoUrl:
Description: Remote repo URL to clone. Leave blank to skip cloning a repo
Type: String
Default: 'https://github.com/nf-core/rnaseq.git'
Conditions:
IsAL2023: !Equals [!Ref InstanceOperatingSystem, 'AmazonLinux-2023']
IsGraviton: !Or
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'c7g']
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'c7gd']
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'c8g']
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'm7g']
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'm7gd']
- !Equals [ !Select [ 0, !Split ['.', !Ref InstanceType ]], 'm8g']
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: Instance Configuration
Parameters:
- InstanceName
- InstanceVolumeSize
- InstanceType
- InstanceOperatingSystem
- Label:
default: Code Server Configuration
Parameters:
- VSCodeUser
- GitUserName
- GitUserEmail
- TowerAccessToken
- HomeFolder
- DevServerBasePath
- DevServerPort
- RepoUrl
ParameterLabels:
VSCodeUser:
default: VS code-server user name
GitUserName:
default: Git user.name
GitUserEmail:
default: Git user.email
TowerAccessToken:
default: TOWER_ACCESS_TOKEN
InstanceName:
default: Instance name
InstanceVolumeSize:
default: Instance volume size
InstanceType:
default: Instance type
InstanceOperatingSystem:
default: Instance operating system
HomeFolder:
default: VS code-server home folder
DevServerBasePath:
default: Application base path
DevServerPort:
default: Application port
RepoUrl:
default: Git repo URL
Mappings:
ArmImage:
Ubuntu-22:
ImageId: '{{resolve:ssm:/aws/service/canonical/ubuntu/server/jammy/stable/current/arm64/hvm/ebs-gp2/ami-id}}'
Ubuntu-24:
ImageId: '{{resolve:ssm:/aws/service/canonical/ubuntu/server/noble/stable/current/arm64/hvm/ebs-gp3/ami-id}}'
AmazonLinux-2023:
ImageId: '{{resolve:ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64}}'
AmdImage:
Ubuntu-22:
ImageId: '{{resolve:ssm:/aws/service/canonical/ubuntu/server/jammy/stable/current/amd64/hvm/ebs-gp2/ami-id}}'
Ubuntu-24:
ImageId: '{{resolve:ssm:/aws/service/canonical/ubuntu/server/noble/stable/current/amd64/hvm/ebs-gp3/ami-id}}'
AmazonLinux-2023:
ImageId: '{{resolve:ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64}}'
AWSRegionsPrefixListID:
# aws ec2 describe-managed-prefix-lists --region <REGION> | jq -r '.PrefixLists[] | select (.PrefixListName == "com.amazonaws.global.cloudfront.origin-facing") | .PrefixListId'
ap-northeast-1:
PrefixList: pl-58a04531
ap-northeast-2:
PrefixList: pl-22a6434b
ap-south-1:
PrefixList: pl-9aa247f3
ap-southeast-1:
PrefixList: pl-31a34658
ap-southeast-2:
PrefixList: pl-b8a742d1
ca-central-1:
PrefixList: pl-38a64351
eu-central-1:
PrefixList: pl-a3a144ca
eu-north-1:
PrefixList: pl-fab65393
eu-west-1:
PrefixList: pl-4fa04526
eu-west-2:
PrefixList: pl-93a247fa
eu-west-3:
PrefixList: pl-75b1541c
sa-east-1:
PrefixList: pl-5da64334
us-east-1:
PrefixList: pl-3b927c52
us-east-2:
PrefixList: pl-b6a144df
us-west-1:
PrefixList: pl-4ea04527
us-west-2:
PrefixList: pl-82a045eb
Resources:
VSCodeSecret:
Metadata:
cfn_nag:
rules_to_suppress:
- id: W77
reason: The default KMS key used by Secrets Manager is appropriate for this password, which is only used to log into VSCodeServer, an instance with very limited permissions. In addition, this secret does not need to be shared across accounts
Type: AWS::SecretsManager::Secret
DeletionPolicy: Delete
UpdateReplacePolicy: Delete
Properties:
Name: !Sub
- ${InstanceName}-${RandomGUID}
- RandomGUID: !Select [0, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId ]]]]
Description: VS code-server user details
GenerateSecretString:
PasswordLength: 16
SecretStringTemplate: !Sub '{"username":"${VSCodeUser}"}'
GenerateStringKey: 'password'
ExcludePunctuation: true
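# To retrieve the generated login password after deployment (example command; assumes the AWS CLI and jq
# are installed, and <secret-name> is the ${InstanceName}-${RandomGUID} name created above):
# aws secretsmanager get-secret-value --secret-id <secret-name> --query SecretString --output text | jq -r .password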
SecretPlaintextLambdaRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: !Sub lambda.${AWS::URLSuffix}
Action: sts:AssumeRole
ManagedPolicyArns:
- !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: AwsSecretsManager
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- secretsmanager:GetSecretValue
Resource: !Ref VSCodeSecret
SecretPlaintextLambda:
Type: AWS::Lambda::Function
Metadata:
cfn_nag:
rules_to_suppress:
- id: W58
reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html
- id: W89
reason: A CloudFormation custom resource function does not need the scaffolding of a VPC; adding one would add unnecessary complexity
- id: W92
reason: A CloudFormation custom resource function does not need reserved concurrent executions; adding them would add unnecessary complexity
Properties:
Description: Return the value of the secret
Handler: index.lambda_handler
Runtime: python3.12
MemorySize: 128
Timeout: 10
Architectures:
- arm64
Role: !GetAtt SecretPlaintextLambdaRole.Arn
Code:
ZipFile: |
import boto3
import json
import cfnresponse
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def is_valid_json(json_string):
logger.debug(f'Calling is_valid_json:{json_string}')
try:
json.loads(json_string)
logger.info('Secret is in json format')
return True
except json.JSONDecodeError:
logger.info('Secret is in string format')
return False
def lambda_handler(event, context):
logger.debug(f'event: {event}')
logger.debug(f'context: {context}')
try:
if event['RequestType'] == 'Delete':
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take')
else:
resource_properties = event['ResourceProperties']
secret_name = resource_properties['SecretArn']
secrets_mgr = boto3.client('secretsmanager')
logger.info('Getting secret from %s', secret_name)
secret = secrets_mgr.get_secret_value(SecretId = secret_name)
logger.debug(f'secret: {secret}')
secret_value = secret['SecretString']
responseData = {}
if is_valid_json(secret_value):
responseData = json.loads(secret_value)
else:
responseData = {'secret': secret_value}
logger.debug(f'responseData: {responseData}')
logger.debug(f'type(responseData): {type(responseData)}')
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData=responseData, reason='OK', noEcho=True)
except Exception as e:
logger.error(e)
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e))
SecretPlaintext:
Type: Custom::SecretPlaintextLambda
Properties:
ServiceToken: !GetAtt SecretPlaintextLambda.Arn
ServiceTimeout: 15
SecretArn: !Ref VSCodeSecret
VSCodeSSMDoc:
Type: AWS::SSM::Document
Properties:
DocumentType: Command
Content:
schemaVersion: '2.2'
description: Bootstrap VS code-server instance
parameters:
LinuxFlavor:
type: String
default: 'al2023'
VSCodePassword:
type: String
default: !Ref AWS::StackId
# all mainSteps scripts are in /var/lib/amazon/ssm/<instanceid>/document/orchestration/<uuid>/<StepName>/_script.sh
mainSteps:
- name: Sleep30s1
action: aws:runShellScript
inputs:
timeoutSeconds: 60
runCommand:
- '#!/bin/bash'
- 'sleep 30'
- name: InstallCloudWatchAgent
action: aws:configurePackage
inputs:
name: AmazonCloudWatchAgent
action: Install
- name: ConfigureCloudWatchAgent
action: aws:runDocument
inputs:
documentType: SSMDocument
documentPath: AmazonCloudWatch-ManageAgent
documentParameters:
action: configure
mode: ec2
optionalConfigurationSource: default
optionalRestart: 'yes'
- name: Sleep30s2
action: aws:runShellScript
inputs:
timeoutSeconds: 60
runCommand:
- '#!/bin/bash'
- 'sleep 30'
- name: InstallAptPackagesApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q apt-utils
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q needrestart unattended-upgrades
- sed -i 's/#$nrconf{kernelhints} = -1;/$nrconf{kernelhints} = 0;/' /etc/needrestart/needrestart.conf
- sed -i 's/#$nrconf{verbosity} = 2;/$nrconf{verbosity} = 0;/' /etc/needrestart/needrestart.conf
- sed -i "s/#\$nrconf{restart} = 'i';/\$nrconf{restart} = 'a';/" /etc/needrestart/needrestart.conf
- echo "Apt helper packages added. Checking configuration"
- cat /etc/needrestart/needrestart.conf
- name: InstallBasePackagesDnf
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- al2023
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dnf install -y --allowerasing curl gnupg whois argon2 unzip nginx openssl
- name: InstallBasePackagesApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q curl gnupg whois argon2 unzip nginx openssl locales locales-all apt-transport-https ca-certificates software-properties-common
- name: AddUserDnf
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- al2023
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- !Sub |
echo 'Adding user: ${VSCodeUser}'
adduser -c '' ${VSCodeUser}
passwd -l ${VSCodeUser}
echo "${VSCodeUser}:{{ VSCodePassword }}" | chpasswd
usermod -aG wheel ${VSCodeUser}
echo '%wheel ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/90-cloud-init-users
- !Sub mkdir -p /home/${VSCodeUser}/.local/bin && chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- echo "User added. Checking configuration"
- !Sub getent passwd ${VSCodeUser}
- name: AddUserApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- !Sub |
if [[ "${VSCodeUser}" == "ubuntu" ]]
then
echo 'Using existing user: ${VSCodeUser}'
else
echo 'Adding user: ${VSCodeUser}'
adduser --disabled-password --gecos '' ${VSCodeUser}
echo "${VSCodeUser}:{{ VSCodePassword }}" | chpasswd
usermod -aG sudo ${VSCodeUser}
fi
- !Sub |
tee /etc/sudoers.d/91-vscode-user <<EOF
${VSCodeUser} ALL=(ALL) NOPASSWD:ALL
EOF
- !Sub mkdir -p /home/${VSCodeUser} && chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- !Sub mkdir -p /home/${VSCodeUser}/.local/bin && chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- echo "User added. Checking configuration"
- !Sub getent passwd ${VSCodeUser}
- name: UpdateProfile
action: aws:runShellScript
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- echo LANG=en_US.utf-8 >> /etc/environment
- echo LC_ALL=en_US.UTF-8 >> /etc/environment
- !Sub echo 'PATH=$PATH:/home/${VSCodeUser}/.local/bin' >> /home/${VSCodeUser}/.bashrc
- !Sub echo 'export PATH' >> /home/${VSCodeUser}/.bashrc
- !Sub echo 'export AWS_REGION=${AWS::Region}' >> /home/${VSCodeUser}/.bashrc
- !Sub echo 'export AWS_ACCOUNTID=${AWS::AccountId}' >> /home/${VSCodeUser}/.bashrc
- !Sub echo 'export NEXT_TELEMETRY_DISABLED=1' >> /home/${VSCodeUser}/.bashrc
- !Sub echo "export PS1='\[\033[01;32m\]\u:\[\033[01;34m\]\w\[\033[00m\]\$ '" >> /home/${VSCodeUser}/.bashrc
- !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- name: InstallAWSCLI
action: aws:runShellScript
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- mkdir -p /tmp
- curl -fsSL https://awscli.amazonaws.com/awscli-exe-linux-$(uname -m).zip -o /tmp/aws-cli.zip
- !Sub chown -R ${VSCodeUser}:${VSCodeUser} /tmp/aws-cli.zip
- unzip -q -d /tmp /tmp/aws-cli.zip
- sudo /tmp/aws/install
- rm -rf /tmp/aws
- echo "AWS CLI installed. Checking configuration"
- aws --version
- name: InstallGitDnf
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- al2023
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dnf install -y git
- !Sub sudo -u ${VSCodeUser} git config --global user.email '${GitUserEmail}'
- !Sub sudo -u ${VSCodeUser} git config --global user.name '${GitUserName}'
- !Sub sudo -u ${VSCodeUser} git config --global init.defaultBranch "main"
- echo "Git installed. Checking configuration"
- git --version
- !Sub |
if [[ "${RepoUrl}" =~ ^codecommit ]];
then
dnf install -y python3-pip
echo "Installing git-remote-codecommit with pip3"
PIP_BREAK_SYSTEM_PACKAGES=1 pip3 install git-remote-codecommit
fi
- name: InstallGitApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- add-apt-repository ppa:git-core/ppa
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q git
- !Sub sudo -u ${VSCodeUser} git config --global user.email '${GitUserEmail}'
- !Sub sudo -u ${VSCodeUser} git config --global user.name '${GitUserName}'
- !Sub sudo -u ${VSCodeUser} git config --global init.defaultBranch "main"
- echo "Git installed. Checking configuration"
- git --version
- !Sub |
if [[ "${RepoUrl}" =~ ^codecommit ]];
then
apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q python3-pip
echo "Installing git-remote-codecommit with pip3"
PIP_BREAK_SYSTEM_PACKAGES=1 pip3 install git-remote-codecommit
fi
- name: InstallDockerApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q apt-transport-https ca-certificates curl gnupg-agent software-properties-common
- install -m 0755 -d /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
- chmod a+r /etc/apt/keyrings/docker.asc
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- service docker start
- !Sub usermod -aG docker ${VSCodeUser}
- newgrp docker
- echo "Docker installed. Checking configuration"
- !Sub sudo -u ${VSCodeUser} docker run --rm hello-world
- name: InstallDockerDnf
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- al2023
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dnf install -y dnf-plugins-core
- dnf install -y docker amazon-ecr-credential-helper
- systemctl start docker
- systemctl enable docker
- !Sub usermod -aG docker ${VSCodeUser}
- newgrp docker
- echo "Docker installed. Checking configuration"
- !Sub sudo -u ${VSCodeUser} docker run --rm hello-world
- name: InstallDeveloperToolsDnf
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- al2023
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dnf groupinstall -y "Development Tools"
- echo "Developer tools installed. Checking configuration"
- !Sub rpm -q --whatprovides /usr/bin/gcc
- name: InstallDeveloperToolsApt
action: aws:runShellScript
precondition:
StringEquals:
- '{{ LinuxFlavor }}'
- ubuntu
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- dpkg --configure -a
- apt-get -q update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q build-essential
- echo "Developer tools installed. Checking configuration"
- !Sub dpkg -s build-essential
- name: InstallMiniConda
action: aws:runShellScript
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- mkdir -p /tmp/miniconda
- echo "IsGraviton=${IsGraviton}."
- !If
- IsGraviton
- curl -fsSL https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -o /tmp/miniconda/miniconda-install.sh
- curl -fsSL https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o /tmp/miniconda/miniconda-install.sh
- !Sub bash /tmp/miniconda/miniconda-install.sh -b -p /home/${VSCodeUser}/.miniconda
- !Sub echo 'export PATH=/home/${VSCodeUser}/.miniconda/bin:$PATH' >> /home/${VSCodeUser}/.bashrc
- !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- rm -rf /tmp/miniconda
- echo "Miniconda installed. Checking configuration"
- !Sub sudo -Hiu ${VSCodeUser} /home/${VSCodeUser}/.miniconda/bin/conda --version
- name: InstallNfCoreTools
action: aws:runShellScript
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- !Sub |
sudo -Hiu ${VSCodeUser} /bin/bash << END
# source /home/${VSCodeUser}/.bashrc
/home/${VSCodeUser}/.miniconda/bin/conda config --add channels bioconda
/home/${VSCodeUser}/.miniconda/bin/conda config --add channels conda-forge
/home/${VSCodeUser}/.miniconda/bin/conda config --set channel_priority strict
/home/${VSCodeUser}/.miniconda/bin/conda install -y "openjdk>=21"
/home/${VSCodeUser}/.miniconda/bin/conda install -y nextflow nf-core
END
- name: SetTowerAccessToken
action: aws:runShellScript
inputs:
timeoutSeconds: 300
runCommand:
- '#!/bin/bash'
- !Sub |
if [[ -n "${TowerAccessToken}" ]]
then
sudo -Hiu ${VSCodeUser} bash -c 'echo "export TOWER_ACCESS_TOKEN=\"${TowerAccessToken}\"" >> /home/${VSCodeUser}/.bashrc'
fi
- name: CloneRepo
action: aws:runShellScript
inputs:
timeoutSeconds: 600
runCommand:
- '#!/bin/bash'
- !Sub |
if [[ -z "${RepoUrl}" ]]
then
echo "No Repo"
else
mkdir -p ${HomeFolder} && chown -R ${VSCodeUser}:${VSCodeUser} ${HomeFolder}
cd ${HomeFolder}
RepoUrlRegion=$(echo "${RepoUrl}" | sed "s/{.Region}/${AWS::Region}/g")
sudo -u ${VSCodeUser} git clone $RepoUrlRegion
echo "Repo $RepoUrlRegion cloned. Checking configuration"
ls -la ${HomeFolder}
fi
- name: ConfigureCodeServer
action: aws:runShellScript
inputs:
timeoutSeconds: 600
runCommand:
- '#!/bin/bash'
- !Sub export HOME=/home/${VSCodeUser}
- curl -fsSL https://code-server.dev/install.sh | bash -s -- 2>&1
- !Sub systemctl enable --now code-server@${VSCodeUser} 2>&1
- !Sub |
tee /etc/nginx/conf.d/code-server.conf <<EOF
server {
listen 80;
listen [::]:80;
# server_name \$\{CloudFrontDistribution.DomainName\};
server_name *.cloudfront.net;
location / {
proxy_pass http://localhost:8080/;
proxy_set_header Host \$host;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection upgrade;
proxy_set_header Accept-Encoding gzip;
}
location /${DevServerBasePath} {
proxy_pass http://localhost:${DevServerPort}/${DevServerBasePath};
proxy_set_header Host \$host;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection upgrade;
proxy_set_header Accept-Encoding gzip;
}
}
EOF
- !Sub mkdir -p /home/${VSCodeUser}/.config/code-server
- !Sub |
tee /home/${VSCodeUser}/.config/code-server/config.yaml <<EOF
cert: false
auth: password
hashed-password: "$(echo -n {{ VSCodePassword }} | argon2 $(openssl rand -base64 12) -e)"
EOF
- !Sub mkdir -p /home/${VSCodeUser}/.local/share/code-server/User/
- !Sub touch /home/${VSCodeUser}/.hushlogin
- !Sub mkdir -p ${HomeFolder} && chown -R ${VSCodeUser}:${VSCodeUser} ${HomeFolder}
- !Sub |
tee /home/${VSCodeUser}/.local/share/code-server/User/settings.json <<EOF
{
"extensions.autoUpdate": false,
"extensions.autoCheckUpdates": false,
"telemetry.telemetryLevel": "off",
"security.workspace.trust.startupPrompt": "never",
"security.workspace.trust.enabled": false,
"security.workspace.trust.banner": "never",
"security.workspace.trust.emptyWindow": false,
"auto-run-command.rules": [
{
"command": "workbench.action.terminal.new"
}
]
}
EOF
- !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- !Sub systemctl restart code-server@${VSCodeUser}
- systemctl restart nginx
- !Sub sudo -u ${VSCodeUser} --login code-server --install-extension AmazonWebServices.aws-toolkit-vscode --force
# - !Sub sudo -u ${VSCodeUser} --login code-server --install-extension AmazonWebServices.amazon-q-vscode --force
- !Sub sudo -u ${VSCodeUser} --login code-server --install-extension ms-vscode.live-server --force
- !Sub sudo -u ${VSCodeUser} --login code-server --install-extension synedra.auto-run-command --force
- !Sub sudo -u ${VSCodeUser} --login code-server --install-extension nextflow.nextflow --force
- !Sub sudo -u ${VSCodeUser} --login code-server --install-extension nf-core.nf-core-extensionpack --force
- !Sub chown -R ${VSCodeUser}:${VSCodeUser} /home/${VSCodeUser}
- echo "Nginx installed. Checking configuration"
- nginx -t 2>&1
- systemctl status nginx
- echo "CodeServer installed. Checking configuration"
- code-server -v
- !Sub systemctl status code-server@${VSCodeUser}
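# The hashed-password written to config.yaml above is what code-server checks logins against. An equivalent
# manual hash can be produced with the argon2 CLI (installed in the base-packages step), e.g.:
# echo -n '<password>' | argon2 "$(openssl rand -base64 12)" -e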
SSMDocLambdaRole:
Type: AWS::IAM::Role
Metadata:
cfn_nag:
rules_to_suppress:
- id: W11
reason: The Amazon EC2 ssm:*CommandInvocation API actions do not support resource-level permissions, so you cannot control which individual resources users can view in the console. Therefore, the * wildcard is necessary in the Resource element. See https://docs.aws.amazon.com/service-authorization/latest/reference/list_awssystemsmanager.html
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: !Sub lambda.${AWS::URLSuffix}
Action: sts:AssumeRole
ManagedPolicyArns:
- !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: SSMDocOnEC2
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- ssm:SendCommand
Resource:
- !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/${VSCodeSSMDoc}
- !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/AmazonCloudWatch-ManageAgent
- !Sub arn:${AWS::Partition}:ec2:${AWS::Region}:${AWS::AccountId}:instance/${VSCodeInstance}
- Effect: Allow
Action:
- ssm:ListCommandInvocations
- ssm:GetCommandInvocation
Resource: '*'
RunSSMDocLambda:
Type: AWS::Lambda::Function
Metadata:
cfn_nag:
rules_to_suppress:
- id: W58
reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html
- id: W89
reason: A CloudFormation custom resource function does not need the scaffolding of a VPC; adding one would add unnecessary complexity
- id: W92
reason: A CloudFormation custom resource function does not need reserved concurrent executions; adding them would add unnecessary complexity
Properties:
Description: Run SSM document on EC2 instance
Handler: index.lambda_handler
Runtime: python3.12
MemorySize: 128
Timeout: 600
Environment:
Variables:
RetrySleep: 2900
AbortTimeRemaining: 3200
Architectures:
- arm64
Role: !GetAtt SSMDocLambdaRole.Arn
Code:
ZipFile: |
import boto3
import cfnresponse
import logging
import time
import os
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def lambda_handler(event, context):
logger.debug(f'event: {event}')
logger.debug(f'context: {context}')
if event['RequestType'] != 'Create':
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take')
else:
sleep_ms = int(os.environ.get('RetrySleep'))
abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining'))
resource_properties = event['ResourceProperties']
instance_id = resource_properties['InstanceId']
document_name = resource_properties['DocumentName']
cloudwatch_log_group_name = resource_properties['CloudWatchLogGroupName']
logger.info(f'Running SSM Document {document_name} on EC2 instance {instance_id}. Logging to {cloudwatch_log_group_name}')
del resource_properties['ServiceToken']
if 'ServiceTimeout' in resource_properties:
del resource_properties['ServiceTimeout']
del resource_properties['InstanceId']
del resource_properties['DocumentName']
del resource_properties['CloudWatchLogGroupName']
if 'PhysicalResourceId' in resource_properties:
del resource_properties['PhysicalResourceId']
logger.debug(f'resource_properties filtered: {resource_properties}')
parameters = {}
for key, value in resource_properties.items():
parameters[key] = [value]
logger.debug(f'parameters: {parameters}')
retry = True
attempt_no = 0
time_remaining_ms = context.get_remaining_time_in_millis()
ssm = boto3.client('ssm')
while (retry == True):
attempt_no += 1
logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s')
try:
response = ssm.send_command(
InstanceIds = [instance_id],
DocumentName = document_name,
CloudWatchOutputConfig = {'CloudWatchLogGroupName': cloudwatch_log_group_name, 'CloudWatchOutputEnabled': True},
Parameters = parameters
)
logger.debug(f'response: {response}')
command_id = response['Command']['CommandId']
responseData = {'CommandId': command_id}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, reason='OK')
retry = False
except ssm.exceptions.InvalidInstanceId as e:
time_remaining_ms = context.get_remaining_time_in_millis()
if (time_remaining_ms > abort_time_remaining_ms):
logger.info(f'Instance {instance_id} not ready. Sleeping: {sleep_ms/1000}s')
time.sleep(sleep_ms/1000)
retry = True
else:
logger.info(f'Instance {instance_id} not ready, timed out. Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s')
logger.error(e, exc_info=True)
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='Timed out. Time remaining: ' + str(time_remaining_ms/1000) + 's < Abort time remaining: ' + str(abort_time_remaining_ms/1000) + 's')
retry = False
except Exception as e:
logger.error(e, exc_info=True)
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e))
retry = False
RunVSCodeSSMDoc:
Type: Custom::RunSSMDocLambda
Properties:
ServiceToken: !GetAtt RunSSMDocLambda.Arn
ServiceTimeout: 305
InstanceId: !Ref VSCodeInstance
DocumentName: !Ref VSCodeSSMDoc
CloudWatchLogGroupName: !Sub /aws/ssm/${VSCodeSSMDoc}
VSCodePassword: !GetAtt SecretPlaintext.password
LinuxFlavor: !If [IsAL2023, 'al2023', 'ubuntu']
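# Bootstrap progress can be followed in the CloudWatch log group /aws/ssm/<doc-name> configured above,
# or queried with, for example (assumes the AWS CLI is installed):
# aws ssm list-command-invocations --instance-id <instance-id> --details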
VSCodeInstanceBootstrapRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- !Sub ec2.${AWS::URLSuffix}
- !Sub ssm.${AWS::URLSuffix}
Action: sts:AssumeRole
ManagedPolicyArns:
- !Sub arn:${AWS::Partition}:iam::aws:policy/AmazonSSMManagedInstanceCore
- !Sub arn:${AWS::Partition}:iam::aws:policy/CloudWatchAgentServerPolicy
- !Sub arn:${AWS::Partition}:iam::aws:policy/AmazonQDeveloperAccess
- !Sub arn:${AWS::Partition}:iam::aws:policy/ReadOnlyAccess
- !Sub arn:${AWS::Partition}:iam::aws:policy/AWSCodeCommitPowerUser
VSCodeInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref VSCodeInstanceBootstrapRole
VSCodeInstance:
Type: AWS::EC2::Instance
Properties:
ImageId: !If
- IsGraviton
- !FindInMap [ArmImage, !Ref InstanceOperatingSystem, ImageId]
- !FindInMap [AmdImage, !Ref InstanceOperatingSystem, ImageId]
InstanceType: !Ref InstanceType
BlockDeviceMappings:
- DeviceName: !If [IsAL2023, /dev/xvda, /dev/sda1]
Ebs:
VolumeSize: !Ref InstanceVolumeSize
VolumeType: gp3
DeleteOnTermination: true
Encrypted: true
Monitoring: true
SecurityGroupIds:
- !Ref SecurityGroup
IamInstanceProfile: !Ref VSCodeInstanceProfile
UserData:
Fn::Base64: !Sub |
#cloud-config
hostname: ${InstanceName}
runcmd:
- mkdir -p ${HomeFolder} && chown -R ${VSCodeUser}:${VSCodeUser} ${HomeFolder}
Tags:
- Key: Name
Value: !Ref InstanceName
VSCodeInstanceCachePolicy:
Type: AWS::CloudFront::CachePolicy
Properties:
CachePolicyConfig:
DefaultTTL: 86400
MaxTTL: 31536000
MinTTL: 1
Name: !Sub
- ${InstanceName}-${RandomGUID}
- RandomGUID: !Select [0, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId ]]]]
ParametersInCacheKeyAndForwardedToOrigin:
CookiesConfig:
CookieBehavior: all
EnableAcceptEncodingGzip: False
HeadersConfig:
HeaderBehavior: whitelist
Headers:
- Accept-Charset
- Authorization
- Origin
- Accept
- Referer
- Host
- Accept-Language
- Accept-Encoding
- Accept-Datetime
QueryStringsConfig:
QueryStringBehavior: all
CloudFrontDistribution:
Type: AWS::CloudFront::Distribution
Metadata:
cfn_nag:
rules_to_suppress:
- id: W10
reason: CloudFront Distribution access logging would require setup of an S3 bucket and changes in IAM, which add unnecessary complexity to the template
- id: W70
reason: Workshop Studio does not include a domain that can be used to provision a certificate, so it is not possible to set up TLS. See PFR EE-6016
Properties:
DistributionConfig:
Enabled: True
HttpVersion: http2and3
CacheBehaviors:
- AllowedMethods:
- GET
- HEAD
- OPTIONS
- PUT
- PATCH
- POST
- DELETE
CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html#managed-cache-policy-caching-disabled
Compress: False
OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3 # Managed-AllViewer - see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html#:~:text=When%20using%20AWS,47e4%2Db989%2D5492eafa07d3
TargetOriginId: !Sub CloudFront-${AWS::StackName}
ViewerProtocolPolicy: allow-all
PathPattern: '/proxy/*'
DefaultCacheBehavior:
AllowedMethods:
- GET
- HEAD
- OPTIONS
- PUT
- PATCH
- POST
- DELETE
CachePolicyId: !Ref VSCodeInstanceCachePolicy
OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3 # Managed-AllViewer - see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html#:~:text=When%20using%20AWS,47e4%2Db989%2D5492eafa07d3
TargetOriginId: !Sub CloudFront-${AWS::StackName}
ViewerProtocolPolicy: allow-all
Origins:
- DomainName: !GetAtt VSCodeInstance.PublicDnsName
Id: !Sub CloudFront-${AWS::StackName}
CustomOriginConfig:
OriginProtocolPolicy: http-only
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Metadata:
cfn_nag:
rules_to_suppress:
- id: F1000
reason: All outbound traffic should be allowed from this instance. The EC2 instance is provisioned in the default VPC, which already has this egress rule, and it is not possible to duplicate this egress rule in the default VPC
Properties:
GroupDescription: SG for VSCodeServer - only allow CloudFront ingress
SecurityGroupIngress:
- Description: Allow HTTP from com.amazonaws.global.cloudfront.origin-facing
IpProtocol: tcp
FromPort: 80
ToPort: 80
SourcePrefixListId: !FindInMap [AWSRegionsPrefixListID, !Ref 'AWS::Region', PrefixList]
VSCodeHealthCheckLambdaRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: !Sub lambda.${AWS::URLSuffix}
Action: sts:AssumeRole
ManagedPolicyArns:
- !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
VSCodeHealthCheckLambda:
Type: AWS::Lambda::Function
Metadata:
cfn_nag:
rules_to_suppress:
- id: W58
reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html
- id: W89
reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity
- id: W92
reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity
Properties:
Description: Run health check on VS code-server instance
Handler: index.lambda_handler
Runtime: python3.12
MemorySize: 128
Timeout: 600
Environment:
Variables:
RetrySleep: 2900
AbortTimeRemaining: 5000
Architectures:
- arm64
Role: !GetAtt VSCodeHealthCheckLambdaRole.Arn
Code:
ZipFile: |
import json
import cfnresponse
import logging
import time
import os
import http.client
from urllib.parse import urlparse
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def healthURLOk(url):
# Using try block to catch connection errors and JSON conversion errors
try:
logger.debug(f'url: {url}')
parsed_url = urlparse(url)
if parsed_url.scheme == 'https':
logger.debug(f'Trying https: {parsed_url.netloc}. Parsed_url: {parsed_url}')
conn = http.client.HTTPSConnection(parsed_url.netloc)
else:
logger.debug(f'Trying http: {parsed_url.netloc}. Parsed_url: {parsed_url}')
conn = http.client.HTTPConnection(parsed_url.netloc)
conn.request("GET", parsed_url.path or "/")
response = conn.getresponse()
logger.debug(f'response: {response}')
logger.debug(f'response.status: {response.status}')
content = response.read()
logger.debug(f'content: {content}')
# This will be true for any return code below 4xx (so 3xx and 2xx)
if 200 <= response.status < 400:
response_dict = json.loads(content.decode('utf-8'))
logger.debug(f'response_dict: {response_dict}')
# Checking for expected keys and if the key has the expected value
if 'status' in response_dict and (response_dict['status'].lower() == 'alive' or response_dict['status'].lower() == 'expired'):
# Response code 200 and correct JSON returned
logger.info(f"Health check OK. Status: {response_dict['status'].lower()}")
return True
else:
# Response code 200 but the 'status' key is either not present or does not have the value 'alive' or 'expired'
logger.info(f"Health check failed. 'status' key missing or unexpected: {response_dict.get('status')}")
return False
else:
# Response was not ok (error 4xx or 5xx)
logger.info(f'Healthcheck failed. Return code: {response.status}')
return False
except http.client.HTTPException as e:
# Malformed request/response or protocol error; note DNS resolution failures raise OSError and are caught by the general handler below
logger.error(e, exc_info=True)
logger.error('Healthcheck failed: HTTP exception. URL invalid and/or endpoint not ready yet')
return False
except json.decoder.JSONDecodeError as e:
# The response we got was not a properly formatted JSON
logger.error(e, exc_info=True)
logger.info(f'Healthcheck failed: Did not get JSON object from URL as expected')
return False
except Exception as e:
logger.error(e, exc_info=True)
logger.info(f'Healthcheck failed: General error')
return False
finally:
if 'conn' in locals():
conn.close()
def is_valid_json(json_string):
try:
json.loads(json_string)
return True
except ValueError:
return False
def lambda_handler(event, context):
logger.debug(f'event: {event}')
logger.debug(f'context: {context}')
try:
if event['RequestType'] != 'Create':
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take')
else:
sleep_ms = int(os.environ.get('RetrySleep'))
abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining'))
resource_properties = event['ResourceProperties']
url = resource_properties['Url']
logger.info(f'Testing url: {url}')
time_remaining_ms = context.get_remaining_time_in_millis()
attempt_no = 0
health_check = False
while (attempt_no == 0 or (time_remaining_ms > abort_time_remaining_ms and not health_check)):
attempt_no += 1
logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s')
health_check = healthURLOk(url)
if not health_check:
logger.debug(f'Healthcheck failed. Sleeping: {sleep_ms/1000}s')
time.sleep(sleep_ms/1000)
time_remaining_ms = context.get_remaining_time_in_millis()
if health_check:
logger.info(f'Health check successful. Attempts: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s')
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='VS code-server healthcheck successful')
else:
logger.info(f'Health check failed. Timed out. Attempts: {attempt_no}. Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s')
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='VS code-server healthcheck failed. Timed out after ' + str(attempt_no) + ' attempts')
logger.info(f'Response sent')
except Exception as e:
logger.error(e, exc_info=True)
logger.info(f'Health check failed. General exception')
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e))
Healthcheck:
Type: Custom::VSCodeHealthCheckLambda
Properties:
ServiceToken: !GetAtt VSCodeHealthCheckLambda.Arn
ServiceTimeout: 610
Url: !Sub https://${CloudFrontDistribution.DomainName}/healthz
CheckSSMDocLambda:
Type: AWS::Lambda::Function
Metadata:
cfn_nag:
rules_to_suppress:
- id: W58
reason: Warning incorrectly reported. The role associated with the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached, which includes permission to write CloudWatch Logs. See https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaBasicExecutionRole.html
- id: W89
reason: CloudFormation custom function does not need the scaffolding of a VPC, to do so would add unnecessary complexity
- id: W92
reason: CloudFormation custom function does not need reserved concurrent executions, to do so would add unnecessary complexity
Properties:
Description: Check SSM document on EC2 instance
Handler: index.lambda_handler
Runtime: python3.12
MemorySize: 128
Timeout: 600
Environment:
Variables:
RetrySleep: 2900
AbortTimeRemaining: 5000
Architectures:
- arm64
Role: !GetAtt SSMDocLambdaRole.Arn
Code:
ZipFile: |
import boto3
import cfnresponse
import logging
import time
import os
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def lambda_handler(event, context):
logger.debug(f'event: {event}')
logger.debug(f'context: {context}')
if event['RequestType'] != 'Create':
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='No action to take')
else:
sleep_ms = int(os.environ.get('RetrySleep'))
abort_time_remaining_ms = int(os.environ.get('AbortTimeRemaining'))
resource_properties = event['ResourceProperties']
instance_id = resource_properties['InstanceId']
document_name = resource_properties['DocumentName']
logger.info(f'Checking SSM Document {document_name} on EC2 instance {instance_id}')
retry = True
attempt_no = 0
time_remaining_ms = context.get_remaining_time_in_millis()
ssm = boto3.client('ssm')
while retry:
attempt_no += 1
logger.info(f'Attempt: {attempt_no}. Time Remaining: {time_remaining_ms/1000}s')
try:
# check to see if document has completed running on instance
response = ssm.list_command_invocations(
InstanceId=instance_id,
Details=True
)
logger.debug(f'Response: {response}')
for invocation in response['CommandInvocations']:
if invocation['DocumentName'] == document_name:
invocation_status = invocation['Status']
if invocation_status == 'Success':
logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} complete. Status: {invocation_status}')
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData={}, reason='OK')
retry = False
elif invocation_status == 'Failed' or invocation_status == 'Cancelled' or invocation_status == 'TimedOut':
logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} failed. Status: {invocation_status}')
reason = ''
# Get information on step that failed, otherwise it's cancelled or timeout
for step in invocation['CommandPlugins']:
step_name = step['Name']
step_status = step['Status']
step_output = step['Output']
logger.debug(f'Step {step_name} {step_status}: {step_output}')
if step_status != 'Success':
try:
response_step = ssm.get_command_invocation(
CommandId=invocation['CommandId'],
InstanceId=instance_id,
PluginName=step_name
)
logger.debug(f'Step details: {response_step}')
step_output = response_step['StandardErrorContent']
except Exception as e:
logger.error(e, exc_info=True)
logger.info(f'Step {step_name} {step_status}: {step_output}')
if reason == '':
reason = f'Step {step_name} {step_status}: {step_output}'
else:
reason += f'\nStep {step_name} {step_status}: {step_output}'
if reason == '':
reason = f'SSM Document {document_name} on EC2 instance {instance_id} failed. Status: {invocation_status}'
logger.info(f'{reason}')
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=reason)
retry = False
else:
logger.info(f'SSM Document {document_name} on EC2 instance {instance_id} not yet complete. Status: {invocation_status}')
retry = True
if retry:
if (time_remaining_ms > abort_time_remaining_ms):
logger.info(f'Sleeping: {sleep_ms/1000}s')
time.sleep(sleep_ms/1000)
time_remaining_ms = context.get_remaining_time_in_millis()
else:
logger.info(f'Time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s')
logger.info(f'Aborting check as time remaining {time_remaining_ms/1000}s < Abort time remaining {abort_time_remaining_ms/1000}s')
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason='Timed out. Time remaining: ' + str(time_remaining_ms/1000) + 's < Abort time remaining: ' + str(abort_time_remaining_ms/1000) + 's')
retry = False
except Exception as e:
logger.error(e, exc_info=True)
cfnresponse.send(event, context, cfnresponse.FAILED, responseData={}, reason=str(e))
retry = False
CheckVSCodeSSMDoc:
Type: Custom::CheckSSMDocLambda
DependsOn: Healthcheck
Properties:
ServiceToken: !GetAtt CheckSSMDocLambda.Arn
ServiceTimeout: 610
InstanceId: !Ref VSCodeInstance
DocumentName: !Ref VSCodeSSMDoc
Outputs:
URL:
Description: VSCode-Server URL
Value: !Sub https://${CloudFrontDistribution.DomainName}/?folder=${HomeFolder}
Password:
Description: VSCode-Server Password
Value: !GetAtt SecretPlaintext.password
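For reference, the health check Lambda above accepts any 2xx/3xx response whose JSON body carries a `status` of `alive` or `expired`. That acceptance rule can be exercised locally with a small standalone helper (a sketch mirroring the Lambda's logic, not part of the template):

```python
import json

def health_body_ok(status_code, body):
    """Mirror of the health check Lambda's acceptance rule: a 2xx/3xx
    status plus a JSON body whose 'status' field is 'alive' or
    'expired' (case-insensitive)."""
    if not 200 <= status_code < 400:
        # 4xx/5xx responses are always treated as unhealthy
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        # Body was not valid JSON (e.g. an Nginx error page)
        return False
    return str(payload.get('status', '')).lower() in ('alive', 'expired')

# A live endpoint reports alive; an expired workshop session still counts as healthy
assert health_body_ok(200, '{"status": "Alive"}')
assert health_body_ok(200, '{"status": "expired"}')
# 5xx, non-JSON bodies, and unexpected status values all fail
assert not health_body_ok(503, '{"status": "alive"}')
assert not health_body_ok(200, '<html>starting up</html>')
assert not health_body_ok(200, '{"status": "dead"}')
```

This is useful for checking what `https://<distribution-domain>/healthz` returns before blaming the CloudFormation custom resource for a rollback.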
@brainstorm

Useful, thanks Angel! Here's my UserData bit for OncoAnalyser w/ Aarch64 support:

# ----- cut here for instance userscript ----- #
sudo yum -y install git java-23-amazon-corretto-devel java-23-amazon-corretto tmux htop gcc gcc-c++ gdb zlib-devel zlib wget patch pigz
cd /opt && sudo wget https://services.gradle.org/distributions/gradle-8.13-bin.zip && sudo unzip gradle-8.13-bin.zip && sudo rm gradle-8.13-bin.zip && sudo ln -sf /opt/gradle-8.13/ /opt/gradle && export PATH=$PATH:/opt/gradle/bin && cd $HOME
wget https://github.com/aristocratos/btop/releases/download/v1.4.0/btop-aarch64-linux-musl.tbz && tar xvfj btop-aarch64-linux-musl.tbz && sudo mv btop/bin/btop /usr/local/bin && rm -rf btop*
wget https://kojipkgs.fedoraproject.org/packages/pbzip2/1.1.13/1.el8/aarch64/pbzip2-1.1.13-1.el8.aarch64.rpm && sudo rpm -i *.rpm && rm *.rpm
git clone https://github.com/umccr/oncoanalyser-arm64 oncoanalyser && cd oncoanalyser && git checkout pipeline_v6.0_aarch64
curl -s https://get.nextflow.io/ | bash
chmod +x nextflow && sudo mv nextflow /usr/local/bin

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
source ~/miniconda3/bin/activate
conda init --all
# ----- cut here for instance userscript ----- #

# Interactive dev usage follows

tmux
nextflow run . -resume -profile conda,test --outdir ../testrun-output-pipeline_v6.0
