To authenticate using IAM, create the cluster with:
ClientAuthentication:
  Sasl:
    Iam:
      Enabled: true
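The same setting can be supplied programmatically. The sketch below is a minimal boto3 equivalent of the snippet above; the cluster name, Kafka version, subnet IDs, instance type and broker count are placeholder values, not part of the original configuration.

import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

response = kafka.create_cluster(
    ClusterName="example-cluster",          # placeholder name
    KafkaVersion="3.5.1",                   # placeholder; use the version you actually run
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],  # placeholders
    },
    # Same structure as the YAML above: enable SASL/IAM client authentication.
    ClientAuthentication={"Sasl": {"Iam": {"Enabled": True}}},
)
print(response["ClusterArn"])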
#include <iostream>
#include <random>
#include <chrono>
#include <vector>
#include <iomanip>
#include <cstdint>

using namespace std;

const uint32_t FLOAT32_EXPONENT_MASK = 0xC0880000; // -0x3F780000
import hashlib
import os

def generate_hash_files(directory):
    for root, _, filenames in os.walk(directory):
        for filename in filenames:
            if filename.endswith((".safetensors", ".ckpt", ".pt")):
                fpath = os.path.join(root, filename)
                if os.path.exists(fpath + ".sha256"):
                    print("Hash file already exists for", fpath, "- skipping")
                else:
                    print("Generating hash for", fpath)
                    # Hash the file in chunks so large checkpoints do not need to fit in memory.
                    sha256 = hashlib.sha256()
                    with open(fpath, "rb") as f:
                        for chunk in iter(lambda: f.read(1024 * 1024), b""):
                            sha256.update(chunk)
                    with open(fpath + ".sha256", "w") as f:
                        f.write(sha256.hexdigest())
# tl;dr: this requires specific versions of the C++ build tools and CUDA to work. Just follow the links and run the required commands in CMD.
# Install Visual Studio 2022
# with "MSVC 143 - VS 2022 x64/x86 build tools (v14.39-17.9)" and "Windows 11 SDK (10.0.22621.0)"
# Download CUDA 12.1 from https://developer.nvidia.com/cuda-12-1-0-download-archive
# Some pip dependencies will be built at runtime, so make sure you have the right compilers installed.
# To activate the build compilers, run the next line in CMD, changing "Community" to whatever Visual Studio edition you are using:
# CALL "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.39 10.0.22621.0
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  LambdaName:
    Type: String
    Description: "Required name for the Lambda function"
  CognitoUserPoolName:
    Type: String
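One way to launch this template is with boto3; the sketch below is hypothetical (stack name, template path and parameter values are placeholders). Because the template uses the Serverless transform, CloudFormation needs CAPABILITY_AUTO_EXPAND in addition to CAPABILITY_IAM.

import boto3

cloudformation = boto3.client("cloudformation")

with open("template.yaml") as f:   # placeholder path to the template above
    template_body = f.read()

cloudformation.create_stack(
    StackName="example-stack",     # placeholder name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "LambdaName", "ParameterValue": "my-function"},
        {"ParameterKey": "CognitoUserPoolName", "ParameterValue": "my-user-pool"},
    ],
    # IAM resources created by the template plus the Serverless transform itself.
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
)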
# winget install Spicetify.Spicetify
# Note: creating symbolic links on Windows requires an elevated PowerShell session (or Developer Mode enabled).
cd "$(spicetify -c | Split-Path)"
git clone "https://github.com/spicetify/spicetify-themes.git" --depth 1 spicetify-themes
cd spicetify-themes
Get-ChildItem -Directory | ForEach-Object {
    $destination = Join-Path -Path "$(spicetify -c | Split-Path)\Themes\" -ChildPath $_.Name
    New-Item -ItemType SymbolicLink -Path $destination -Target $_.FullName
}
List of valid properties to query for the switch "--query-gpu":
"timestamp"
    The timestamp of when the query was made in format "YYYY/MM/DD HH:MM:SS.msec".
"driver_version"
    The version of the installed NVIDIA display driver. This is an alphanumeric string.
Section about vgpu_driver_capability properties
Retrieves information about driver level caps.
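To pull the two properties documented above from a script, a query like the one sketched below should work (this example is mine, not from the original text, and assumes nvidia-smi is on PATH):

import subprocess

# Ask nvidia-smi for the documented properties, one CSV row per GPU.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=timestamp,driver_version", "--format=csv,noheader"],
    capture_output=True,
    text=True,
    check=True,
)
for line in result.stdout.strip().splitlines():
    timestamp, driver_version = [field.strip() for field in line.split(",")]
    print(timestamp, driver_version)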
import json

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

url = 'https://bedrock.us-east-1.amazonaws.com/model/amazon.titan-tg1-large/invoke'
headers = {'Content-Type': 'application/json'}
prompt = "Tell me about foundation models."  # placeholder prompt for illustration
data = {"inputText": prompt}

req = AWSRequest(method='POST', url=url, data=json.dumps(data), headers=headers)
# Sign the request with SigV4 using the default boto3 session credentials
# ('bedrock' is the signing service name for this endpoint), then send it.
credentials = boto3.Session().get_credentials()
SigV4Auth(credentials, 'bedrock', 'us-east-1').add_auth(req)
response = requests.post(url, headers=dict(req.headers), data=req.data)
print(response.status_code, response.text)
For role-based access from EKS, we can approach this in two ways. The easiest is to attach the required policy directly to the underlying node role (or the node group used in the cluster).
But this is not easily replicable for Fargate pods, and there is no way to control access to specific policies for specific namespaces or pods in the cluster.
The other way is through service accounts (IAM roles for service accounts, the recommended best practice for Kubernetes).
This does require the cluster's OIDC provider to be registered with the AWS account; for an EKS cluster this can be done easily with eksctl, as sketched below.
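For an EKS cluster, eksctl registers the provider in one step: eksctl utils associate-iam-oidc-provider --cluster <name> --approve. As a sketch (the cluster name is a placeholder), the snippet below uses boto3 to check whether the cluster's OIDC issuer is already registered as an identity provider in the account before running that command.

import boto3

cluster_name = "my-cluster"  # placeholder

# The cluster's OIDC issuer URL, e.g. https://oidc.eks.us-east-1.amazonaws.com/id/XXXX
eks = boto3.client("eks")
issuer = eks.describe_cluster(name=cluster_name)["cluster"]["identity"]["oidc"]["issuer"]

# IAM stores OIDC providers without the https:// prefix in the ARN.
iam = boto3.client("iam")
providers = iam.list_open_id_connect_providers()["OpenIDConnectProviderList"]
registered = any(issuer.replace("https://", "") in p["Arn"] for p in providers)
print("OIDC provider registered:", registered)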