
Here's the extracted OCR text from your PDF, "The Quantum Eraser":


The Quantum Eraser

1. Initial Superposition State

🟢 ChatGPT output (could be wrong; verify carefully)

To export SAP HANA data continuously in real time, you can use several methods, depending on the target system and the purpose. Here are some of the most common approaches:

1. Smart Data Integration (SDI)

  • Use Case: Real-time data replication and transformation.
  • How: SDI allows you to create real-time data replication tasks between SAP HANA and other systems. You can define data flows that continuously export data from HANA and send it to another system, such as another HANA instance or a non-HANA database.
  • Steps (a sketch of the subscription setup follows this list):
    1. Set up a Data Provisioning Agent.
    2. Configure the SDI connection to the target system.
  • ☝️ Check my Google Docs
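
As a rough illustration of where step 2 ends up, here is a minimal Python sketch that creates and activates an SDI remote subscription through the hdbcli driver. The host, credentials, object names, and the exact subscription SQL are assumptions for illustration, not taken from the note above:

# Sketch: start SDI real-time replication by creating a remote subscription (hdbcli: pip install hdbcli).
# All connection details and object names are placeholders; the subscription SQL is approximate.
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=30015, user="DP_USER", password="***")
cur = conn.cursor()
# Subscribe an existing SDI virtual table to a local target table.
cur.execute('CREATE REMOTE SUBSCRIPTION "SUB_ORDERS" ON "STAGING"."V_ORDERS" TARGET TABLE "STAGING"."ORDERS"')
cur.execute('ALTER REMOTE SUBSCRIPTION "SUB_ORDERS" QUEUE')       # begin capturing changes
cur.execute('ALTER REMOTE SUBSCRIPTION "SUB_ORDERS" DISTRIBUTE')  # apply initial load, then deltas
conn.close()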


🟢⚠️ It is hard to roslaunch the Gazebo world that I created via udacity_office.launch.

⚠️ Issue

(awsmle_py310) PS D:\github\udacity-cd13926-Building-Apps-Amazon-Bedrock-exercises\Experiments> aws bedrock invoke-model `
>> --model-id anthropic.claude-3-5-sonnet-20240620-v1:0 `
>> --body file://claude_input.json output.json

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
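
A likely cause of this usage error is that InvokeModel lives under the bedrock-runtime service (not bedrock) in the AWS CLI. For comparison, here is a minimal boto3 sketch of the same call; the region and request body fields follow the Anthropic Messages format and are assumptions, not the contents of claude_input.json:

# Sketch: call Claude 3.5 Sonnet on Bedrock via boto3 (region and body are placeholders).
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, Claude."}],
})
response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=body,
)
print(json.loads(response["body"].read()))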

✅✅✅ My working code: Create a WebDataset from local data files and write it to local .tar files

## example code for webdataset
import webdataset as wds
import io
import json
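
For reference, a minimal sketch of writing local samples into .tar shards with webdataset's ShardWriter; the shard pattern, file list, and sample fields are placeholders, not the original code:

# Sketch: pack local (image, label) pairs into WebDataset .tar shards.
import json
import webdataset as wds

samples = [("img_000.jpg", 0), ("img_001.jpg", 1)]  # placeholder file list
with wds.ShardWriter("shards/data-%06d.tar", maxcount=1000) as sink:
    for idx, (path, label) in enumerate(samples):
        with open(path, "rb") as f:
            image_bytes = f.read()
        sink.write({
            "__key__": f"sample{idx:06d}",
            "jpg": image_bytes,                    # raw bytes are stored as-is
            "json": json.dumps({"label": label}),  # metadata as JSON
        })
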
  • ⚠️🟢 Issue: training error
[1,mpirank:0,algo-1]<stderr>:../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
[1,mpirank:0,algo-1]<stderr>:../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
[1,mpirank:0,algo-1]<stderr>:../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
...
[1,mpirank:1,algo-2]<stdout>:  File "train.py", line 675, in <module>
[1,mpirank:1,algo-2]<stdout>:    main(task)
[1,mpirank:1,algo-2]<stdout>:  File "train.py", line 572, in main
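
The `t >= 0 && t < n_classes` assertion in nll_loss typically means a target label lies outside the range the loss layer expects. A small sanity check like the sketch below (num_classes and the labels tensor are placeholders) can catch this before the CUDA kernel asserts:

# Sketch: verify labels are within [0, num_classes) before computing cross-entropy/NLL loss.
import torch

num_classes = 10                      # placeholder: output dimension of the model
labels = torch.tensor([0, 3, 9])      # placeholder: a value like 12 here would trip the check
assert labels.min() >= 0 and labels.max() < num_classes, (
    f"labels out of range: min={int(labels.min())}, max={int(labels.max())}, "
    f"num_classes={num_classes}"
)
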
  • Uninstall all VS Code extensions
    Delete C:\Users\*\.vscode\extensions folder
    Reinstall extensions

  • Remove Jupyter kernels

(base) PS D:\github\udacity-nd009t-capstone-starter> jupyter kernelspec list
Available kernels:
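
To remove one of the listed kernels, a short sketch using jupyter_client; the kernel name is a placeholder, and the usual CLI equivalent is `jupyter kernelspec remove <name>`:

# Sketch: list and remove a Jupyter kernel spec (kernel name is a placeholder).
from jupyter_client.kernelspec import KernelSpecManager

ksm = KernelSpecManager()
print(ksm.find_kernel_specs())     # {name: path} for all installed kernels
ksm.remove_kernel_spec("old_env")  # same effect as `jupyter kernelspec remove old_env`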

To apply distributed training with the AWS SageMaker Linear Learner algorithm, you typically rely on SageMaker's built-in distributed training capabilities. The algorithm scales across multiple instances and can use multiple GPUs or CPU cores.

How to Apply Distributed Training for Linear Learner Algorithm in SageMaker

1. Using SageMaker Pre-built Containers with Distributed Training

The SageMaker Linear Learner algorithm offers a straightforward way to use distributed training across multiple instances: set the instance_count parameter to a value greater than 1, as in the sketch below.
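
A minimal sketch with the SageMaker Python SDK; the role ARN, S3 paths, instance type, and hyperparameters are placeholders:

# Sketch: distributed Linear Learner training by setting instance_count > 1.
# The role ARN, S3 paths, instance type, and hyperparameters are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
image = image_uris.retrieve("linear-learner", session.boto_region_name)

estimator = Estimator(
    image_uri=image,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    instance_count=2,                    # >1 distributes training across instances
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/linear-learner/output",            # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(predictor_type="binary_classifier", mini_batch_size=200)
# estimator.fit({"train": "s3://my-bucket/linear-learner/train"})  # placeholder channel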

Steps: