Ceph Object Storage user management is about the users who access the Ceph Object Storage service, not about administering the Ceph Object Gateway itself. To allow end users to interact with Ceph Object Gateway services, create a user along with an access key and secret key. Users can also be organized into accounts for easier management.
# Set this alias once per shell session; it maps radosgw-admin to the MicroCeph snap binary
alias radosgw-admin="microceph.radosgw-admin"
# Create an admin user
sudo radosgw-admin user create --uid=rgw-admin-ops-user --display-name="RGW Admin Ops User" --caps="buckets=*;users=*;usage=read;metadata=read;zone=read" --rgw-zonegroup=default --rgw-zone=default
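The access key and secret key are printed in the JSON output of the create command. If you didn't note them down, you can print them again at any time; the sketch below assumes the same alias and user ID as above.
# Print the user's details again, including the generated access and secret keys
sudo radosgw-admin user info --uid=rgw-admin-ops-user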

This can be done from any system within your private network boundary, or from outside once you have reached it through something like a bastion host. You can use any operating system that the AWS CLI supports. For example, I am going to do this from my personal Mac, and this Ceph cluster is deployed on my home's private network.
- Install the AWS CLI on the operating system of your choice, following the official docs.

- Configure the AWS CLI using your Object Gateway endpoint
# First, find the IP address of the node hosting Ceph's RGW (Object Gateway) service
# This can be done from any node within the Ceph cluster, or from the dashboard (if you enabled it)
sudo microceph.ceph status
sudo microceph status
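Before configuring the CLI, it's worth a quick reachability check from the machine you'll be working on; the placeholder below is the gateway IP you just found, and the gateway should answer with a short XML response rather than a connection error.
# Optional: confirm the gateway is reachable from this machine
curl http://<IP-Address-of-Your-Object-Gateway>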

# Next, we configure the AWS CLI on any machine that can reach this gateway within the network
aws --profile rgw-admin-ops-user configure set endpoint_url http://<IP-Address-of-Your-Object-Gateway>
aws --profile rgw-admin-ops-user configure
AWS Access Key ID [None]: {access key from the rgw-admin-ops-user create output}
AWS Secret Access Key [None]: {secret key from the rgw-admin-ops-user create output}
Default region name [None]: default
Default output format [None]: json
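If you want to double-check the profile, aws configure get can read a stored value back; this should print the endpoint URL you set above.
# Verify the endpoint URL stored for this profile
aws --profile rgw-admin-ops-user configure get endpoint_url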
- Let's create a bucket and upload some objects
# Let's check for existing buckets; this shouldn't return anything yet
aws --profile rgw-admin-ops-user s3 ls
# Let's create an S3 bucket and validate
aws --profile rgw-admin-ops-user s3api create-bucket --bucket bucket-test
aws --profile rgw-admin-ops-user s3 ls
# Let's upload some files from a desired directory
cd <desired-directory>
aws --profile rgw-admin-ops-user s3 cp ./ s3://bucket-test/ --recursive
# Let's validate that the data was uploaded successfully
aws --profile rgw-admin-ops-user s3 ls s3://bucket-test
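For a full round trip, you can also copy one of the objects back down; the object key below is just a placeholder for any file you uploaded.
# Download a single object back from the bucket (replace the placeholder with a real object key)
aws --profile rgw-admin-ops-user s3 cp s3://bucket-test/<one-of-your-files> ./restored-copy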

And voilà! You have a functional S3 API endpoint that you can use with the AWS CLI.
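When you're done experimenting, you can remove the test bucket; the --force flag deletes every object in it first, so use it carefully.
# Clean up: delete the test bucket and all of its contents
aws --profile rgw-admin-ops-user s3 rb s3://bucket-test --force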