Here's how I fixed the pygeoprocessing 2.3.3 release (with much help from James):
The problem: I had uploaded an invalid sdist tarball to the 2.3.3 release on GitHub and PyPI.
I created a post release with the correct sdist file. I repeated the entire release process with the version number 2.3.3.post0: updating HISTORY.rst, PRing that into main, pushing the new tag, waiting for artifacts to build, and uploading them to GitHub and PyPI.
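For reference, the tag-and-upload steps look roughly like this. This is a sketch rather than the exact commands I ran: the remote name upstream and the dist/ directory are assumptions.

git tag 2.3.3.post0
git push upstream 2.3.3.post0  # pushing the tag kicks off the distribution workflow
# once the built artifacts have been downloaded into dist/ (see the script at the end):
twine upload dist/*  # upload to PyPI
gh release create 2.3.3.post0 dist/*  # create the GitHub release with the same artifacts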
I then yanked the 2.3.3 release on PyPI. This seems to be the best practice when a release is broken: see PEP 592.
I left the 2.3.3 release as it was on GitHub because 2.3.3 still exists on PyPI, and it seems nice to keep the GitHub and PyPI releases in sync.
NOTE: I didn't think I would need to update HISTORY.rst with a new section for 2.3.3.post0 since no code changed. But there's a bug in setuptools_scm where it uses the earliest tag at a given revision. So with 2.3.3 and 2.3.3.post0 pointing at the same commit, 2.3.3 would be used as the version number in the artifact file names and metadata. I needed to make a new commit for 2.3.3.post0 to point to, and HISTORY.rst is the obvious thing to change in that case.
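A quick way to check which version setuptools_scm will compute, before building anything (run from the repository root, with setuptools_scm installed):

python -m setuptools_scm  # prints the version derived from the git tags at HEAD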
This was a good use case for a post release because the code was not affected. Someone who installed pygeoprocessing 2.3.3 from one of the wheels (which were unaffected) would have the exact same pygeoprocessing implementation as someone who installed pygeoprocessing 2.3.3.post0. So there's no issue with reproducibility.
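The effect on installers is worth spelling out (behavior per PEP 592; exact output varies by pip version):

pip install pygeoprocessing          # skips the yanked 2.3.3 and resolves to 2.3.3.post0
pip install pygeoprocessing==2.3.3   # an exact pin still matches the yanked release, with a warning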
If the release issue did affect the code, and couldn't wait until the next release, the steps would be basically the same. It would just be a normal bugfix release instead of a post release. And you would need to create a new commit to fix the broken code, regardless of the setuptools_scm issue.
https://www.python.org/dev/peps/pep-0440/#post-releases
https://snarky.ca/what-to-do-when-you-botch-a-release-on-pypi/
It would be lovely to automate this process. Here's a start, using the GitHub CLI to download artifacts:
ARTIFACTS_DIR=2.3.3.post1-artifacts/
mkdir "$ARTIFACTS_DIR"

# Get the ID of the most recent "Python distributions" workflow run.
# The run ID's position in gh's tab-separated output can vary between
# gh versions, so the awk field may need adjusting.
RUN_ID=$(gh run list --repo natcap/pygeoprocessing --workflow "Python distributions" --limit 1 | awk -F "\t" '{print $7}')
gh run download "$RUN_ID" --dir "$ARTIFACTS_DIR"

cd "$ARTIFACTS_DIR"
mv */* .  # downloaded artifacts are nested within directories but not zipped
rm -r ./*/  # remove the now-empty directories
twine check *  # sanity-check the distributions before uploading
twine upload -r testpypi *  # upload all artifacts to TestPyPI
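Note that --limit 1 just grabs the most recent run of that workflow, so it's worth checking that the run actually corresponds to the new tag before uploading anywhere beyond TestPyPI.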