I've taken some time to look into Bevy's binary size, especially the somewhat absurd 40+ MB of strings that are part of release binaries (I used bloaty and cargo-bloat to check the general state of what's big).
- panic_immediate_abort + build_std: doesn't help much at all, even though I noticed a lot of type names when paging through the strings
- codegen-units = 1: doesn't help much either
- A very rough DIY bisect led me to the conclusion that there isn't a single cause; binary sizes have just grown steadily since at least 0.15 (I didn't check further back).
- the bevy_utils/debug feature doesn't help much either, only a few MB
Finally, to find where the strings are coming from, I wrote a Python script that uses radare2 to disassemble an arbitrary binary and outputs a speedscope.app-compatible file showing each function that references string data, weighted by the total size of the strings it references, attempting to split the type names into flamegraph levels. This only picks up on ~11 MB of strings, likely because radare2 can't find the functions referencing the rest for one reason or another, but it still provides some interesting insights. From paging through `strings <binary>` output, it seems that type names are the likely culprit, specifically ones not caught by bevyengine/bevy#19558 (a feature to enable/disable full type names in some places). Either that, or things I wasn't able to identify.
The script works better if debug info is included.
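To illustrate the flamegraph-splitting step: here's a rough sketch (not the actual script; `split_path`, `to_collapsed`, and the input shape are made up for this example) of how a Rust symbol path can be split into flamegraph levels and emitted in the collapsed-stack format that speedscope accepts:

```python
from collections import defaultdict

def split_path(name: str) -> list[str]:
    """Split a Rust symbol path on '::', ignoring '::' inside generic <...>."""
    parts, cur, depth, i = [], [], 0, 0
    while i < len(name):
        c = name[i]
        if c == "<":
            depth += 1
        elif c == ">":
            depth -= 1
        if depth == 0 and name.startswith("::", i):
            parts.append("".join(cur))
            cur = []
            i += 2
            continue
        cur.append(c)
        i += 1
    parts.append("".join(cur))
    return parts

def to_collapsed(rows: list[tuple[str, int]]) -> list[str]:
    """rows: (function symbol, bytes of string data it references) pairs.
    Produces collapsed-stack lines: ';'-joined frames, a space, then the
    weight. speedscope sums identical stacks in its "Left Heavy" view."""
    totals: dict[str, int] = defaultdict(int)
    for symbol, nbytes in rows:
        totals[";".join(split_path(symbol))] += nbytes
    return [f"{stack} {n}" for stack, n in sorted(totals.items())]
```

This way `core::fmt::<impl core::fmt::Debug for u32>::fmt` becomes four frames instead of being split inside the generic brackets, and functions sharing a module prefix aggregate under the same flamegraph span.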
Prerequisites for the script: uv and radare2
Usage:
```shell
chmod +x ./bevy-size.py
./bevy-size.py ./bevy/target/release/examples/3d_scene
```
Then drag the resulting "collapsedstack.txt" (collapsed stack format) file onto https://www.speedscope.app/ and select "Left Heavy" at the top (to get it to sum up spans).
Note: the script may overestimate the impact of a single string if it's referenced from multiple places. It's probably inaccurate in other ways too, like not finding every location where a string is used.