In C++, std::vector<bool> does pretty much the same thing under the hood.
Sadly, the C++ standard library API designers made it impossible to directly access the array of integers stored inside std::vector<bool>. This makes serialization/deserialization of these vectors very inefficient, sometimes prohibitively so.
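For what it's worth, here's a minimal sketch of the bit-by-bit repacking you end up writing; the `serialize` helper and its byte layout are my own, not anything the standard provides:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: std::vector<bool> hides its packed words, so serialization has to
// re-pack the bits one at a time instead of memcpy'ing the underlying buffer.
std::vector<std::uint8_t> serialize(const std::vector<bool>& flags) {
    std::vector<std::uint8_t> bytes((flags.size() + 7) / 8, 0);
    for (std::size_t i = 0; i < flags.size(); ++i)
        if (flags[i])
            bytes[i / 8] |= std::uint8_t(1u << (i % 8));
    return bytes; // a contiguous byte vector can then be written out in one call
}
```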
Reminds me of the extensive use of bitsets in Starfox Adventures. Not only was object spawning tied to flags, many dynamic effects were too, so if you prodded the flags in memory, a lot of funky things would often happen, like explosions getting triggered and cutscenes playing.
How's this notable? It’s literally the most straightforward, trivial bitset implementation.
> The most novel aspect of OoT bitsets is that the first 4 bits in the 16-bit coordinate IDs index which bit is set.
Am I the only one who had trouble parsing this sentence correctly?
I used to work at a company doing parking software, based on a Borland C++ Builder 6 codebase. Up until 2012, when I left, we were using bit flags to indicate configuration settings. We would store these settings in a config file as binary and read them back when the application started up. We stored a ton of configuration in those flags, and it was easy to add more.
But in retrospect, it wasn't great if the disk or installation somehow got corrupted: you would lose your entire configuration, and parking operators do not have the best IT.
This way of storing configuration was designed back in the 90s at that company and survived over 20 years. It predates this Zelda game, which I never played.
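Roughly like this, if anyone's curious; the flag names and file name below are made up for illustration, and the real Builder code looked different:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical version of the scheme described above: each setting is one bit
// in a fixed-width word, dumped to disk as raw binary and read back at startup.
enum ConfigFlags : std::uint32_t {
    kUseMetricUnits = 1u << 0,
    kPrintReceipts  = 1u << 1,
    kNightTariff    = 1u << 2,
    // adding a setting is just claiming the next free bit
};

void saveConfig(std::uint32_t flags) {
    if (std::FILE* f = std::fopen("parking.cfg", "wb")) {
        std::fwrite(&flags, sizeof flags, 1, f); // one corrupt word loses everything
        std::fclose(f);
    }
}

std::uint32_t loadConfig() {
    std::uint32_t flags = 0;
    if (std::FILE* f = std::fopen("parking.cfg", "rb")) {
        std::fread(&flags, sizeof flags, 1, f);
        std::fclose(f);
    }
    return flags;
}
```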
How is this in any way novel? People have been doing this since the dawn of time.
Darn. By "compact", I thought there'd be some clever sparsity or compression to get below 1 bit/flag in common cases.
This seems quite nice. Simple bit-shifting of the 16-bit ID gives both the index into the array of 16-bit words and the bit within the word, and allows up to 65536 bitflags. This makes for a very efficient, easy-to-implement bitflag set/check system given these constraints.
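Something like this, I'd guess (names mine, not from the decompilation):

```cpp
#include <cstdint>

// Sketch of the scheme as described: the low 4 bits of a 16-bit ID pick the
// bit, the high 12 bits pick the word, giving 4096 * 16 = 65536 flags.
static std::uint16_t gFlags[4096];

void setFlag(std::uint16_t id) {
    gFlags[id >> 4] |= std::uint16_t(1u << (id & 0xF));
}

bool checkFlag(std::uint16_t id) {
    return (gFlags[id >> 4] >> (id & 0xF)) & 1u;
}
```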
It is hard to understand how it works from the explanation, but it seems to be useful only for really sparse bitmaps. In the general case, I don't see how it's better than Roaring bitmaps.
I love this one! Thanks for sharing.
It's not about the bitset itself. It's about how to organize and think about your flags.
The small visualization grid is fantastic for debugging, and the `word:bit` structure lends itself directly to organizing your data in categories and displaying them as such.
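Agreed. A quick-and-dirty dump along those lines (my own throwaway, not the article's visualizer):

```cpp
#include <cstdint>
#include <cstdio>

// One row per 16-bit word: the row label is the word index in hex, so a
// flag's word:bit coordinates can be read straight off the grid.
void dumpFlags(const std::uint16_t* words, unsigned count) {
    for (unsigned w = 0; w < count; ++w) {
        std::printf("%03X: ", w);
        for (int b = 15; b >= 0; --b)          // print bit 15 down to bit 0
            std::putchar((words[w] >> b) & 1 ? '#' : '.');
        std::putchar('\n');
    }
}
```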
OpenAI o3's writing style is quite distinct :)
So there is no compression?
> The most novel aspect of OoT bitsets is that the first 4 bits in the 16-bit coordinate IDs index which bit is set. For example, 0xA5 shows that the 5th bit is set in the 10th word of the array of 16-bit integers. This only works in the 16-bit representation! 32 bit words would need 5 bits to index the bit, which wouldn't map cleanly to a nibble for debugging.
There is nothing novel about this, really. It's neat that it works with hexadecimal printing so you can directly read off the sub-limb index, but honestly, who cares about that.
Outside of that observation, there's no advantage to 16-bit limbs, and this is just a bog-standard bitset where the first k bits indicate the position within the 2^k-bit limb and the remaining bits give you the limb index.
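For concreteness, decoding the article's 0xA5 example (a trivial snippet, just to spell out the nibble alignment):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::uint16_t id = 0x00A5;
    // word 0xA (10), bit 0x5 (5): both values are visible in the hex ID itself
    std::printf("word %d, bit %d\n", id >> 4, id & 0xF);
    // With 32-bit limbs the bit index would need 5 bits (id & 0x1F), which
    // straddles a nibble boundary, so the hex digits no longer line up.
    return 0;
}
```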