Description
(This is to open room for discussion)
General context
So far, NI and NF have to be null or positive. This makes NI determine the range of a number (how far it can go) and NF its precision (how many values between 0 and 1). This privileges numbers which are close to 1: very small numbers end up with a lot of padding zeros on the left, and very big numbers end up with a lot of padding zeros on the right.
When looking at the algebra of fixed-point arithmetic, NF is actually the scaling factor: how far the raw value has to be shifted to put the radix point in the right place. So far, we have limited this scaling factor to a minimum of 0, which is what is standardly done, but why not allow it to go below? The same goes for NI: 16 can be represented as 0b10000, but do we need these padding zeros? Could we say that it is 0b1 shifted 4 times to the left?
Idea
I think it could be very interesting to go in that direction, first for generality, and second to be able to represent arbitrarily small/big numbers with a low number of bits (this is actually what is done in floating point, to an extent…). Here is how that would behave:
UFix<18,-2> would be able to represent numbers up to 262143, but only with 65536 distinct values; the internal container would of course be 16 bits. UFix<-2,18> would be able to represent numbers from 0 to 2^-2 = 0.25 with 65536 distinct values (to compare with 0 to 1 with 65536 values for UFix<0,16>).
I think this could be useful to have, especially as this is a trick I had to resort to in one of the Mozzi examples (the .sR<8>): I scaled things up too much there for optimization purposes and scaled them back after the computations.
I genuinely think that most of the code is already compliant. What remains could create a few knots, though…
Am I overthinking it?