I am trying to implement my own version of the Random class from the Python standard library. I can generate random bits and have implemented getrandbits(n), but the superclass does not use that function to compute the float returned by random(), so I have to implement that myself:
import struct

def random(self):
    # sign = 0, exponent = 1023 (the biased encoding of 2^0),
    # mantissa = 52 random bits -> a double in [1.0, 2.0)
    exp = 0x3FF0000000000000
    mant = self.getrandbits(52)
    return struct.unpack("d", struct.pack("Q", exp | mant))[0] - 1.0
I am using a sign of 0 (positive), an exponent of 1023 (biased, i.e. 2^0 = 1), and a random mantissa, so I get a number in [1.0, 2.0). The random() function must return a number in [0.0, 1.0), so I subtract 1.0 before returning. As I'm not an expert on floating-point numbers, I'm not sure this is done the right way. Don't I lose precision by subtracting? Can I build the number from random bits so that it's in [0.0, 1.0) without the subtraction?
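For comparison, the construction I have seen elsewhere (and which, as far as I know, is essentially what CPython's C implementation of random() does) avoids the bit packing entirely: take 53 random bits and scale them by 2^-53. A minimal sketch, assuming a Random subclass whose getrandbits works (here I just inherit the built-in one for testing; the class name MyRandom is made up):

    import random

    class MyRandom(random.Random):
        def random(self):
            # 53 bits is the full precision of a double's significand
            # (52 stored bits plus the implicit leading 1), so this hits
            # every multiple of 2**-53 in [0.0, 1.0).
            return self.getrandbits(53) * 2.0 ** -53

    r = MyRandom()
    samples = [r.random() for _ in range(1000)]
    assert all(0.0 <= x < 1.0 for x in samples)

The multiplication by the exact power of two 2**-53 only rescales the integer, so no rounding happens and no subtraction is needed.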