Camera makers and film processors have been dealing with this topic literally since film was invented and people stopped processing it themselves. We have evolved from ambient-light metering with solar (selenium) cells to incredibly complex databases of exposures, recorded and analyzed by multi-point sensor arrays during a "pre-flash" pulse.
And yet … we still end up with hosed shots.
With most modern light meters, metering is really about "beautifying" the image. They compensate for exposure deficits and poor lighting conditions by actively changing the gain, saturation, and offset bias for different parts of the frame. These are not so much measurements as interpretations of the collected photons, but still.
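As a rough illustration of that per-region "interpretation" idea, here is a minimal sketch, assuming a simple 2-D luminance map with values in 0..1. The function name, gains, and threshold are illustrative assumptions, not any camera's actual pipeline:

```python
# Illustrative sketch: apply different gain/offset corrections to dark
# vs. bright regions of a luminance map, the kind of per-region
# "interpretation" a modern meter/processor performs. All parameter
# values here are hypothetical.
def correct_regions(image, shadow_gain=1.4, highlight_gain=0.9,
                    offset=0.02, threshold=0.5):
    """Boost shadows and tame highlights in a 2-D luminance array (0..1)."""
    out = []
    for row in image:
        new_row = []
        for v in row:
            # Pick a gain depending on whether the pixel reads as
            # shadow or highlight, then clamp back into 0..1.
            gain = shadow_gain if v < threshold else highlight_gain
            new_row.append(min(1.0, max(0.0, v * gain + offset)))
        out.append(new_row)
    return out
```

Real pipelines do this far more gradually (smooth tone curves, local contrast), but the principle is the same: different parts of the image get different corrections.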
There was a concept called "paxelization" that I never heard used outside of Kodak. It was comparable to the downscaled images used in modern machine learning: the paxelized image was tiny, but it was fed to various algorithms to predict the ideal exposure (for film and digital alike).
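The idea can be sketched in a few lines. This is a hypothetical reconstruction, not Kodak's actual algorithm: downsample the scene into a small grid of luminance averages ("paxels"), then derive an exposure suggestion from it. The 18% grey target and the EV formula are common metering conventions assumed for illustration:

```python
import math

def paxelize(image, grid=(6, 8)):
    """Average an HxW luminance array (0..1) into a small grid of paxels."""
    h, w = len(image), len(image[0])
    rows, cols = grid
    paxels = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Boundaries of this paxel's block in the full image.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [image[y][x] for y in range(y0, y1)
                                 for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        paxels.append(row)
    return paxels

def exposure_bias(paxels, target=0.18):
    """Suggest an EV correction pulling mean paxel luminance toward
    18% grey (a common metering target; positive = open up)."""
    flat = [v for row in paxels for v in row]
    mean = sum(flat) / len(flat)
    return math.log2(target / mean)
```

For example, a scene averaging twice the target luminance (0.36) yields a bias of -1 EV, i.e. stop down one stop. Real algorithms weight the paxels (center-weighted, matrix patterns) rather than taking a flat mean.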
Yes, cameras do far more than you might expect, and so does the software that renders your image or prints your film (digitally or optically). It's really quite amazing.
I'm shortchanging the answer here, because entire books have been written about metering and printing. If you are interested in photography, read Ansel Adams's books (two of the three): The Camera and The Negative. The Print is also useful and goes into detail on applying what you've learned, but it is less relevant to your interest in metering systems.