> I am going for possibly feasible, or definitely infeasible. If ten samples might be enough, we have an answer. If fewer than a million will never be enough, then we have an answer.

Put this down as **definitely infeasible**, then (at least unless you come up with a very accurate, extremely lossy, *very* clever mathematical “flattening” of the fingerprint scans).

Fingerprint scanning is an analog process: it’s essentially a (potentially very poor quality) **photograph**, and according to some website, these scans may reasonably be as small as 96×96 pixels.

If we take the coarse assumptions that each of these pixels can be reduced to a bit depth of around 3 (that is, around 8 possible brightness levels), that 95% of a 96×96px scan’s pixels are fixed by a finger (or fix-able by your mathematical analysis), and that the finger will only move up to ±10px in each of the X and Y axes, that gives you roughly 441 × 8^461 ≈ 10^419 possible images that a given fingerprint might produce (21 × 21 = 441 possible offsets, times 8 brightness levels for each of the ~461 unfixed pixels).
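A quick back-of-the-envelope check of that count, using only the coarse assumptions above (all of them guesses, not measurements):

```python
import math

# Rough count of distinct images one fingerprint could produce,
# under the assumptions stated in the text.
side = 96                       # 96x96-pixel scan
pixels = side * side            # 9216 pixels total
levels = 2 ** 3                 # ~3 bits of usable depth -> 8 brightness levels
free = round(pixels * 0.05)     # the 5% of pixels NOT fixed by the finger -> 461
shifts = (2 * 10 + 1) ** 2      # drift of up to +/-10 px in X and Y -> 441 offsets

possible = shifts * levels ** free
print(f"~10^{math.log10(possible):.0f} possible images")  # ~10^419 possible images
```

However generous you make the "fixed by a finger" fraction, the count stays astronomically beyond anything you could sample or enumerate.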

If you want to find a way to hash “fingerprints”, you will *have* to find a serious analytic reduction of them based on pivotal attributes, and go about hashing that *mathematical characterization* of each instead.
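One hypothetical shape such a reduction might take, assuming you could already extract minutiae points (ridge endings and bifurcations) from a scan; the function name, grid size, and angle buckets below are invented purely for illustration:

```python
import hashlib
from typing import List, Tuple

# Hypothetical sketch: reduce a print to a small set of minutiae,
# quantize them coarsely so small scanner noise maps to the same
# values, then hash that canonical characterization, not the image.
Minutia = Tuple[float, float, float]  # (x, y, ridge angle in degrees)

def characterize(minutiae: List[Minutia],
                 grid: float = 8.0, angle_step: float = 30.0) -> bytes:
    buckets = 360 // int(angle_step)
    # Quantize each point onto a coarse position grid and angle bucket...
    quantized = sorted(
        (round(x / grid), round(y / grid), round(theta / angle_step) % buckets)
        for x, y, theta in minutiae
    )
    # ...and hash the canonical (sorted) list of quantized tuples.
    return hashlib.sha256(repr(quantized).encode()).digest()
```

Even this sketch shows why the problem is hard: two readings hash identically only if *every* quantized value lands in the same bucket, and any point sitting near a bucket boundary will flip on noise alone.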