Skull giver@popplesburger.hilciferous.nl · 8 months ago

Like DNA, fingerprint analysis isn't guaranteed to have an infinite number of patterns. It doesn't need to, either: it all comes down to probabilities. Fingerprints are made up of a limited number of cells that can be arranged in a limited number of ways, so by no mathematical definition will there ever be an infinite number of fingerprints.

For example, depending on the quality of a DNA sample, the probability of someone innocent matching a criminal's DNA is never 0. Not every gene is sampled: the lengths of certain sequences vary from person to person, and a small set of those spots is already enough to determine ancestry and pretty much uniquely identify someone, so there's no need to check the rest of the DNA. The probability of two (non-twin) siblings sharing all of the exact same DNA properties is abysmally small, let alone the probability for some random person who has nothing to do with the case.
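
To make the "small set of spots" point concrete, here's a toy Python calculation. The 13-locus setup mirrors how STR profiling multiplies per-locus frequencies together, but every number below is invented for illustration:

```python
# Toy model: random match probability for an STR-style DNA profile.
# Real profiles multiply measured genotype frequencies per locus (assuming
# the loci are inherited independently); the frequencies below are invented.

per_locus_match = [0.08, 0.11, 0.05, 0.09, 0.07, 0.10, 0.06,
                   0.08, 0.12, 0.05, 0.09, 0.07, 0.10]  # 13 hypothetical loci

rmp = 1.0
for p in per_locus_match:
    rmp *= p

print(f"random match probability: about 1 in {1 / rmp:,.0f}")
# Per-locus odds of roughly 1 in 10 compound across 13 loci into an
# astronomically small number -- and a degraded sample with fewer usable
# loci is correspondingly far weaker evidence.
```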

With a shitty DNA sample, no means, no motive, and an alibi, even DNA evidence shouldn't be enough to convict someone (though unfortunately many don't realise DNA evidence is fallible).

With fingerprints, the story is the same, except twins have unique identifiers too. Do you need provably unique fingerprints when the chance of two people sharing the same print is one in 64 billion, and the suspect also happened to be at the crime scene on the night of the crime? I'd argue that the combination should be enough. This also depends on the quality of the fingerprints found, of course; there are certain standard features that tend to get analysed (little loops and such), and only when enough of those features are present can you actually start matching fingerprints to people.
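
A quick sketch of why the combination matters, under assumed numbers: how much a 1-in-64-billion match proves depends on how many people could plausibly have left the print. The pool sizes below are assumptions for illustration, and the model assumes the true source is somewhere in the pool:

```python
# Bayesian update: posterior = prior / (prior + (1 - prior) * p_match),
# treating the suspect as one of `pool_size` equally plausible sources.

def posterior_guilt(pool_size: int, p_match: float) -> float:
    prior = 1 / pool_size
    return prior / (prior + (1 - prior) * p_match)

p = 1 / 64e9  # chance of a coincidental fingerprint match

# A match alone, weighed against everyone on Earth:
print(posterior_guilt(8_000_000_000, p))  # ~0.889 -- strong, but not proof
# A match plus evidence placing the suspect at the scene, shrinking the
# plausible pool to, say, 1,000 people:
print(posterior_guilt(1_000, p))          # ~0.99999998
```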

In practice, not every print recorded at a crime scene will have enough of those features; you won't always get the perfect 12+ features per print that you'd need for a "perfect" match. The fewer features recorded, the less conclusive the evidence becomes.
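
A toy model of that drop-off: the per-feature probability below is a made-up constant, tuned so that 12 matched features land near the 1-in-64-billion figure, and real minutiae are not independent of each other, so treat this as a sketch of the trend only:

```python
# Illustrative only: treat each matched feature as an independent coincidence
# with per-feature probability q. q = 0.126 is chosen so 12 features come out
# near 1 in 64 billion; it is not a measured forensic statistic.
q = 0.126

for features in (12, 10, 8, 6):
    p_chance = q ** features
    print(f"{features:>2} features matched -> ~1 in {1 / p_chance:,.0f} by chance")
# 12 features: ~1 in 62 billion; 6 features: ~1 in 250,000 -- dropping half
# the features moves the odds from forensic-grade toward phone-scanner range.
```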

For technological purposes, like fingerprint scanners, this is irrelevant; the system can simply reject you until enough features are recorded to do a proper comparison. Tech can also choose to match on fewer features, because the required tolerance differs between "locked up for life for a crime you didn't commit" and "a friend unlocks your phone". Apple, for example, estimates the probability of bypassing its fingerprint reader with an "identical" print at one in 50,000; that's awfully imprecise for criminal investigations, but good enough for consumer electronics.
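
Taking that published 1-in-50,000 figure at face value, the gap between the two tolerances is easy to put in numbers:

```python
# Probability of at least one false accept over n tries: 1 - (1 - p)^n.
# (Touch ID also limits failed attempts before demanding a passcode,
# which keeps an attacker's n small.)
p_consumer = 1 / 50_000   # Apple's published per-finger estimate
p_forensic = 1 / 64e9     # the forensic-grade estimate above

for attempts in (1, 5):
    risk = 1 - (1 - p_consumer) ** attempts
    print(f"{attempts} tries: {risk:.4%} chance of a false accept")

print(f"consumer tolerance is ~{p_consumer / p_forensic:,.0f}x looser")
```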

The thing is, we haven't really found proof of two unrelated people with the same fingerprints, and we haven't found many people with duplicate DNA properties either. Perhaps we will, if large governments start sharing databases, but even then any coincidental match will probably be between people from entirely different continents.
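
Some back-of-the-envelope math on what merged databases might surface, reusing the 1-in-64-billion figure and assuming matches between pairs of prints are independent (a big simplification):

```python
# Birthday-problem estimate: with n prints on file and probability p that two
# random prints match, the expected number of coincidental matching pairs is
# roughly n * (n - 1) / 2 * p. Independence is assumed, so this is only an
# order-of-magnitude sketch.
p = 1 / 64e9

for n in (100_000, 1_000_000, 8_000_000_000):
    pairs = n * (n - 1) / 2
    print(f"{n:>13,} prints -> ~{pairs * p:,.1f} expected chance matches")
# Small databases expect essentially none; at global scale, the number of
# pairs grows quadratically and chance matches become a statistical certainty.
```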

The latest research pokes a hole in the theory that the prints of different fingers on the same hand are guaranteed to be unique. Generally, the estimate seems to be that the probability of an equal "fingerprint" under current techniques is about one in 64 billion.

As far as we know today, the probability of someone having the exact same fingerprints (i.e. ignoring the features we use for identification and comparing the individual curves as well) is too small to fathom. If fingerprint matching based on features alone ever proves to be flawed, we can fall back to (harder-to-obtain) picture-perfect matches.