Is Google’s AI-driven image resizing algorithm ‘dishonest’?

The Stack reports on Google’s “new research into upscaling low-resolution images using machine learning to ‘fill in’ the missing details,” arguing this is “a questionable stance…continuing to propagate the idea that images contain some kind of abstract ‘DNA’, and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow.”

“Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling… This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It’s notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.”
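To make the quoted description concrete: the core RAISR idea is to learn small filters from pairs of low- and high-resolution training photos, bucketed by local image structure, and apply them on top of a cheap interpolation. The sketch below is a heavily simplified illustration of that idea, not Google's actual code; the patch size, the gradient-angle bucketing scheme, and the nearest-neighbour "cheap upscale" stand-in are all simplifying assumptions.

```python
# Minimal RAISR-style sketch (illustrative only): learn per-bucket least-squares
# filters that map cheaply-upscaled patches to true high-resolution pixels,
# rather than relying on interpolation alone. All constants are assumptions.

import numpy as np

PATCH = 5      # assumed filter size
BUCKETS = 8    # assumed: bucket patches by quantized dominant gradient angle

def cheap_upscale(img, factor=2):
    """Nearest-neighbour upscale standing in for bilinear/bicubic interpolation."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def bucket_of(patch):
    """Assign a patch to a bucket using its dominant gradient direction."""
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / np.pi * BUCKETS) % BUCKETS

def patches_and_targets(lr_up, hr):
    """Yield (bucket, flattened patch, true high-res pixel) triples."""
    r = PATCH // 2
    h, w = hr.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = lr_up[y - r:y + r + 1, x - r:x + r + 1]
            yield bucket_of(patch), patch.ravel(), hr[y, x]

def train_filters(pairs):
    """Fit one least-squares filter per bucket from (low-res, high-res) image pairs."""
    A = [[] for _ in range(BUCKETS)]
    b = [[] for _ in range(BUCKETS)]
    for lr, hr in pairs:
        lr_up = cheap_upscale(lr)
        for k, patch, target in patches_and_targets(lr_up, hr):
            A[k].append(patch)
            b[k].append(target)
    filters = []
    for k in range(BUCKETS):
        if A[k]:
            f, *_ = np.linalg.lstsq(np.array(A[k]), np.array(b[k]), rcond=None)
        else:
            # Identity filter fallback for buckets with no training patches.
            f = np.zeros(PATCH * PATCH)
            f[PATCH * PATCH // 2] = 1.0
        filters.append(f)
    return filters

def upscale(lr, filters):
    """Cheaply upscale, then sharpen each pixel with its bucket's learned filter."""
    lr_up = cheap_upscale(lr)
    out = lr_up.copy()
    r = PATCH // 2
    h, w = lr_up.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = lr_up[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = patch.ravel() @ filters[bucket_of(patch)]
    return out
```

In practice the training pairs would come from downsampling a standard image set, so every filter encodes what high-resolution detail "usually" accompanies a given low-resolution pattern, which is exactly why the output is an inference about the scene rather than a recovery of information the sensor never captured.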

The article points out that “faith in the fidelity of these ‘enhanced’ images routinely convicts defendants.”
