schrödinger

feed me, seymour

  • Google wrongly labels as child abuse photos that father emails to doctor on request

    https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

    The nurse said to send photos so the doctor could review them in advance.

    Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

    [...]

    Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”

    The photos were automatically uploaded from his phone to his Google account...

    [...]

    A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by federal law, makes a report to the CyberTipline at the National Center for Missing and Exploited Children.

    Even after the police cleared him, Google has not restored his account, costing him more than 10 years of data: contacts, emails, photos. Google has offered no statement or explanation.

    #privacy
    #artificial_intelligence