A feature Google demoed at its I/O confab yesterday, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective shiver down the spines of privacy and security experts. They warn the feature represents the thin end of the wedge: once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.
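Google has published no implementation details, but the core concern is the scanning pattern itself, not any particular model. A minimal sketch of what generic client-side content scanning looks like (all names and patterns here are hypothetical; the real feature reportedly uses an on-device ML model such as Gemini Nano rather than regexes) illustrates the experts' point: whoever controls the pattern list controls what gets flagged.

```python
import re

# Hypothetical illustration only: a trivial on-device scanner that flags
# transcript snippets matching scam-associated phrases. Swapping out this
# list is all it takes to repurpose the same infrastructure for any topic.
SCAM_PATTERNS = [
    re.compile(r"\bgift cards?\b", re.IGNORECASE),
    re.compile(r"\bwire (?:the )?money\b", re.IGNORECASE),
    re.compile(r"\bverification code\b", re.IGNORECASE),
]

def scan_snippet(snippet: str) -> bool:
    """Return True if the snippet matches any flagged pattern.

    Runs entirely on-device: the snippet never leaves this function.
    """
    return any(p.search(snippet) for p in SCAM_PATTERNS)

print(scan_snippet("Please pay the fee in gift cards"))  # True
print(scan_snippet("See you at lunch tomorrow"))         # False
```

The privacy argument turns on exactly this property: the scanning machinery is content-agnostic, so only the pattern list, set centrally by the vendor or a government, determines what is "detected."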

Apple abandoned a plan to deploy client-side scanning for CSAM in 2021 after a huge privacy backlash. However, policymakers have continued to heap pressure on the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all sorts of content scanning by default, whether government-led or driven by a particular commercial agenda.

Meredith Whittaker, president of the U.S.-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It lays the path for centralized, device-level client side scanning.

“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

  • retrospectology@lemmy.world

    Combined with how easy it is becoming to create an AI copy of a person’s voice you’re pretty soon not going to ever be sure if what you’re saying or hearing on a phone is actually what’s being said or if it’s being edited in real time. China’s gonna love this shit.

    Really hate tech bros who just keep recklessly pushing ahead on this stuff. Absolutely the worst scenario for AI.