Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.
To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.
“This is incredibly dangerous,” says Meredith Whittaker, president of the Signal Foundation, the nonprofit behind the end-to-end encrypted messaging app Signal.
Whittaker, a former Google employee, argues that the very premise of the anti-scam call feature poses a threat: Google could program the same technology to scan for other keywords, such as requests for access to abortion services.
“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”
“…locally on device without connecting to the internet”
How would it then report such behavior to Google, without internet?
If it notifies the end user, what good does that do? My phone is at my ear; I don’t stop a conversation when another app sends a notification while I’m on a call.
This will 100% report things in the background to Google.
It doesn’t
You can’t see why it might be helpful for a user to know that they’re speaking to a scammer?
I assume it means the “AI” bit is running locally (for cost/efficiency reasons, and so your actual voice isn’t uploaded), and the results are then uploaded wherever (which is theoretically better, but still hugely open to abuse).
My bet is it will work like their federated text prediction in Gboard.
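For context, the Gboard-style approach the comment refers to is federated learning: training happens on-device, and only model updates (not your data) are sent to a server for averaging. A minimal toy sketch, with entirely hypothetical function names and a toy gradient step standing in for real training (the real system also layers on secure aggregation and differential privacy):

```python
# Toy sketch of federated learning, Gboard-style. Raw examples stay
# on the device; only weight deltas are shared and averaged.

def local_update(weights, examples, lr=0.1):
    """Train on-device: raw examples never leave this function."""
    delta = [0.0] * len(weights)
    for x, err in examples:  # toy per-example gradient step
        for i in range(len(weights)):
            delta[i] -= lr * err * x[i]
    return delta  # only this delta is uploaded, not the examples

def federated_average(global_weights, deltas):
    """Server side: average deltas from many devices into the model."""
    n = len(deltas)
    return [w + sum(d[i] for d in deltas) / n
            for i, w in enumerate(global_weights)]
```

The privacy argument rests on the fact that `local_update` returns only a delta; whether that’s enough depends entirely on what else the vendor chooses to upload.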
There are a few ways this could work, but it hardly seems worth the effort if it’s not phoning home.
They could have an on-device database of red flags and use on-device voice recognition against that database. But then what? Pop up a “scam likely” screen while you’re already mid-call? Maybe include an option to report scams back to Google with a transcript? I guess that could be useful.
Anything more than that would be a privacy nightmare. I don’t want Google’s AI deciding which of my conversations are private and which get sent back to Google. Any non-zero false positive rate would simply be unacceptable.
Maybe this is the first look at a new cat and mouse game: AI to detect AI-generated voices? AI-generated voice scams are already out there in the wild and will only become more common as time goes on.
You’re putting a very large amount of trust in something that may require only the flip of a switch to send the specified information back to Google, along with all the heavy telemetry already feeding back…
Mega hot take on this site: I have no trust in Google