It feels like we have a new privacy threat that’s emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves of:
So for that third one…what do we do? Anything that’s online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you’ve provided? If you do care, do you think there’s any reasonable way we can fight back? Can we poison their training data somehow?
I don’t exactly see what they’re doing wrong as long as they’re using publicly posted and available work. The AI is effectively learning by “seeing” the art/article. The only way it would be unethical is if they were using private stuff they shouldn’t have access to. Claiming anything more gets into a weird expansion of what intellectual property can and should do, and I don’t like that. IP protections are already oppressive. I refuse to support that.
Exactly. If you’re posting something on the internet for the world to see, you can’t get upset when people, or in this case an AI, read it.