Facebook has been selling your data to ad companies since the day you created your account. Consenting only changes what you visually see on the website; it makes absolutely zero difference from a data collection standpoint. Just consent so you can delete your accounts with less hassle.

Filing a GDPR request over email is a pain, takes a long time, and there’s no guarantee they’ll actually honor it. Plus, some sites (likely including Facebook) will ask for a government ID to verify you live somewhere the GDPR applies. It isn’t worth the trouble when there are easier methods. Once you’re able to log in, you should be able to find a GDPR portal somewhere in case you still want to file a request before deleting your account, but it’s up to you whether you want to bother. At the very least that saves you from having to write a letter and either email or mail it to them.

With Facebook’s consistent history of violating the GDPR, I honestly don’t even feel like it’s worth trying. Chances are your data will still be sold regardless. Just look at all the lawsuits against Facebook for GDPR violations over the past few years.
I’d argue SimpleX does it better; they’re even modifying the Signal protocol to support post-quantum encryption. No phone number, uses the Signal protocol, and has no user identifiers at all (no usernames, no account numbers, no account at all; everything is stored locally on your device).
Oh, and before anyone replies that Signal is already post-quantum, here’s an excerpt from the blog post I linked detailing why SimpleX’s implementation is better:
“unlike Signal design that only added quantum resistance to the initial key exchange by replacing X3DH key agreement scheme with post-quantum PQXDH, but did not improve Signal algorithm itself, our design added quantum-resistant key agreements inside the double ratchet algorithm, making its break-in recovery property also quantum resistant.”
There is much more detail in the blog post if you’re interested. SimpleX also has an incredible whitepaper.
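Here’s a rough Python sketch of the difference the excerpt is describing. This is purely illustrative on my part (not SimpleX’s or Signal’s actual code); os.urandom() stands in for the real DH and PQ KEM outputs, and the key derivation is simplified to a single hash:

```python
# Toy illustration only; real implementations use HKDF, X25519, ML-KEM, etc.
# os.urandom() stands in for the actual DH / PQ KEM shared secrets.
import hashlib
import os

def kdf(*parts: bytes) -> bytes:
    """Stand-in key derivation: hash all inputs together."""
    return hashlib.sha256(b"".join(parts)).digest()

# PQXDH-style: a post-quantum secret is mixed in once, at the initial key exchange.
root_key = kdf(os.urandom(32),   # stand-in for the X3DH-style DH secrets
               os.urandom(32))   # stand-in for the initial PQ KEM shared secret

# Classic double ratchet afterwards: each step only mixes in a new DH secret,
# so recovering from a state compromise rests on DH alone, which a future
# quantum attacker could break.
for _ in range(3):
    root_key = kdf(root_key, os.urandom(32))   # new DH secret only

# The approach the excerpt describes: also mix a fresh PQ KEM secret into every
# ratchet step, so break-in recovery doesn't depend on DH alone.
for _ in range(3):
    root_key = kdf(root_key,
                   os.urandom(32),   # new DH secret
                   os.urandom(32))   # new per-step PQ KEM secret
```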
The best options for you are going to be SimpleX Chat or Jami, depending on your use case. If you only need to make video calls, Jami is probably the easier option, but if you’d like a chat app with video call support, SimpleX is the right choice. SimpleX is also just a really good messaging app, because it has no user identifiers or accounts. They have a wonderful explanation of their method for two-way communication in their whitepaper if you’re interested.
Also, video calling in a “secure environment”, as you’ve stated, is not difficult in the slightest, and absolutely not impossible. There are plenty of options available. Beyond the ones I already gave, there are Jitsi (though it’s gone way downhill; don’t use it), Signal, and Element (you do NOT have to self-host it; you can use the main instance or any other instance). The options open up to basically everything if you create a new user profile and install sandboxed Google Play services in it (from the “Apps” app). With sandboxed Google Play you could use apps like Zoom if you really wanted to, but I’d strongly encourage you not to for the sake of privacy. You can download apps without signing into a Google account via the Aurora Store.
Yes, of course GrapheneOS can run SimpleX! SimpleX has no dependence on Google Services Framework, and even apps that do depend on GSF can run with the sandboxed version. The only apps that don’t work on GrapheneOS are those that insist on SafetyNet attestation (mostly banking apps) or that require Google services to run with the deeply privileged access they have on stock Android.
Yes, that is exactly where perfect forward secrecy fails in Element. It allows all of the message keys to be obtained by attacking a single point of failure. Perfect forward secrecy requires that each message key be completely independent, so an attacker would have to break every message one by one. What Element’s cloud backup does is add a single point of failure whose compromise exposes every single message, without physical access to any device. Real perfect forward secrecy would make that impossible, because you would have to break the encryption of every message independently (again, setting aside physical access to the device, since the device will always have access to all the messages anyway). It essentially invalidates many of the benefits of using a double-ratchet key exchange protocol in the first place, since an attacker can instead go after a single point of failure that compromises all messages.
Granted, whether or not that matters to you is entirely up to you. I’m just clarifying that Element lacks perfect forward secrecy, so I have an ideological objection to personally using it for anything sensitive, since there are more secure messengers out there (like SimpleX) that do have perfect forward secrecy, plus many more security and privacy features (like the whole no-user-identifiers thing and no server-side storage with SimpleX). That does of course come with the tradeoff that you can only use it on one device at a time, but everything is a list of pros and cons.

Is anyone going to target you and attack you by trying to gain access to your cloud backup keys? No, most certainly not. But the fact that it exists as an attack vector at all is troubling from a security perspective (again, that’s where SimpleX shines: with all data stored locally, there is no way to access those messages on demand without physical access to the device). I personally think the metadata issues with Matrix are much worse from an immediate privacy perspective, as that is an avenue that can be actively exploited far more easily.
If I understand correctly though, I believe we’re both on the same page. Element is still a much better option than something like Discord, but it is not without its own flaws.
The idea with perfect forward secrecy is that breaking one key doesn’t let you read all the other messages. The way Element works (letting users share message encryption keys across devices through server-side key storage) means a single key can unlock all messages. All you need is your backup phrase (or a valid login session), and suddenly not just one message is visible, but all of them are. That is fundamentally in complete opposition to perfect forward secrecy.
The way to work around this is to store all messages locally (as SimpleX does, for instance) so they can’t be decrypted with server access alone; Element instead keeps messages on its servers. Handled client-side like that, backup and syncing could still be reasonably robust without breaking PFS.
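To make that concrete, here’s a toy Python sketch of the difference. It’s my own simplification, not actual Olm/Megolm or Element code; the “backup” below is just one recovery key wrapping every message key:

```python
# Toy sketch, not Olm/Megolm: why per-message keys + one backup key undoes PFS.
# Requires the `cryptography` package (pip install cryptography).
import base64
import hashlib
from cryptography.fernet import Fernet

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Hash ratchet: derive the next chain key and a one-time message key."""
    return (hashlib.sha256(chain_key + b"chain").digest(),
            hashlib.sha256(chain_key + b"msg").digest())

def as_fernet_key(raw: bytes) -> bytes:
    return base64.urlsafe_b64encode(raw)

chain = hashlib.sha256(b"secret from the initial key exchange").digest()
message_keys, ciphertexts = [], []
for text in [b"first", b"second", b"third"]:
    chain, mk = ratchet(chain)
    ciphertexts.append(Fernet(as_fernet_key(mk)).encrypt(text))
    message_keys.append(mk)  # a PFS-respecting client deletes these after use

# With per-message keys, stealing the *current* chain key reveals nothing about
# past messages: each old ciphertext has to be broken on its own.

# A server-side key backup wraps every message key under one recovery key.
# Whoever obtains that single key (or the phrase it's derived from) gets everything:
recovery_key = Fernet.generate_key()
backup = [Fernet(recovery_key).encrypt(mk) for mk in message_keys]

for wrapped, ct in zip(backup, ciphertexts):
    mk = Fernet(recovery_key).decrypt(wrapped)
    print(Fernet(as_fernet_key(mk)).decrypt(ct))  # all three messages recovered
```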
Do you have any sources for that, preferably their own documentation?
https://github.com/element-hq/element-meta/issues/1296
I got that from the privacyguides.org website, at:
https://www.privacyguides.org/en/real-time-communication/#element
If you look carefully at the Element website, you’ll notice it never claims to provide perfect forward secrecy. That’s intentional, and unless they change how their key backups work, it will stay that way. Since the issue is still open, I can only assume it remains a problem.
I’d like to add SimpleX to this list, as Matrix-based messengers hemorrhage metadata and Session doesn’t have perfect forward secrecy. Also, while the Matrix protocol technically supports perfect forward secrecy, Element does not currently use it.
Yet Another Call Blocker. Also available on F-Droid.
1984.hosting is great; I’ve been using their service for a couple of years now. They’re based in Iceland (really strong privacy laws) and have options for crypto payment if you don’t want to reveal yourself through your payment method. As with all registrars, they’ll need an email address (or alias) to reach you at in case there’s a domain dispute, and while they also ask for an address and phone number, they’ve never had me actually verify anything beyond the email. If you give a fake address and phone number, just understand that if someone does challenge your domain, it will be very difficult to prove ownership with fake details (not that that’s likely to happen unless you’re letting the site get crawled by search engines).

I only have a domain through them, not a hosted webserver, but they seem to have good options for hosting. I know they handle Let’s Encrypt certs automatically for hosted sites, and they run on green energy (geothermal) if that matters to you.
On the YouTube front, even a trivially small donation will support a creator far more than watching their ads ever could; something as small as $1/year is often more than your views would earn them in a year. As for donations to developers, it depends entirely on what you feel comfortable with. Most people who work on open source projects are unpaid volunteers, so donating isn’t expected, but if you choose to do so it can be quite helpful in sustaining the project. If many people in the userbase made small donations, that would go a long way.
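For a rough sense of the numbers, here’s a back-of-envelope calculation. Every figure in it is an assumption I’m making for illustration, not a published YouTube rate:

```python
# Rough estimate of what one viewer's ads are worth to a creator in a year.
# All numbers below are assumptions for illustration, not published rates.
revenue_per_1000_views = 3.00    # assumed ad revenue per 1000 views, in dollars
creator_share = 0.55             # assumed creator's cut of that revenue
videos_watched_per_year = 300    # assumed number of their videos you watch

yearly_value_of_your_ads = revenue_per_1000_views / 1000 * creator_share * videos_watched_per_year
print(f"${yearly_value_of_your_ads:.2f}")   # about $0.50, less than a $1 donation
```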
In reality, ads almost entirely benefit exploitative multi-billion dollar companies such as Google and Facebook, so my personal philosophy stands against them. I much prefer donating to people directly to cut out the exploitative middle-man.
I agree with this sentiment 100%, but I think it lacks some context: these are children we’re talking about. They aren’t being educated on privacy or security, not by their schools and certainly not by their parents. This generation is being raised to believe that everything they do and say needs to be posted to social media, and their concept of privacy is virtually nonexistent. Couple that with the fact that most of them don’t have a personal computer, and it leads to a great deal of negligence in how they use technology, and, most relevant to this discussion, how they use school computers. The children being surveilled and exploited by this software don’t have the education to understand why it’s bad, or even that it’s happening in the first place.
So while yes, they shouldn’t be having private communications on school computers, they don’t have the context to understand that or to come to that conclusion on their own, so those private communications will happen regardless.