Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.

Be a good motherfucker. Peace.

Cake day: Jun 15, 2023


This is more about the car maker harvesting data, rather than just tracking the car. Car makers have been (quietly) building more tech into their cars to collect data for the purposes of selling it to third parties. It’s effectively the enshittification of cars.

https://www.abc.net.au/news/2024-02-09/toyota-car-brands-collecting-driver-data-privacy-concerns-laws/103443500



Unfortunately, any mobile data component is likely to be integrated with something more integral to the car, like the entire entertainment/climate control interface, or something equally difficult or impossible to drive without.


I think OP is referring to the whole “connected cars” thing, which isn’t the same as GPS. Many cars nowadays have mobile data capabilities built in and are, unbeknownst to the owner, sending all sorts of information to the car makers.

This isn’t just governments and government contractors collecting data for road use and tolling. It’s for-profit companies harvesting consumer data for their own purpose. OP is right to be paranoid.


Yep - same here. I use MAC auth on all of my SSIDs, as well as PEAP auth for the “user” SSID. The kids’ phones get dropped into a dedicated VLAN, so knowing their MAC addresses is key to that.

Yes, every time they change phones, they have to ask me to add it to the wireless for them, but I have my RADIUS database in Postgres - I can easily use Adminer to add/change/delete the relevant table entry with the correct MAC address.
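For a sense of what that table change looks like, here’s a sketch assuming a FreeRADIUS-style `radcheck` table (table/column names are my assumption - your schema may differ), using Python’s built-in sqlite3 as a stand-in for Postgres:

```python
import sqlite3

# sqlite3 stands in for the Postgres RADIUS database here; a
# FreeRADIUS-style "radcheck" schema is assumed, not guaranteed.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE radcheck (
    id INTEGER PRIMARY KEY,
    username TEXT, attribute TEXT, op TEXT, value TEXT)""")

def add_device_mac(mac: str) -> None:
    """Register a device for MAC auth: in a common MAC-bypass setup
    the MAC address serves as both username and password."""
    mac = mac.lower().replace("-", ":")  # normalise the MAC format
    db.execute(
        "INSERT INTO radcheck (username, attribute, op, value) "
        "VALUES (?, ?, ?, ?)",
        (mac, "Cleartext-Password", ":=", mac))
    db.commit()

add_device_mac("AA-BB-CC-11-22-33")
row = db.execute("SELECT username, value FROM radcheck").fetchone()
print(row)  # ('aa:bb:cc:11:22:33', 'aa:bb:cc:11:22:33')
```

In practice that INSERT is exactly the kind of one-liner Adminer makes trivial against the real Postgres table.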


Sit tight, I reckon. All the projects have been forked by one of the more involved contributors to SMT, so I doubt it’ll be long before we have rebranded versions that’ll take an import of the settings from the original apps.


Reading the SimpleX overview, it seems the only way the carrier pigeon analogy is truly satisfied is with a private server, correct?


Yeah, I get it, but there’s just no way at all to ensure 100% total anonymity like you’re talking about, while also using a 3rd party carriage service of some sort (eg. mobile network; internet, etc).

We should go back to carrier pigeons with encrypted notes. That way, the sender and recipient “metadata” is only known to themselves (and the pigeons).


Privacy and anonymity are different things.

The post office knows who I am, my address, and who sends mail to me. They even know who I send mail to, if I write my return sender details on the envelope. I am not anonymous.

But, if the two of us use ciphers to encrypt our letters, and only the two of us can decrypt and read them, our communications can indeed be considered private.

There’s a fundamental difference.

Edit: to answer your crude (but funny) example, I have no expectation of anonymity when I walk into my toilet at home or the toilets at work. The very fact that I, as a man, walk into a stall rather than stand at the urinal, gives any of my colleagues washing their hands at the basin the reasonable confidence of knowing I am taking a shit.

The size of the shit, the faces I make, and the nature of the resulting product, however, are not known to anyone else except me. That’s the difference.


I think the commenter you’re replying to is supporting the point made further up. People aren’t using Signal for anonymity, because that’s not its advertised purpose. As we all (except the author of this article) know, its purpose is privacy.


Excellent question, and good that you’re asking.

Just about everything is virtualised on Proxmox, but that’s only something I started doing this year. Before that, just about everything was running in Docker containers on Raspberry Pis. But the security remained the same - just the back-end services changed. That said, only a handful of my services are available via the internet. For everything else, I use a permanently-on Wireguard VPN connection from my phone to access private services (including Pi-hole DNS resolution and SearxNG) when not at home.
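For anyone wondering what an always-on phone-side Wireguard setup roughly looks like, here’s a sketch of the client config. Every key, address, and the endpoint hostname are placeholders I’ve made up for illustration - not my real values:

```ini
# Hypothetical phone-side WireGuard config; keys, addresses and the
# endpoint are placeholders, not real values.
[Interface]
PrivateKey = <phone-private-key>
Address = 10.200.0.2/32
DNS = 10.0.30.5                 # home Pi-hole resolves all DNS queries

[Peer]
PublicKey = <home-server-public-key>
Endpoint = vpn.mydomain.tld:51820
AllowedIPs = 0.0.0.0/0, ::/0    # tunnel everything back through home
PersistentKeepalive = 25        # keep NAT mappings alive on mobile networks
```

The `AllowedIPs = 0.0.0.0/0, ::/0` line is what makes it a full tunnel, so all phone traffic (including DNS to Pi-hole) rides the VPN.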

Nginx Proxy Manager

To start with, everything (even internal-only services) is hosted behind a reverse proxy server - Nginx Proxy Manager (NPM). NPM ensures that all communications to my services are over SSL, using a free, automatically renewed SSL certificate from Let’s Encrypt. Crucially, I have NPM configured to steer all traffic for any publicly available services through an authentication service called Authelia (next section).

NPM also means I have name portability for my services. For instance, I used to use Whoogle for my private search engine, but recently changed over to SearxNG. As all my browsers reached the search engine using the host search.mydomain.tld, I didn’t have to reconfigure all of them. I simply changed where NPM steered the traffic.
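Conceptually, that name portability is just a host-to-upstream mapping the proxy maintains. Here’s a toy sketch of the idea (hostnames and backend addresses are invented; NPM stores this in its own database, not a Python dict):

```python
# Toy model of a reverse proxy's host-based routing table.
# Hostnames and backend addresses below are made-up examples.
upstreams = {
    "search.mydomain.tld": "http://10.0.40.11:8080",  # was Whoogle
    "files.mydomain.tld": "http://10.0.40.12:9000",
}

def route(host: str) -> str:
    """Return the backend a request for `host` should be proxied to."""
    return upstreams.get(host, "404")

# Swapping Whoogle for SearxNG only means repointing the mapping;
# every client still uses search.mydomain.tld, unchanged.
upstreams["search.mydomain.tld"] = "http://10.0.40.13:8080"
print(route("search.mydomain.tld"))  # http://10.0.40.13:8080
```

The clients never see the backend change - that’s the whole win.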

Authelia

Authelia has its own username/password database, or it can be configured to use an LDAP server. Authelia is one of a few single sign-on (SSO) services out there. Many others use one called Authentik. Either way, you need an SSO.

Crucially, SSO provides two-factor authentication (2FA). 2FA is where a service asks you for something additional, after username and password, to prove who you are. This is often a time-based one-time password (TOTP) - frequently a 6-digit, time-limited code generated by an app on your phone. In my case, Authelia is configured to use Duo Mobile, which sends a push notification to my phone, but also has the option of using a TOTP from the Duo Mobile app if push fails.

Network Segmentation

I don’t really use a DMZ as such any more. With the advent of better, virtualised firewalls (see below), I don’t really need to. Instead, all my Proxmox guests use a dedicated VLAN, making it very easy to identify and treat their traffic on my firewall. I have six VLANs set up:

  1. Myself/my wife
  2. Our kids
  3. Physical infrastructure (switches, Proxmox server, storage devices, etc)
  4. Proxmox guests
  5. Guest users
  6. IoT (usually untrusted IoT - Roomba vacuums, etc)

These mean I can set up some good, broad firewall rules for each segment of my network to catch all traffic, then focus on specifics higher up the firewall rule-chain. Which leads me to…

Firewall

As always, how you firewall your traffic is key to success. I’ve virtualised my firewalling/routing on Proxmox, with an OPNsense VM. My Proxmox server has two physical network interfaces, with one of them being plugged in directly to my fibre internet, and presented only to the OPNsense VM. Unless someone figures out how to break out of virtual jail on that link, their only way in is via OPNsense.

Given the network segmentation above, the rest is just about how you craft your firewall rules. Generally speaking, firewalls use “first match” for evaluating rules, meaning the first rule that matches the traffic being evaluated is the rule applied to that traffic.

For example, I block all IoT from internet access as my last rule for the IoT segment. I then add a few rules up top that allow traffic out for the IoT devices that can’t/don’t operate without internet - Roomba vacuums, for example.
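The first-match behaviour is easy to sketch. The rule fields and device names below are invented for illustration - real OPNsense rules have far more match criteria (interfaces, protocols, ports, etc.):

```python
# Ordered rule list for the IoT segment: specific allows first,
# broad block last. Device names are made-up examples.
IOT_RULES = [
    {"host": "roomba-1", "dst": "internet", "action": "pass"},
    {"host": "roomba-2", "dst": "internet", "action": "pass"},
    {"host": "*",        "dst": "internet", "action": "block"},
]

def evaluate(host: str, dst: str, rules=IOT_RULES) -> str:
    """First-match evaluation: the first matching rule wins,
    so rule order is everything."""
    for rule in rules:
        if rule["dst"] == dst and rule["host"] in ("*", host):
            return rule["action"]
    return "block"  # implicit default deny if nothing matches

print(evaluate("roomba-1", "internet"))    # pass
print(evaluate("smart-bulb", "internet"))  # block
```

Swap the rule order (block first) and the vacuums lose internet too - which is exactly why the broad catch-all goes last.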

Being specific about the known use cases on your network is difficult at first - it’s surprising how much “just works” without you specifically knowing about it. I spent a fair bit of time using the live logging feature on my firewall, analysing blocked traffic, to determine what else I needed to open to make sure things were working as expected.

As painful as it can be to do this, it’s critical to being able to sleep at night, knowing you’ve only created the tiniest pinholes required. That’s what firewall rules are - pinholes in an otherwise impenetrable brick wall protecting your network, but also a requirement for certain things to operate properly. The cool thing is, firewall rules are directional (eg. something coming into the network, or something leaving it), so these pinholes aren’t a two-way street if you don’t need them to be.

Additional thoughts

Ultimately, what helped a lot was mapping this out on paper first. Nothing beats having a plan to refer back to, when you’re in the middle of building/changing a bunch of network stuff. It centres your thoughts and reminds you of the prize, when all you want to do is unpick it all and go back to that shitty wireless internet router your ISP gave you.

Not sure about your circumstances, but I did a lot of my work in stages, often late at night, when the kids were in bed. Try doing open-heart surgery on your internet access with teenagers in the house!


Yeah, absolutely. I forgot to mention that I use Wireguard and Tasker so that, when I’m travelling, only the backups I want to sync over 5G/remotely are synced over. The rest can usually wait until I get back home.

  • Syncthing syncs a parent backups folder on my phone
  • Wireguard keeps me permanently tethered to the home network (for Pi-hole and searxng private search engine, which goes out via Mullvad VPN @ home)
  • Tasker keeps the large and/or unnecessary backup files out of Syncthing’s view when I’m not on the home network

I use Syncthing to get the backups I want over to my main computer, then rclone to encrypt them onto remote cloud storage. In my case, I use S3, but rclone supports heaps of cloud remotes.