Lol… I just read the paper, and Dr Zhao basically wrote a research paper on why it's legally OK to use images to train AI. Hear me out…
His tool, Glaze, changes the 'style' of input images to corrupt the ability of image generators to mimic them, and he even shows that the vast majority of artists can't tell when this happens. Style is explicitly not copyrightable under US case law, so he just provided evidence that the way OpenAI and others use training data to generate images is transformative — which would legally mean it falls under fair use.
No idea if this would actually get argued in court, but it certainly doesn't support the idea that these image generators are stealing actual artwork.
Immediately, probably not. Privacy is one of those things where when you really need it, you can’t get it… unless you already have it.
Also, it's not like you know the motivations of all 7 billion people on earth. If everything about you is out in the open, it just makes it easy for even the lazy to find you.
I can get behind using a VPN, a phone running GrapheneOS or CalyxOS, an ad blocker, a user agent switcher, LibreWolf, and so on… you give up some convenience for privacy, but it's not overbearing. Tor, however, isn't exactly usable as a daily driver.
So is there a visible benefit? Hopefully not. If you're doing it right, you'll just live a normal life and never be bothered.