The Age rating is who can use the App, not how long it’s been up.
YouTube will actually take action and has done so in most instances. I won’t say they’re the fastest, but they do kick people off the platform if they deem them high risk.
I don’t understand the comments suggesting this is “guilty by proxy”. These platforms have algorithms designed to keep you engaged and through their callousness, have allowed extremist content to remain visible.
Are we going to ignore all the anti-vaxxer groups who fueled vaccine hesitancy, which resulted in long-dead diseases making a resurgence?
To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs is extremely short-sighted.
“But Freedom of Speech!”
If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don’t deserve to have that speech. Sorry, you’ve violated the social contract and those people’s blood is on your hands.
I’m talking less about the products and more about Linus’s reviewing practices. We saw this in the watercooler debacle. He half-asses reviews and blames the product when he’s the one messing up.
Not even that. It’s that his review isn’t an objective assessment of the product because he stands to financially benefit from Framework doing well. He’s worse than a hypocrite, he’s a shill.
Yeah, I suppose that’s what’s happening with cops: rather than raising the quality of education to make them better, they keep lowering standards because otherwise they wouldn’t have so many people becoming cops. They just make it easier for shitty people to become cops.
They may not qualify in that moment so it’s necessary to educate them until they can qualify. If you don’t know how to drive, go back to taking driving lessons until you can pass the test.
Netflix is full of reptiles who don’t care to offer a better service. All they want is enough market share to strongarm consumers into giving them more money.
I’m not sure what original comment you mean.
The reason for my comment was that the comment I replied to wasn’t addressing what the comment you were replying to brought up. They pointed out that the problems with using Google have worsened because of LLMs, and your reply was “well, it’s useful for me in software development”. Like I said, that’s good for you, but it doesn’t address the actual issues.
That’s good for you; however, content generated by these models has still polluted the internet and made Google’s Image search impossible to use.
Whenever some dipshit responds to me with “you’re talking about AGI, this is AI”, my only reply is fuck right off.
I’ve just done the dance already and I’m tired of their watered-down attempts at bringing human complexity down to a level that makes their chatbots seem smart.
I don’t need a theory for this, you’re being highly reductive by focusing on a few features of human communication.
What research? These bots aren’t that complicated beyond an optimisation algorithm. Regardless of the tasks you give it, it can’t evolve beyond what it is.
There’s no way these chatbots are capable of evolving into Ultron. That’s like saying a toaster is capable of nuclear fusion.
Sounds like a great car! It does seem like something’s wrong with the battery so a replacement is in order.
From the replies I’ve been getting, I think so.
My mum’s 2019 Toyota Yaris has to have its engine run every few days or the battery dies from just sitting on the driveway. It could be a faulty car battery, but considering the car isn’t even that old and has barely been driven 30k miles, it’s not doing so great. I discovered yesterday that my EV charges better after I’ve driven it around and the battery’s warmed up a bit. The car goes a bit haywire on a cold start, so it seems like it needs some prep time before a drive.
The problem isn’t the misinformation itself, it’s the rate at which misinformation is produced. Generative models lower the barrier to entry, so anyone in their living room somewhere can make deepfakes of your favourite politician. The blame isn’t on AI for creating misinformation; it’s for making the situation worse.
Even the Wayback Machine has limits to what is available.