Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 2 Posts
  • 100 Comments
Joined 29 days ago
Cake day: July 17th, 2025

  • Not in the traditional sense, but I have a pet theory about the continuation of consciousness.

    You can only experience being, not not-being, so even if your consciousness went dark for a million years before being “reincarnated,” there would be no gap from the perspective of your subjective experience. You can only go from having one experience to having another. Nothingness can’t be experienced.

  • It is a big part of the issue, but as Lemmy clearly demonstrates, that issue doesn’t go away even when you remove the algorithm entirely.

    I see it a lot like driving cars - no matter how much better and safer we make them, accidents will still happen as long as there’s an ape behind the wheel, and probably even after that. That’s not to say things can’t be improved - they definitely can - but I don’t think it can ever be “fixed,” because the problem isn’t it - it’s us. You can’t fix humans by tweaking the code on social media.

  • You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like “you’ve caught one” doesn’t mean you did - it just means you think you did. Without confirmation, you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - treating your original assumption as “proof” without actual verification.

    And then there’s the classic toupee fallacy: “All toupees look fake - I’ve never seen one that didn’t.” That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.
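    A toy sketch of the toupee-fallacy arithmetic (all numbers here are made up for illustration): suppose 70% of toupees are convincing and a spotter only ever notices the obvious 30%. Every toupee they notice looks fake, so their perceived accuracy is 100%, even though their true detection rate is far lower:

    ```python
    # Hypothetical numbers, purely illustrative of the toupee fallacy.
    # 700 convincing ("good") toupees, 300 obvious ("bad") ones.
    toupees = ["good"] * 700 + ["bad"] * 300

    # The spotter notices every bad toupee but none of the good ones.
    spotted = [t for t in toupees if t == "bad"]

    # From the spotter's point of view, every toupee they ever noticed
    # looked fake, so their perceived accuracy is perfect...
    perceived_accuracy = sum(t == "bad" for t in spotted) / len(spotted)

    # ...but their true detection rate over all toupees is much lower.
    true_recall = len(spotted) / len(toupees)

    print(perceived_accuracy)  # 1.0
    print(true_recall)         # 0.3
    ```

    The gap between the two numbers is exactly the fallacy: the good toupees never enter the spotter’s sample, so they can’t drag the perceived score down.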

  • I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

    But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent - not general intelligence like a human’s, but narrow, or “weak,” intelligence. The fact that it often says true things is almost accidental: a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

    So yes, it just responds with statistical accuracy, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.