• retrospectology@lemmy.world · edited · 5 months ago

    AI has a bad name because it is being pursued incredibly recklessly, and any and all criticism is waved away by its cult-like supporters.

    Fascists taking up the use of AI is one of the biggest threats it presents, and people are even trying to shrug that off. It’s insanity the way people simply will not acknowledge the massive pitfalls that AI represents.

    • pavnilschanda@lemmy.world · 5 months ago

      I think that applies to online spaces in general, where anything that goes against the grain gets shooed away by the zeitgeist of that specific space. I wish there were more places where we could all take criticism into account, generative AI included. Even r/aiwars, which is supposed to be a place for discussing both the good and the bad of AI, can come across as incredibly one-sided at times.

    • Leate_Wonceslace@lemmy.dbzer0.com · edited · 4 months ago

      As someone who has sometimes been accused of being an AI cultist, I agree that it’s being pursued far too recklessly, but the people I argue with don’t usually give very good arguments about it. Specifically, I keep getting people who argue from the assumption that AI “aren’t real minds” and try to draw moral reasons not to use it from that. This fails for two reasons: 1. we cannot know whether AI have internal experiences, and 2. a tool being sapient would have more complicated moral dynamics than the alternative. I don’t know how much this helps you, but if you didn’t know before, you know now.

      Edit: y’all’re seriously downvoting me for pointing out that a question is unanswerable, when it’s been known to be such for centuries. Read a fucking philosophy book, ffs.

      • Traister101@lemmy.today · 5 months ago

        We do know: we created them. The AI people are currently freaking out about does a single thing: predict text. You can think of LLMs like a hyper advanced auto correct. The main thing that’s exciting is that they produce text that looks as if a human wrote it. That’s all. They don’t have any memory or persistence whatsoever. That’s why we have to feed the model a bunch of the previous text (the context) in a “conversation” for it to work as convincingly as it does. It cannot and does not remember what you say.
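        The statelessness described above can be sketched in code: a chat client simply resends the full transcript every turn. This is a toy illustration, not any real model API; `fake_llm` is a hypothetical stand-in that only reports how much context it received, since the point here is the plumbing, not the prediction.

```python
# Hypothetical stand-in for a real LLM call; a real model would predict a
# continuation of `prompt`, but the resending mechanism is the same.
def fake_llm(prompt: str) -> str:
    return f"(model saw {len(prompt)} chars of context)"

def chat_turn(history: list[str], user_msg: str) -> str:
    """One 'conversation' turn: the ENTIRE transcript is resent each time."""
    history.append(f"User: {user_msg}")
    prompt = "\n".join(history)  # the model stores nothing between calls
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat_turn(history, "Hello")
chat_turn(history, "What did I just say?")
# All apparent "memory" lives in `history`, held by the caller, not the model.
```

        Between the two calls, nothing persists inside `fake_llm`; the second turn only "remembers" the first because the caller joined both messages back into the prompt.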

        • Leate_Wonceslace@lemmy.dbzer0.com · 5 months ago

          You’re making the implicit assumption that an entity that lacks memory necessarily does not have any internal experience, which is not something that we can know or test for. Furthermore, there’s no law of the universe that states that something created by humans cannot have an internal experience; we have no way of knowing whether something we create has an internal experience or not.

          You can think of LLMs like a hyper advanced auto correct.

          Yes; this is functionally what LLMs are, but the scope of the discussion extends beyond LLMs, and this doesn’t address my core complaint about how these arguments are conducted. Generally, though maybe not universally, if a core premise of your argument is “X works differently than humans”, your argument won’t be valid. I’m not currently making a claim of substance; I’m critiquing a tactic and pointing out that, among other things, it relies on a bad foundation.

          If you want another way to make the argument, consider focusing on the practical implications of current and future technologies given current and hypothetical ways of structuring society. For example: the fact that generative AI (being a novel form of automation) making images will lead to the displacement of artists, or the fact that art is being used without consent to train these models, which are then used for profit.

            • Leate_Wonceslace@lemmy.dbzer0.com · edited · 4 months ago

              Not “by my definitions”: by the simple fact that we can’t test for it. Technically, no one knows if any other individual has internal experiences or not. I know for a fact that my sensorium provides me data, and if I assume that data is at all accurate, I can be reasonably confident that other entities that look and behave similarly to me exist. However, I can’t verify that any of them have internal experiences the way I do. Sure, it’s reasonable to expect that they do, so we can add that to the pile of assumptions we’ve been working with so far without much issue.

              What about other animals, like dogs? They have the same computational substrate and the same mechanism for making those computations. I think it’s reasonable to say animals probably have internal experiences, but I’ve met multiple people who insist they somehow know they don’t, and that animal abuse is therefore a myth. Now, if we assume animals have internal experiences, what about nematodes? Nematode brains are simple enough that you can run them on a computer. If animals have internal experiences, does that include nematodes? If so, does the simulated nematode brain have internal experiences? And if a computer’s subroutine can have internal experiences, what about the computer?

              Do you now understand what I’m saying, and why? Where is the line drawn? As far as I can tell, the only honest answer is to admit ignorance.