• Dojan@lemmy.world · 2 days ago

    “I wrote an email to Google to say, ‘you have access to my computer, is that right?’”, he added.

    Lmao, right, as if the support person they reached, if they even spoke to a person at all, would know and divulge which sources the models are trained on. They may think all their research is private, but they're making use of these tech giants' services. These tech giants have blatantly shown that they're OK with piracy and copyright infringement to further their goals, so why would spying on research institutions be any different?

    If you want to give it a run for its money, give it a novel problem that isn’t solved, and see what it comes up with.

    • DarkCloud@lemmy.world · 2 days ago

      Large language model companies weren't even aware that their training data (which is so large they themselves have no idea what's in it) contained other languages.

      So the models "suddenly" knew how to speak other languages. The above story feels like those "Large language models are super intelligent! They've taught themselves French!" stories. No, mass surveillance and corporations being above the law taught them everything they know.

    • A_A@lemmy.world · 2 days ago

      (…) If you want to give it a run for its money, give it a novel problem that isn’t solved, and see what it comes up with.

      You mean like researchers have done here?

      https://bturtel.substack.com/p/human-all-too-human
      "For AI to learn something fundamentally new - something it cannot be taught by humans - it requires exploration and ground-truth feedback."

      https://www.lightningrod.ai/
      "We're enabling self-play that learns directly from real world feedback."
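
      For context, here is a minimal toy sketch of what an "exploration plus ground-truth feedback" loop can look like, in the spirit of those quotes. Everything in it is illustrative and comes from neither linked project: the "model" is just a random guesser, and the ground truth is a programmatic check.

      ```python
      import random

      # Hypothetical toy version of self-play with ground-truth feedback:
      # propose candidates (exploration), keep only the ones a verifiable
      # check confirms (ground truth), and treat those as training signal.

      def ground_truth_check(candidate: int, target: int) -> bool:
          """Verifiable feedback that needs no human label."""
          return candidate * candidate == target

      def explore(rng: random.Random, low: int, high: int) -> int:
          """Exploration: propose a candidate nobody taught the 'model'."""
          return rng.randint(low, high)

      def self_play(target: int, attempts: int = 10_000) -> list[int]:
          """Collect only the candidates that pass the ground-truth check."""
          rng = random.Random(0)
          verified = []
          for _ in range(attempts):
              guess = explore(rng, 0, target)
              if ground_truth_check(guess, target):
                  verified.append(guess)  # would become training data
          return verified

      print(self_play(144))  # only verified square roots of 144 survive
      ```

      The point of the sketch: nothing "super intelligent" happens. The loop only ever learns what the verifier can already confirm.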