When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

  • geneva_convenience@lemmy.ml (OP) · 23 hours ago

    Also from the article:

    An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.

    “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

    The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.

  • Bronstein_Tardigrade@lemmygrad.ml · 20 hours ago

    Sounds like Reuters is spreading propaganda to get people behind internet age restrictions. The story pairs a “child-like” elderly man with a tangent about Meta’s cavalier attitude toward AI and children. I smell a rat. I expect more and more stories like this to pop up, followed by Congress declaring it “must save the children” with a UK-style age-restriction surveillance bill.

  • Markaos@discuss.tchncs.de · 23 hours ago

    OK, so the whole LLM-chatbot-arranging-dates-with-people thing is obviously problematic, but this person simply tripped and fell, and the headline vaguely implies that the chatbot is responsible for his death. That seems a bit clickbaity - if it had been a real person actually waiting to meet at the agreed-upon address, the outcome would have been the same.

    • geneva_convenience@lemmy.ml (OP) · 23 hours ago

      True, but it’s deceiving an elderly man, and the death was what Reuters decided to focus on. I found the part about the 200 pages of romantic interactions with minors significantly more disturbing.

      The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.

      • Markaos@discuss.tchncs.de · 15 hours ago

        I agree - the fact that Meta considers it perfectly fine for 13-year-olds to have romantic chats with chatbots is disturbing, and IMHO the main newsworthy thing here.

        However, there is no mention of “200 pages of romantic interactions with minors” in the article - that is the whole chatbot guidelines document. Still, the fact that it includes such things shows how shitty Meta is as a company.