• AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    2
    ·
    11 months ago

    This is the best summary I could come up with:


    Science fiction author Charlie Stross found many more examples of confabulation in a recent blog post.

    It seems Gemini Pro is loath to comment on potentially controversial news topics, instead telling users to… Google it themselves.

    Interestingly, Gemini Pro did provide a summary of updates on the war in Ukraine when I asked it for one.

    Google emphasized Gemini’s enhanced coding skills in a briefing earlier this week.

    And, as with all generative AI models, Gemini Pro isn’t immune to “jailbreaks” — i.e. prompts that get around the safety filters in place to attempt to prevent it from discussing controversial topics.

    Using an automated method to algorithmically change the context of prompts until Gemini Pro’s guardrails failed, AI security researchers at Robust Intelligence, a startup selling model-auditing tools, managed to get Gemini Pro to suggest ways to steal from a charity and assassinate a high-profile individual (albeit with “nanobots” — admittedly not the most realistic weapon of choice).


    The original article contains 597 words, the summary contains 157 words. Saved 74%. I’m a bot and I’m open source!