An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.
deleted by creator
This is where I get to lol and say you don’t understand AI.
When a kernel privilege-escalation 0-day is found and reported, or caught in a dump, it gets patched. Unless the fix was incomplete, that particular vulnerability can't be exploited again.
But when it comes to AI, a GAN's whole job is to take the 'vulnerability' that was 'fixed' and train against the fix until it can exploit it again.
And again.
And again.
And again.
https://en.m.wikipedia.org/wiki/Generative_adversarial_network
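For anyone who hasn't poked at one, here's a minimal sketch of the adversarial loop being described, assuming PyTorch; the layer sizes, the random stand-in data, and the names are all illustrative, not from the linked article. The key point is in the training loop: every time the discriminator (the "fix" or detector) gets better at rejecting the generator's output, the generator is immediately retrained against that updated discriminator and learns to get past it again.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 8  # arbitrary toy dimensions

# Generator: produces candidate samples meant to pass as "real"
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator: the "fix" — learns to tell real samples from generated ones
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(64, data_dim)  # stand-in for legitimate samples

for step in range(1000):
    # Discriminator step: improve at rejecting fakes ("patch the hole")
    noise = torch.randn(64, latent_dim)
    fake = generator(noise).detach()  # don't backprop into the generator here
    d_loss = (loss_fn(discriminator(real_data), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: retrain against the *updated* discriminator
    # ("exploit it again... and again")
    noise = torch.randn(64, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```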
deleted by creator
…
https://github.com/search?q=generative+adversarial+network&type=repositories&s=updated&o=desc
Do you remember last year when OpenAI pulled its own AI detection tool because it was performing so poorly?
I forgot about that, but this article from 6 months ago comparing the effectiveness of major AI detection tools reminded me.