Bonus issue:
This one is a little bit less obvious
There have been so many people filing AI-generated security vulnerability reports
I wonder if they made ChatGPT use an unnatural amount of emojis just to make it easier to spot
People often use a ridiculous amount of emojis in their README. Perhaps seeing it was a README triggered something in the LLM to talk like a README?
Why do LLMs obsess over making numbered lists? They seem to do that constantly.
- Honestly I don’t know
Oh, I can help! 🎉
- computers like lists, they organize things.
- itemized things are better when linked! 🔗
- I hate myself a little for writing this out 😐
My conspiracy theory is that early LLMs had a hard time figuring out the logical relation between sentences, and hence didn't generate good transitions between them.
I think bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don't tend to see bullet points that much in normal human communication.
That's not a bad theory, especially since newer models don't do it as often.
Well they are computers…
When your repository is on Facebook.
Lol, my brain is like, nope, I’m not even trying to read that.
I think I lost a few brain cells reading it all the way through.
Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have any words for this.
aby
Checks out
god damn it i can’t type lmao
The emoji littering in FastAPI's documentation actually drove me away from using it.
I mean, even if it's annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves
They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.