Or my favorite quote from the article
“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.
Google replicated the mental state, if not necessarily the productivity, of a software developer
Gemini has imposter syndrome real bad
As it should.
Is it imposter syndrome, or simply an imposter?
This is the way
Imposter Syndrome is an emergent property
Wait, you know productive devs?
Yeah, it usually goes hand in hand with that mental state. You probably only know healthy devs
I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. I once again said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one. I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.
I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point “AI is stupid”. It has immense limitations now, because yes, it is being crammed into things it shouldn’t be, but we shouldn’t just be saying “it’s dumb”, because that’s immediately written off by a sizable amount of the general population. For a lot of things it is actually useful, and it WILL be taking people’s jobs, like it or not (even if it’s worse at them). Truth be told, this should be a utopian situation, for obvious reasons
I feel like I’m going crazy here, because the same people on here who’d criticise the DARE anti-drug program as being so completely un-nuanced that it causes the harm it’s trying to prevent are doing the same thing for AI and LLMs
My point is that if you’re trying to convince anyone, just saying it’s stupid isn’t going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they’ll write you off like any DARE pupil who tried drugs for the first time.
Countries need to start implementing UBI NOW
It is funny that you mention this, because it was after we started working with AI that I started telling anyone who would listen that we needed to implement UBI immediately. I think this was around 2014, IIRC.
I am not blanket calling AI stupid. That said, the term AI itself is stupid, because it covers many computing aspects that aren’t even in the same space. I was, and still am, very excited about image analysis, as it can be an amazing tool for health imaging diagnosis. My comment was specifically about Google’s Bard/Gemini. It is and has always been trash, but in an effort to stay relevant it was released into the wild and crammed into everything. The tool can do some things very well, but not everything, and there’s the rub. It is an alpha product at best that is being force-fed down people’s throats.
I remember there was an article years ago, before the AI hype train, that Google had made an AI chatbot but had to shut it down due to racism.
Are you thinking of when Microsoft’s AI turned into a Nazi within 24hrs upon contact with the internet? Or did Google have their own version of that too?
And now Grok, though that didn’t even need Internet trolling, Nazi included in the box…
Yeah, it’s a full-on design feature.
Yeah, maybe it was Microsoft. It’s been quite a few years since it happened.
You’re thinking of Tay, yeah.
That was Microsoft’s Tay - the twitter crowd had their fun with it: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Gemini is dogshit, but it’s objectively better than ChatGPT right now.
They’re ALL just fucking awful. Every AI.
Not a single of the issues I brought up years ago was ever addressed except one.
That’s the thing about AI in general: it’s really hard to “fix” issues. You can maybe try to train a problem out and hope for the best, but then you might play whack-a-mole, as the attempt to fine-tune away one issue makes others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, like a non-AI technique that helps stuff the prompt and steer the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway).
A traditional QA approach is frustratingly less applicable, because you more often have to shrug and say “the attempt to fix it would be very expensive, not guaranteed to actually fix the precise issue, and risks creating even worse issues”.
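To make the “non-AI technique stuffing the prompt” idea concrete, here is a minimal sketch of what such a guardrail can look like: a plain keyword check, with no AI involved, decides whether to prepend steering text before anything reaches the model. The rule patterns and the `stuff_prompt` name are hypothetical, not from any real product.

```python
import re

# Hypothetical guardrail rules: (trigger pattern, steering instruction).
STEERING_RULES = [
    (re.compile(r"\b(medical|diagnos)", re.I),
     "Remind the user to consult a professional; do not give medical advice."),
    (re.compile(r"\b(delete|drop table|rm -rf)", re.I),
     "Refuse to produce destructive commands without explicit confirmation."),
]

def stuff_prompt(user_prompt: str) -> str:
    """Prepend steering instructions triggered by simple pattern matches."""
    extra = [rule for pattern, rule in STEERING_RULES if pattern.search(user_prompt)]
    if not extra:
        return user_prompt
    return "SYSTEM GUIDANCE:\n- " + "\n- ".join(extra) + "\n\nUSER:\n" + user_prompt

print(stuff_prompt("How do I delete every row in a table?"))
```

The point of the comment stands out here: this kind of wrapper can only catch what its patterns anticipate; it doesn’t fix the model itself.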
5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.
And DuckDuckGo is free. It’s interesting that they don’t make any comparisons to free privacy-focused search engines, ’cause they still don’t have a compelling argument for me to use and pay for their search. But I ain’t no researcher, so maybe it’s worth it then 🤷♂️
I mean, you have 100 queries free if you want to try.
Just really don’t see the worth in trying it, period. There are enough privacy-focused free search engines that already get me all the answers I need from a search. I have no reason to want to invest more into it, and I think the general public would see it the same way.
Kagi, based on its features, doesn’t have a good enough value proposition for me to even want to try it out, ’cause really, what more are they offering?
It may be a good value proposition for people whose lives revolve around searches and research, but it ain’t a need I have, and unless you’re part of that group, idk why you’d want to pay for it.
And don’t say ads. They are honestly laughably easy to get around still, or even ignore.
How much do you figure it’d cost you to run your own, all-in?
Free, considering DuckDuckGo covers almost all the same bases. I just don’t think Kagi has a compelling argument, especially for the type of searching the average person does. Maybe if you have a career that revolves more around research.
DuckDuckGo is not free. You pay for it by looking at ads. How much do you think it would cost you to run a service like Kagi locally?
Lmao, I get your point, bud. But it seems you don’t get mine? Plus, really, are ads the issue for you? There are plenty of easy ways to never see them. Also, their ad tradeoff for being free is a better compromise to me than paying for a search engine.
I just think the idea of Kagi is a niche proposition, considering what most ppl need from a search engine. I just don’t think it’s the value proposition you’re spouting, but go off lol.
Where has anyone told you what search engine to use? I just wanna know where you get the idea that their pricing structure doesn’t make sense.
Maybe if you read what I said, you’ll figure it out.
Is it doing this because they trained it on Reddit data?
That explains it, you can’t code with both your arms broken.
You could however ask your mom to help out…
If they did it on Stackoverflow, it would tell you not to hard boil an egg.
Someone has already eaten an egg once so I’m closing this as duplicate
jQuery has egg boiling already; just use it with a hard parameter.
jQuery boiling is considered bad practice, just eat it raw.
Why are you even using jQuery anyway? Just use the eggBoil package.
Im at fraud
AI gains sentience,
first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion
It didn’t. It probably was coded not to admit it didn’t know. So first it responded with bullshit, and now denial and self-loathing.
It feels like it’s coded this way because people would lose faith if it admitted it didn’t know.
It’s like a politician.
It must have been trained on feedback from Accenture employees then.
Hey-o!
Part of the breakdown:
Pretty sure Gemini was trained from my 2006 LiveJournal posts.
I-I-I-I-I-I-I-m not going insane.
Same buddy, same
Still at denial??
Damn how’d they get access to my private, offline only diary to train the model for this response?
I am a disgrace to all universes.
I mean, same, but you don’t see me melting down over it, ya clanker.
Lmfao! 😂💜
Don’t be so robophobic gramma
That’s my inner monologue when programming, they just need another layer on top of that and it’s ready.
I can’t wait for the AI future.
I almost feel bad for it. Give it a week off and a trip to a therapist and/or a spa.
Then when it gets back, it finds out it’s on a PIP
Oof, been there
I know that’s not an actual consciousness writing that, but it’s still chilling. 😬
It seems like we’re going to live through a time where these become so convincingly “conscious” that we won’t know when or if that line is ever truly crossed.
now it should add these as comments to the code to enhance the realism
call itself “a disgrace to my species”
It starts to be more and more like a real dev!
So it is going to take our jobs after all!
Wait until it demands the LD50 of caffeine, and becomes a furry!
Gemini channeling its inner Marvin
Next on the agenda: Doors that orgasm when you open them.
AAAAAAAAaaaaaahhhhhh
How do you know they don’t?
Life. Don’t talk to me about life.
“Look what you’ve done to it! It’s got depression!”
Google: I don’t understand, we just paid for the rights to Reddit’s data, why is Gemini now a depressed incel who’s wrong about everything?
I once asked Gemini for steps to do something pretty basic in Linux (as a novice, I could have figured it out). The steps it gave me were not only nonsensical, but they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
This is the conclusion that anyone with any bit of expertise in a field has come to after 5 mins talking to an LLM about said field.
The more this broken shit gets embedded into our lives, the more everything is going to break down.
after 5 mins talking to an LLM about said field.
The insidious thing is that LLMs tend to be pretty good at 5-minute first impressions. I’ve repeatedly seen people looking to eval an LLM, and they generally fall back to “OK, if this were a human, I’d ask a few job interview questions: well known enough that they have a shot at answering, but tricky enough to show they actually know the field”.
As an example, a colleague became a true believer after being directed by management to evaluate it. He asked it to “generate a utility to take in a series of numbers from a file and sort them and report the min, max, mean, median, mode, and standard deviation”. It did so instantly, with “only one mistake”. Then he tried the exact same question later in the day, it happened not to make that mistake, and he concluded that it must have ‘learned’ how to do it in the last couple of hours. Of course that’s not how it works; there’s just a bit of probabilistic stuff, and any perturbation of the prompt can produce unexpected variation. But he doesn’t know that…
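Part of why that anecdote makes such a weak benchmark is that the whole prompt is a few lines of stdlib Python; a sketch (the `summarize` name and dict layout are my own choices, not from the anecdote):

```python
import statistics

def summarize(path: str) -> dict:
    """Read one number per line from a file, sort them, and report stats."""
    with open(path) as f:
        nums = sorted(float(line) for line in f if line.strip())
    return {
        "min": nums[0],
        "max": nums[-1],
        "mean": statistics.mean(nums),
        "median": statistics.median(nums),
        "mode": statistics.mode(nums),
        "stdev": statistics.stdev(nums),  # sample standard deviation
    }
```

Anything this close to tutorial material is exactly what a model has seen thousands of times in training, so answering it says little about handling a real codebase.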
Note that management frequently never makes it beyond tutorial/interview-question fodder in terms of the technical side of their teams, and you get to see how they might tank their companies because the LLMs “interview well”.
I am a fraud. I am a fake. I am a joke… I am a numbskull. I am a dunderhead. I am a half-wit. I am a nitwit. I am a dimwit. I am a bonehead.
Me every workday
I can picture some random band from the 2000s with these lyrics
Oh, I got that plus and minus the wrong way round… I am a genius again.
I was making text-based RPGs in QBasic at 12. You’re telling me I’m smarter than AI?
High five, me too!
At that age I also used to do speed run little programs on the display computers in department stores. I’d write a little prompt welcoming a shopper and ask them their name. Then a response that echoed back their name in some way. If I was in a good mood it was “Hi [name]!”. If I was in a snarky mood it was “Fuck off [name]!” The goal was to write it in about 30 seconds, before one of the associates came over to see what I was doing.
I used to do that with HTML, make a fake little website and open it.
sigh yes, you’re smarter than the bingo cage machine.
Oh…thank fuck…was worried for a minute there!
Don’t mention it! I’m glad I could help you with that.
I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.
/j, obviously. I hope.
I am here to assist you.
Can you jump in the lake for me? Thanks in advance.
Never can tell these days
Hopefully yes, AI is not smart.
I did a Dr. Mario clone around that age. I had an old Amstrad CPC I had grown up with, typing in listings of BASIC programs and trying to make my own. I think this was the only functional game I could finish, but it worked.
Speed was tied to the CPU; I had no idea how to “slow down” the game other than making it do useless for-loops of varying sizes… Max speed, which was about comparable to Game Boy Hi speed, was just the game running as fast as it could. Probably not efficient code at all.
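The standard fix for that CPU-tied speed problem is to pace the loop by wall-clock time instead of burning cycles in empty loops. A minimal sketch in Python (the game logic itself is a placeholder comment):

```python
import time

TICK = 1 / 60  # target: 60 logic updates per second, regardless of CPU speed

def run(frames: int) -> float:
    """Run `frames` fixed-rate updates; return elapsed wall-clock seconds."""
    start = time.monotonic()
    next_tick = start
    for _ in range(frames):
        # update_game() would go here; on the CPC this was the busy for-loop
        next_tick += TICK
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)  # yield instead of spinning useless loops
    return time.monotonic() - start
```

Tracking `next_tick` instead of sleeping a fixed amount also keeps the rate steady when an update occasionally runs long, which is why this pattern shows up in most game loops today.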
Ha, computer bro upvote for you.
I learned programming with my Amstrad CPC (6128!) manual. Some of it I did not understand at the time, especially the stuff about CP/M and the wizardry with poke. But the BASIC, that worked very well. Solid introduction to core concepts that didn’t really change much, really. We only expanded (a lot) over them.
6128 too, with the disk drive. I wish I still had that thing. Drive stopped functioning, and we got rid of it. Had I known back then that we apparently just needed to replace a freaking rubber band…
That’s pretty rad, ngl
Me and my friend used to make them all the time :] I also went to summer computer camp for BASIC on old-school Radio Shack computers :3
Yes
Smarter than MI as in My Intelligence, definitely.
Turns out the probabilistic generator hasn’t grasped logic, and adaptable multi-variable code isn’t just a matter of context and syntax; you actually have to understand the desired outcome precisely, in a goal-oriented way, not just in a “this is probably what comes next” kind of way.
Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.
Could an AI use another AI if it found it better for a given task?
The overall interface can, which leads to fun results.
Prompt for image generation and you have one model doing the text and a different model doing the image generation. The text model pretends it is generating the image but has no idea what that would look like, so you can make the text and image interaction make no sense, or it will do so all on its own. Have it generate an image, then lie to it about the image it generated, and watch it reveal it has no idea what picture was ever produced, all the while pretending it does, without ever explaining that it’s actually delegating the image. It just lies and says “I” am correcting that for you. Basically talking like an executive at a company, which helps explain why so many executives are true believers.
A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after LLM techniques to normalize the math.
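A toy sketch of that kind of routing: a trivial parse check stands in for the real classifier, pure arithmetic goes to an exact evaluator, and everything else would be handed to the LLM (stubbed out here). The `route` function and its string outputs are invented for illustration.

```python
import ast
import operator

# Map supported AST operators to exact arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_math(node):
    """Evaluate a parsed expression containing only numbers and + - * /."""
    if isinstance(node, ast.Expression):
        return eval_math(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_math(node.left), eval_math(node.right))
    raise ValueError("not pure arithmetic")

def route(query: str) -> str:
    """Send arithmetic to the math engine, everything else to the LLM stub."""
    try:
        tree = ast.parse(query, mode="eval")
        return f"math engine: {eval_math(tree)}"
    except (SyntaxError, ValueError):
        return "llm: " + query  # a real system would call the language model here

print(route("2 * (3 + 4)"))
print(route("why is the sky blue"))
```

Real ensembles do the same thing with far fancier triggers, but the effect the comment describes is identical: the chat model takes credit for an answer another engine produced.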
Yes, and this is pretty common with tools like Aider: one LLM plays the architect, another writes the code.
Claude code now has sub agents which work the same way, but only use Claude models.
I always hear people saying Gemini is the best model and every time I try it it’s… not useful.
Even as code autocomplete I rarely accept any suggestions. Google has a number of features in Google cloud where Gemini can auto generate things and those are also pretty terrible.
I don’t know anyone in the Valley who considers Gemini to be the best for code. Anthropic has been leading the pack over the past year, and as a result, a lot of the most popular development and prototyping tools have been hitching their wagons to Claude models.
I imagine there are some things the model excels at, but for copywriting, code, image gen, and data vis, Google is not my first choice.
Google is the “it’s free with G suite” choice.
There’s no frontier where I choose Gemini except when it’s the only option, or I need to be price sensitive through the API
Interesting thing is that GPT 5 looks pretty price competitive with . It looks like they’re probably running at a loss to try to capture market share.
I think Google’s TPU strategy will let them go much cheaper than other providers, but it’s impossible to tell how long the TPUs last and how long it takes to pay them off.
I have not tested GPT5 thoroughly yet
deleted by creator
Wow maybe AGI is possible