Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
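To make the "advanced autocomplete" framing concrete, here's a toy bigram predictor — my own minimal sketch for intuition, not how actual LLMs work (they use neural networks, not count tables):

```python
# A bigram "text predictor": predicts the next word purely from counts
# of which word followed which in its training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # "cat" -- seen most often after "the"
print(predict("mat"))   # "the"
print(predict("fish"))  # None -- nothing ever followed "fish" in training
```

The `None` case is the "script runs out" failure mode described above: if the pattern wasn't in the data, a pure lookup has nothing to say. The debate below is essentially about whether large neural models are just a much bigger version of this table or something qualitatively different.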
Most call centers have multi-level teams where the lower tiers are just reading off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s gonna be the same for a lot of other jobs as well, and many people will lose jobs.
I know how AI works inside. AI isn’t going to completely replace jobs like that, yes, but it will also be the end of said cheap Indian call centers.
Who also don’t have the information or data that I need.
It isn’t going to completely replace whole business departments, only 90% of them, right now.
In five years it’s going to be 100%.
I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted services are slower than AI and often staffed by people with stronger accents, making it more difficult for the customer to understand the script entry being read back to them, which leads to more frustration.
If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Oftentimes I’ve noticed ChatGPT not learning from the current conversation (if you ask it about this, it will deny doing it). In that situation it just regurgitates the same three scripts back at me when I tell it it’s wrong. For me this isn’t so bad, since I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
Check out this recent paper that finds some evidence that LLMs aren’t just stochastic parrots. They actually develop internal models of things.
This isn’t true, provided that their dataset is large enough. The models are stochastic, and with a large enough number of parameters and a large enough training set, they can generate truly unique content. For example, I strongly doubt you’d be able to find anything remotely resembling the following anywhere, ever (look up what the movie is about, and watch it, to understand the absurdity of my request), and yet it was generated by ChatGPT:
https://chat.openai.com/share/803f2633-8682-45f0-b999-3bede5c02c21
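A quick sketch of what "the models are stochastic" means in practice — the token distribution here is made up for illustration; a real model computes it with a neural network, but the sampling step is the same idea:

```python
# Instead of always picking the single most likely next token (greedy
# decoding), generation samples from a probability distribution, which is
# why output varies run to run and can combine things never seen verbatim.
import math
import random

random.seed(0)

# Hypothetical scores ("logits") for candidate next tokens.
logits = {"movie": 2.0, "poem": 1.5, "recipe": 1.0, "sonnet": 0.5}

def sample(logits, temperature=1.0):
    """Softmax over logits at the given temperature, then draw one token."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# At very low temperature this collapses to the top choice ("movie");
# at temperature 1.0 the picks vary from run to run.
print([sample(logits) for _ in range(5)])
```

Higher temperature flattens the distribution and makes unlikely continuations more probable, which is one lever behind the "truly unique content" point above.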
If you read interviews from the development of these models, you’ll see the creators saying what is clear from the above link: with a large enough training set, these models start to learn something about the organization of language itself, and how to generate novel content.
The model architecture that these things are based on tries to replicate how our brains work, and the process by which they learn language isn’t unlike how we learn language.
Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up, it has to go to a human, but let’s be honest: those calls are usually going to a manager anyway, so I’m not seeing your argument. An escalation is an escalation. The article itself even says it’s not a literal 100% replacement of humans.
You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.