AI is overhyped and unreliable - Goldman Sachs
https://www.404media.co/goldman-sachs-ai-is-overhyped-wildly-expensive-and-unreliable/
“Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks”
I was fully on board until, like, a year ago. But the more I used it, the more obviously it came undone.
I initially felt like it could really help with programming. And it looked like it, too - when you fed it toy problems where you don’t really care how the solution looks, as long as it’s somewhat OK. But once you start giving it constraints that stem from a real project, it just stops being useful. It ignores constraints (use this library, do not make additional queries, …), and when you point out its mistake and ask it to do better it goes “oh, sorry! Here, let me do the same thing again, with the same error!”.
If you’re working in a less common language, it even dreams up non-existing syntax.
Even the one thing it should be good at - plain old language - it sucks ass at. It’s become so easy to spot LLM garbage, just because of its style.
Worse, if you ask it to proofread a text for spelling and grammar mistakes, but explicitly tell it not to change the wording or style, there’s about a 50/50 chance it will either …
I could honestly go on and on, but what it boils down to is this: it can string together words that make it sound like it knows what it is doing, but it is just that - a facade. And it looks like for more and more people, the spell is finally breaking.