These LLMs are going to keep hallucinating… i.e., acting like chat bots, until everyone understands not to trust them. Like Uncle Jimmy, who makes shit up all the time.
Why so negative about large language models?
No issue with the models themselves, just that people attribute intelligence to them when they're just chat bots. And then they run into these fun situations.
50 emails/day × 5 days × $40 a month = $10,000 a month in lost sales—and that was only from people who cared enough to complain.
Multiply that by 20, because roughly, for each complainer, you'll get 19 people simply thinking "you know what, screw it" and never voicing their discontent. $200k a month in lost sales.
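The estimate above can be sketched as a quick calculation. All figures are the thread's assumptions (complaint volume, per-customer revenue, silent-churner multiplier), not measured data:

```python
# Back-of-envelope for the lost-sales estimate; inputs are assumptions.
complaints_per_day = 50      # complaint emails per day
days = 5                     # days the problem ran
revenue_per_customer = 40    # $ per lost customer per month

# Lost revenue from customers who actually complained.
complainers_lost = complaints_per_day * days * revenue_per_customer
print(f"${complainers_lost:,}/month from complainers")  # $10,000/month

# Rule of thumb: ~19 silent churners for every one complainer, so x20 total.
silent_multiplier = 20
total_lost = complainers_lost * silent_multiplier
print(f"${total_lost:,}/month in total")  # $200,000/month
```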
And… frankly? They deserve the losses.
Pro-tip: you should "trust" the output of a large language model less than you'd trust the village idiot. Even when the latter is drunk.