I’m here, I’m not young, I’m tech inclined.
Smart? 🤷‍♂️
I’m just sitting here wondering where the fucking cabbage came from.
Whatever. I’m pretty safe; I do IT, and LLMs are interesting, but they’re shit at plugging in stuff like power cables and Ethernet, so I’m safe for now.
When the “AI” can set up the computers, from unboxing to a fully working desktop, I’ll probably be dead, so I equally won’t care. It’s neat, but hardly a replacement for a person at the moment. I see the biggest opportunity with AI as personal assistants, reminding you of shit, helping you draft emails and messages, etc… In the end you have to more or less sign off on it and submit that stuff. AI just does the finicky little stuff that all of us have to do all the time and not much else.
… This comment was not generated, in whole or in part, by AI.
The setup is similar to this well-known puzzle: https://en.wikipedia.org/wiki/Wolf,_goat_and_cabbage_problem
It was probably trained on this puzzle thousands of times. There are problem-solving benchmarks for LLMs, and LLMs are probably over-trained on puzzles to get their scores up. When asked to solve a “puzzle” that looks very similar to one it’s seen many times before, it assumes the solution can’t be simple, so it gets tripped up. Kinda like people getting tripped up by “trick questions.”
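For anyone who hasn’t seen it: the canonical puzzle above is tiny. Here’s a minimal brute-force sketch (plain breadth-first search; the names and structure are mine, not from any benchmark or paper) that finds the standard seven-crossing solution:

```python
from collections import deque

# Classic wolf/goat/cabbage river crossing, solved by brute-force breadth-first
# search. State = (items still on the left bank, which bank the farmer is on).
ITEMS = frozenset({"wolf", "goat", "cabbage"})

def unsafe(bank):
    # A bank without the farmer is unsafe if wolf+goat or goat+cabbage are left together.
    return {"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank

def solve():
    start = (ITEMS, "left")          # everything (and the farmer) starts on the left bank
    goal = (frozenset(), "right")    # everything ends up on the right bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, side), path = queue.popleft()
        if (left, side) == goal:
            return path
        here = left if side == "left" else ITEMS - left
        # The farmer crosses alone (None) or with one item from his current bank.
        for cargo in [None, *here]:
            new_left = set(left)
            if cargo is not None:
                if side == "left":
                    new_left.discard(cargo)
                else:
                    new_left.add(cargo)
            new_side = "right" if side == "left" else "left"
            # The bank he just left must stay safe without him.
            behind = new_left if new_side == "right" else ITEMS - new_left
            if unsafe(behind):
                continue
            state = (frozenset(new_left), new_side)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", new_side)]))

for cargo, direction in solve():
    print(f"take {cargo} to the {direction} bank")
```

Point being, the original has a state space you can exhaust in a few lines; the variants that trip models up tend to be the ones where the “trick” is that there’s no trick.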
Mustard. The cabbage came from mustard.