It’s impressive how well ChatGPT hallucinates citations.
I was asking it about a field of law I happen to know fairly well (as a layman), and it invented entire sections of statutes that don’t exist to support its conclusions.
Large Language Models like ChatGPT are, in my view, verisimilitude engines. Verisimilitude is the appearance of being true or real. You’ll note, however, that it is not being true or real, simply appearing so.
It’s trying to produce an answer that looks right. If it happens to know the actual answer, that’s what it’ll go with; if it doesn’t, it’ll go with what a correct answer might statistically look like. In fields with actual right and wrong answers, like law, science, and technology, its tendency to make things up is genuinely harmful if the person using the tool doesn’t know it will lie.
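To make “what a correct answer might statistically look like” concrete, here’s a toy sketch in Python of next-token sampling. The prompt, citation format, and probabilities are all invented for illustration; a real model scores every token in its vocabulary, but the selection criterion is the same: probability, not truth.

    import random

    # A made-up next-token distribution for a prompt ending in "...18 U.S.C. §".
    # The section numbers and weights are invented for illustration; a real
    # model assigns a probability to every token in its vocabulary,
    # conditioned on everything generated so far.
    next_token_probs = {
        "1030": 0.40,
        "1346": 0.30,
        "1031": 0.20,
        "1038": 0.10,
    }

    def sample(dist):
        # Choose a token in proportion to its probability. Plausibility is
        # the only signal; nothing here (or in the model itself) checks
        # whether the resulting citation actually exists.
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print("...prohibited under 18 U.S.C. §", sample(next_token_probs))

Whatever comes out looks exactly like a real citation, because it was assembled from the same statistical patterns as real citations.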
Or, they’ve done it before and gotten away with it.