• Natanael@slrpnk.net · 5 months ago

      The problem is that they hold many different internal concepts with conflicting information, have no mechanism for determining truthfulness or accuracy or for pruning bad information, and sample from all of it more or less at random when answering. To make "sample randomly" concrete, see the sketch below.
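      A minimal sketch of temperature-based softmax sampling over next-token logits, which is roughly the mechanism the comment is pointing at. The tokens and logit values here are invented purely for illustration:

      ```python
      import math
      import random

      def softmax(logits, temperature=1.0):
          # Scale logits by temperature, then normalize into probabilities.
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          return [e / total for e in exps]

      # Hypothetical next-token candidates reflecting conflicting "facts"
      # the model absorbed from training data; logits are made up.
      tokens = ["safe", "toxic", "edible", "poisonous"]
      logits = [2.1, 1.9, 1.5, 1.8]

      probs = softmax(logits, temperature=1.0)
      choice = random.choices(tokens, weights=probs, k=1)[0]
      print(list(zip(tokens, [round(p, 2) for p in probs])), "->", choice)
      ```

      Nothing in that draw checks which candidate is actually true; a wrong answer with a high enough logit simply gets picked its proportional share of the time.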

    • Deconceptualist@lemm.ee · 5 months ago

      Ok, maybe there’s a possibility someday with that approach. But that doesn’t reflect my understanding or (limited) experience with the major LLMs (ChatGPT, Gemini) out in the wild today. Right now they confidently advise ingesting poison because it’s grammatically sound and they found it on some BS Facebook post.

      If ML engineers can design an internal concept of what constitutes valid information (a hard problem for humans, let alone machines), then maybe there’s hope.