• 0 Posts
  • 1 Comment
Joined 1 year ago
Cake day: June 10, 2023

  • @marmo7ade

    There are at least two far more likely causes for this than politics: source bias and PR considerations.

    Getting better and more accurate responses when asking in English about Europe or other English-speaking countries should be expected. When training any LLM that's meant to work in English, you train it on English sources, and English sources contain far more material about European countries than African ones. Since there are more sources discussing Europe, the model generates better responses to prompts involving Europe.

    The more likely explanation, though, isn't politics but that companies want to make money. If ChatGPT or any other AI says a bunch of racist stuff, it creates PR problems, and PR problems can cause investors to bail. Since LLMs don't really understand what they're saying, the developers can't take a very nuanced approach, and we're left with blunt bans. If people hadn't tried so hard to get it to say outrageous things, the restrictions would likely be less stringent.

    @Razgriz @breadsmasher