• Toribor@corndog.social · 9 months ago

    It’s going to take real work to train models that don’t just reflect our own biases, but this seems like a really sloppy and ineffective way to go about it.

    • Brownian Motion@lemmy.world · 9 months ago

      I agree, it will take a lot of work, and I am all for balance where an AI prompt is ambiguous and doesn’t specify anything in particular: the output could be male, female, Asian, whatever. This is where AI needs to be diverse, not stereotypical.

      But if your prompt is to “depict a male king of the UK”, there should be no ambiguity in the result. The sheer ignorance of Google’s approach, blatantly ignoring or overriding all the historical data the AI has presumably been trained on, is just agenda-pushing, and of little help to anyone. AI is supposed to be helpful, not a bouncer, and it must not be able to override the user’s personal choices (unless those fall outside the law).

      It has a long way to go before it has proper practical use.