Apparently, stealing other people’s work to create a product for money is now “fair use” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • intensely_human@lemm.ee · 11 months ago

    ChatGPT has been enormously useful to me over the last six months. No idea where you’re getting this notion it isn’t useful.

    • Bilb!A · 11 months ago

      People pretending it’s not useful and/or not improving all the time are living in their own worlds. I think you can argue the legality and the ethics, but any anti-AI position based on low-quality output (“it can’t even do hands!”) has a short shelf life.

      • intensely_human@lemm.ee · 10 months ago

        I think it’s fear. Artificial Intelligence, like any other alien contact scenario, is terrifying. It’s the end of the world as we know it. Now all the kids who always had internet get to learn what it’s like for the world of their childhood to disappear. My generation’s world died when the internet got big. Now everyone’s world is dying. Babies born today won’t know a world without thinking machines, and the people in their twenties now won’t be able to explain the feeling when those babies have grown up.

        Anyway. It’s terrifying because intelligence is a form of power, and we are seeing the emergence of minds far more powerful than our own. Anyone who doesn’t understand why vastly superior power is terrifying is probably just naive as a result of never having been under the power of a sadistic person, never having been tormented by someone they couldn’t escape.

        Aliens are also super-powerful beings whom we don’t understand. I mean, AI is a type of alien in that sense.

        Aliens are like gods, but minus the part where the gods are family, and might fuck with us but will ultimately treat us like kin. We aren’t aliens’ kin. At all. And, we don’t know their psychology.

        The only hope is that the formation of gods generally follows some kind of rule where the only way a god can emerge is if it’s benevolent. Like to grow past certain amounts of power it has to be nice.

        But that’s wishful thinking. And we’re mostly atheistic as a society. So we’re not willing to have faith that there is a correlation between power and goodness.

        That’s what a belief in God is: the belief that the patterns that carry a person upward in the levels of existence are also patterns that benefit those on the lower levels.

        But we don’t believe that, as a society. Our unconscious assumption is that the mean ugly people in our lives could just as easily get up there above us.

        So when we contemplate the possibility of beings far, far more powerful than ourselves, our subconscious sees that as terrifying — so terrifying that our conscious mind won’t allow the terror to enter it.

        Denying the existence and presence of AI in our midst, in the year 2024, is motivated reasoning trying to keep us from being overwhelmed with fear.

        Anyway, I hope to write that better next time.