• Blaster M@lemmy.world · 3 months ago

    So, you want an LLM trained to respond like a person from ~180 years ago, with the strong religious and cultural biases of a time so far removed from ours that you would be offended by its answers, and with no knowledge of anything from the past 100+ years? Could you actually use such a thing in daily life?

    Consider that even school textbooks are copyrighted, and that people writing open source projects are sometimes offended by their OPEN SOURCE CODE being used for AI training. Take the full “no offense taken” approach and you basically cut away the model’s ability to learn basic human knowledge, or even to do the thing it’s actually “good” at.

    The other part of the problem is legal: making it forbidden to train on copyrighted data opens a huge window for companies with aggressive copyright enforcement to effectively end all fan works, or even to forbid people from making anything with a hint that the concept was conceived after once vaguely hearing about or seeing a copyrighted work. How do you legally prove you have never been exposed to, even briefly, and thus never been influenced by, something that is memetically and culturally everywhere?

    As for AI art and music, there are open source PD/CC-only models out there, what I call “vegan models”: CommonCanvas, for instance. The problem with these models is the lack of subject material (only 10 million images, when there are far more than 10 million things to look at in the world, before even considering ways to combine them), and the lack of interest in doing the proper legwork, good image tagging, to make sure the AI learns properly, which can take years to complete. Training AI is very expensive and time-consuming (especially the captioning, because it is a human task!), and if you don’t have a literal supercomputer you can run for several months at tens of thousands of dollars per month, you aren’t going to make even a small model work in any reasonable amount of time. What makes the big art models good at what they do is both the size of the dataset and the captioning. You need a dataset in the billions.
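    A rough back-of-envelope sketch of why the captioning part alone is so expensive (the seconds-per-caption and working-hours figures below are illustrative assumptions, not numbers from any real project):

    ```python
    # Rough estimate of the human labor needed to caption a large image
    # dataset. All numbers are illustrative assumptions for this sketch.

    images = 1_000_000_000                # a billion-scale dataset, as above
    seconds_per_caption = 20              # assumed time for one careful caption
    seconds_per_work_year = 2000 * 3600   # ~2000 working hours per year

    person_years = images * seconds_per_caption / seconds_per_work_year
    print(f"{person_years:,.0f} person-years of captioning work")
    ```

    Even with generous assumptions, hand-captioning at that scale is thousands of person-years of work, which is why it gets automated or crowdsourced in practice.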

    For example: if you have never seen any kind of cat, no one tells you what a cat looks like, no one tells you how biology works, and you get a single side-on image of a lion and are told “that is a cat”, will you be able to draw it from every perspective angle? No, you won’t. You can guess and infer, but it may not be right. You have the advantage of many, many more data points to draw from in your mind: the human advantage. These AI models don’t have that. If you want an AI to draw a lion from every perspective, you need to show it lion images from every perspective so it knows what one looks like.

    As for AI “tracing”, that’s not accurate either. AI models do not normally contain training image data in reproducible form in any way. They contain matrices of weights that mathematically describe the probability of a certain shape appearing in correlation with other concepts alongside it. Take a single one of these “neuron” matrices and graph it, and you get a mess of shapes and curves that vaguely resembles psychedelic abstract art of different parts of that concept… and sometimes other concepts too, because the model can and often does reuse the same “neuron” for logically unrelated concepts that nevertheless make sense to something only interested in defining shapes.
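    One way to picture a “neuron” as a shape detector rather than a stored image is a toy filter over a tiny image patch (the 3×3 weights and patches below are invented for illustration; real models learn millions of much larger filters):

    ```python
    import numpy as np

    # A toy "neuron": a weight matrix over a 3x3 patch that responds to a
    # main-diagonal line. Invented weights, purely for illustration.
    neuron = np.array([[ 1.0, 0.0, -1.0],
                       [ 0.0, 1.0,  0.0],
                       [-1.0, 0.0,  1.0]])

    # Two patches from logically unrelated "concepts" that happen to share
    # the same diagonal shape: a cat's whisker and a rooftop edge.
    whisker = np.eye(3)          # diagonal line of bright pixels
    rooftop = np.eye(3)          # same geometry, different subject

    # A patch with the opposite geometry barely excites the neuron.
    antidiag = np.fliplr(np.eye(3))

    # Activation = sum of weights * pixels. The neuron fires equally for
    # both diagonal patches: it encodes a shape, not a subject.
    print(np.sum(neuron * whisker))   # strong response
    print(np.sum(neuron * rooftop))   # same strong response
    print(np.sum(neuron * antidiag))  # weak/negative response
    ```

    The same weights fire for the whisker and the rooftop, which is exactly the “one neuron, several unrelated concepts” behavior described above.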

    Most importantly, AI models do not use the binary logic most people associate with computers. There is no definitive yes/no on anything. It is a floating point number, a varying scale of “maybe”, which lets the model combine concepts and be nuanced without being rigid. This is what makes the AI able to do more than be a tracing machine.
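    That “varying scale of maybe” is just a real-valued activation. A minimal sketch (the sigmoid is one common squashing function, not a claim about any particular model):

    ```python
    import math

    def sigmoid(x: float) -> float:
        """Squash any real number into a graded (0, 1) 'maybe' score."""
        return 1.0 / (1.0 + math.exp(-x))

    # Unlike a boolean, the activation varies smoothly with the evidence,
    # so partially matching concepts blend instead of being rejected outright.
    for evidence in (-4.0, -1.0, 0.0, 1.0, 4.0):
        print(f"evidence {evidence:+.1f} -> activation {sigmoid(evidence):.3f}")
    ```

    Strong evidence pushes the score toward 0 or 1, but anything in between stays usable, which is what lets concepts mix rather than switch on and off.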

    What this really comes down to is the human factor: the primal fear of “the machine”, of “something greater” able to outcompete the human. Media has given us the concept of rogue AI destroying civilization since the dawn of the machine age, and it is thoroughly ingrained in our culture that smart machines = evil, even though reality is nowhere near that yet. People forget how much support is required to keep a machine going. They don’t heal themselves or magically keep running forever.

    • mriormro@lemmy.world · 3 months ago

      If the only way your product can be useful is by stealing other people’s work, then it’s not a well-made product.

      • Blaster M@lemmy.world · 3 months ago

        Stealing implies taking something away from someone… if something is freely given, how can it be stolen?