• leaskovski@kbin.social · 1 year ago

        Yo dawg, I heard you like ram with your processors, so we put processors with your ram so you can process your ram much faster and have more ram

    • eltimablo@kbin.social · 1 year ago

      Shit, soon we’ll have to put ram in the processor in the ram, which will need its own processor to go in the ram in the processor in the ram…

  • wolfshadowheart@kbin.social · 1 year ago

    Yup, seems like the move. Anything that offloads the work of loading and sifting through models will greatly increase their efficiency. MythicAI has already built analog chips that run models, and they work, but they're pretty specific to its use case (MythicAI does AI recognition on real-time video; surveillance, basically). Still, the approach should be fairly easy to adapt to other kinds of models; it just needs to be popularized.

    Analog looks really useful since it's built for highly specific math, and if we could pair a dedicated RAM module with an analog processor, the VRAM needed for AI could theoretically plummet. Right now we're just brute-forcing it with Tensor cores: load a model, have its output delivered, and that's 400 watts for the duration… MythicAI's analog chips use only 3.5 watts and deliver the results more quickly.
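
    To make that concrete, here's a rough sketch (not MythicAI's actual stack, just an illustration under assumed layer sizes): the core operation an analog chip accelerates is the matrix-vector multiply, done as currents summing on wires instead of digital multiply-accumulates, and the power figures below are simply the ones quoted above.

    ```python
    import numpy as np

    def analog_matvec(weights, activations):
        # In an analog crossbar, each weight is stored as a conductance and each
        # activation is applied as a voltage; the current on each output column
        # is the dot product. Digitally we just model the same result.
        return weights @ activations

    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 512))   # hypothetical layer weights
    x = rng.standard_normal(512)          # hypothetical input activations
    y = analog_matvec(W, x)               # one layer's worth of inference math

    # Back-of-envelope comparison using the power draws quoted above.
    gpu_watts, analog_watts = 400.0, 3.5
    print(f"Output vector length: {y.shape[0]}")
    print(f"Roughly {gpu_watts / analog_watts:.0f}x less power on the analog part")
    ```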

    Anyway, the future is looking promising. Current implementations of AI are mediocre, but their biggest hurdle has been the sheer amount of energy they take. The benefits go up immensely if the energy cost comes down, and the quality of AI is only going to get better from here. Rather than shun or abhor the technology, we should probably be looking for ways to embrace it without it needing its own electric grid.