• NeoNachtwaechter@lemmy.world

    Laughable, how they put it.

    > data protection agencies in 11 European countries – and those agencies, led by Ireland, telling the Facebook giant to scrap the slurp.

    They are making such a pathetic show about their own decision to observe the law.

    And this is a law that is clearly and openly readable. You don’t need legal experts to understand the basics, and you don’t need any agencies telling you that you must observe it.

    They constantly give the impression that they are a gang of professional outlaws: only when somebody catches them red-handed are they able to make a decision - one decision for one case, exceptionally - to behave properly.

    • octopus_ink@lemmy.ml

      They are the worst of the worst, and I will never use an instance that voluntarily federates with Threads. I respect MS more than Meta, and that’s a pretty incredible feat on the part of Meta.

    • sunzu@kbin.run

      And sure as fuck, corrupt Ireland, host to US big tech, should NOT be the one leading anything privacy-related… It is a charade IMHO

    • brsrklf@jlai.lu

      EU cultural values include resisting corporations doing whatever they want with our data. Let’s see Meta try to reflect those.

      • FaceDeer@fedia.io

        So you want Meta’s AI to have values that don’t include resisting corporations doing whatever they want with your data?

        This is a seriously double-edged sword here. The training data of these AIs is what gives these AIs their capabilities and biases.

        • brsrklf@jlai.lu

          Anyway, no matter which parts of the world it’s trained on, we’re talking about 2024 Facebook content. We’ve seen what Reddit does to an AI.

          Can’t wait for Meta’s cultured AI to share its wisdom with us.

          • FaceDeer@fedia.io

            Reddit is actually extremely good for AI. It’s a vast trove of examples of people talking to each other.

            When it comes to factual data there are better sources, sure, but factual data has never been the key deficiency of AI. We’ve long had search engines for that kind of thing. What AIs had trouble with was human interaction, which is what Reddit and Facebook are all about. These datasets train the AI to be able to communicate.
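
            A minimal sketch, purely illustrative, of the point above: conversational data like forum threads is usually flattened into prompt/response pairs before it can train a model to communicate. The thread layout and field names below are assumptions invented for this example, not any platform’s or Meta’s actual format.

            ```python
            # Illustrative only: turn a nested public comment thread into
            # (prompt, response) training pairs. The dict layout is made up
            # for this sketch and does not reflect any real platform's data.

            def thread_to_pairs(thread):
                """Yield (prompt, response) pairs from a nested comment thread."""
                parent_text = thread["text"]
                for reply in thread.get("replies", []):
                    yield (parent_text, reply["text"])  # a reply "answers" its parent
                    yield from thread_to_pairs(reply)   # recurse into deeper replies

            example_thread = {
                "text": "What AIs had trouble with was human interaction.",
                "replies": [
                    {"text": "Right, and that is what forum data is full of.", "replies": []},
                ],
            }

            for prompt, response in thread_to_pairs(example_thread):
                print(f"PROMPT: {prompt}\nRESPONSE: {response}\n")
            ```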

            If the Fediverse were larger, we’d be a significant source of AI training material too. I’d be surprised if it’s not being collected already.

              • FaceDeer@fedia.io

                The “glue on pizza” thing wasn’t a result of the AI’s training; the AI was working fine. It was the search result that gave it a goofy answer to summarize.

                The problem here is that it seems people don’t really understand what goes into training an LLM or how the training data is used.
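
                A rough sketch of the retrieval-then-summarize flow being described here, just to show why a goofy top search result yields a goofy answer even when the model behaves: both functions below are stand-ins invented for this illustration, not a real search or LLM API.

                ```python
                # Stand-in retrieval-augmented pipeline: the "model" only condenses
                # whatever context the search step hands it, so a joke top result
                # produces a joke answer regardless of how the model was trained.

                def search(query: str) -> str:
                    # Pretend search index whose top hit is an old shitpost.
                    return "To keep cheese from sliding off pizza, mix some glue into the sauce."

                def summarize(question: str, context: str) -> str:
                    # Stand-in for an LLM call that faithfully condenses the given context.
                    return f"Q: {question}\nA (based on the top result): {context}"

                question = "How do I keep cheese on my pizza?"
                print(summarize(question, search(question)))
                # The bad output traces back to the retrieved context, not the training.
                ```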

        • lemmyvore@feddit.nl

          Training AI is not some noble endeavor that must be done no matter what. It’s a commercial grab that needs to balance utility with consumer rights.

      • brbposting@sh.itjust.works

        I would expect plenty of deeply-held values:

        Rule 34

        Shitposts

        CSAM

        Disingenuous partisan mis/disinformation

        Worst hot takes imaginable

  • dmtalon@infosec.pub

    It’s crazy how much further ahead Europe is in privacy protection.

    All these companies need to be held responsible for what they do with our data, and for what it costs them when they lose control of it. Either figure out how to safeguard it or suffer painful consequences. Or perhaps only store what’s necessary for us to interact.

    • Grippler@feddit.dk

      But then again, we also have pretty much every EU group pushing for super invasive chat control. It’s ridiculous how schizophrenic they are on the subject of digital privacy.

      • sugar_in_your_tea@sh.itjust.works

        Yup, the EU isn’t a role model for the world or anything. They have some good laws, and those should be replicated elsewhere, but don’t assume that just because they got a few things right, they don’t mess up in other really important ways.

        • Echo Dot@feddit.uk

          For some reason a lot of parts of Europe seem to want to elect hard-right, borderline neo-Nazis. In many cases, not even borderline.

          God knows what the appeal is, since the hard right aren’t particularly interested in protecting their own, more interested in protecting their wallets. Not a concern that the vast majority of the populace are really going to empathise with.

            • sugar_in_your_tea@sh.itjust.works

              That’s apparently a thing everywhere.

              I’m in the US, and people here just seem to be okay with the TSA, NSA, CBP, etc all going through your stuff. I was complaining about BS stoplight cameras on a trip to another state, and my parents and cousin seemed to want more of them, despite them largely just harassing law-abiding citizens by shortening yellow-light durations and ticketing people for pulling too far forward… They also seem interested in facial recognition in stores and whatnot.

              I don’t get it. If they did an ounce of research, they’d see that these don’t actually reduce crime or protect anyone, they just drive revenue and harass people. I mention “privacy” and they pull the “nothing to hide” argument.

              People seem to want their privacy violated. I just don’t get it.

      • sunzu@kbin.run

        They are telling you what they care about, take notice.

        I am sure once they get a local AI grifter, they will change their tune too.

      • lemmyvore@feddit.nl

        It’s not the same groups and entities pushing these things. It looks contradictory because it all ends up submitted to the same legislative bodies but that’s par for the course in a functional democracy.

      • themurphy@lemmy.ml

        Yeah, it seems weird, but there are also points where the two aren’t related at all.

        One is a company using user data for a purpose they never said it would be used for, and illegally trying to do it anyway. They literally sell the data by making a product out of it. It’s also a private company with stakeholders.

        The other is the EU scanning messages, but not selling them.

        So it’s basically about who you trust.

  • 🦄🦄🦄@feddit.de

    Don’t worry, maybe Meta can eventually just buy the inevitable leaks resulting from the general chat surveillance the EU so vehemently tries to push through.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    And while this climb down has been cheered by privacy advocates, Meta called it “a step backwards for European innovation” that will cause “further delays bringing the benefits of AI to people in Europe.”

    “We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram  — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March,” the social network said in a statement on Friday.

    Without a steady diet of EU information, Meta’s AI systems won’t be able to “accurately understand important regional languages, cultures or trending topics on social media,” the American goliath said at the time.

    “In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” Almond continued.

    “We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”

    Privacy group noyb had filed complaints with various European DPAs about Meta’s LLM training plans, and its chair Max Schrems on Friday said while the organization welcomed the news, it “will monitor this closely.”


    The original article contains 589 words, the summary contains 231 words. Saved 61%. I’m a bot and I’m open source!