Instances, of course, have some bot-mitigation tools they can use to prevent automated signups, etc.

However, what’s stopping bots from pretending to be their own brand new instance, and publishing their votes/spam to other instances?

Couldn’t I just spin up a Python script to barrage this post, for example, with upvotes?
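For concreteness, here is roughly what such a script would have to produce. This is a sketch, not a working exploit; the spam domain and URLs are hypothetical:

```python
import json

# A sketch of the ActivityPub "Like" activity a spam script would have to
# POST to another instance's inbox. The domain and URLs are hypothetical.
# Crucially, real deliveries must also carry an HTTP Signature tied to the
# actor's published public key, which gives receiving instances something
# to verify (and a domain to block) before counting the vote.
forged_like = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "id": "https://spam-instance.example/activities/1",
    "actor": "https://spam-instance.example/u/bot1",
    "object": "https://programming.dev/post/123456",  # hypothetical post URL
}

print(json.dumps(forged_like, indent=2))
```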

EDIT: Thanks to @Sibbo@sopuli.xyz ‘s answer, I am convinced that federation is NOT inherently susceptible, and effective mitigations can exist. Whether or not they’re implemented is a separate question, but I’m satisfied that it’s achievable. See my comment here: https://programming.dev/comment/313716

  • Sibbo@sopuli.xyz · 6 points · 1 year ago

    Mastodon requires you to have a domain if I remember correctly. Maybe Lemmy has the same? Then it would cost some 10€ to get a new domain each time you get blocked.

    • o_o@programming.dev (OP) · 4 points · 1 year ago

      I see. So each instance in the “fediverse”, whether Kbin, Lemmy, or Mastodon, could have its own rules on what to allow. Those that allow too much and get spammed are likely to lose standing in the community and be defederated by other instances.

      Requiring a domain and having a mechanism to block domains seems like a good approach to start with.

      Thank you! That cleared it up for me.
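      For illustration, the domain-blocklist mechanism described above might boil down to a check like this sketch (not Lemmy’s actual implementation; the blocked domain is hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical per-instance blocklist of misbehaving domains.
BLOCKED_DOMAINS = {"spam-instance.example"}

def accept_activity(actor_url: str) -> bool:
    """Reject federated activities whose actor lives on a blocklisted domain."""
    domain = urlparse(actor_url).hostname
    return domain not in BLOCKED_DOMAINS

print(accept_activity("https://spam-instance.example/u/bot1"))  # False: blocklisted
print(accept_activity("https://sopuli.xyz/u/sibbo"))            # True: allowed
```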

  • terribleplan@lemmy.nrd.li · 3 points · 1 year ago

    Yes. There have been a number of ways people have exploited the inherent flaws/trust in a federated system dating back before Lemmy was even a thing. There is currently a lack of tooling to find misbehaving servers, which is an increasingly big problem. There is kinda inherently more administration and moderation required in a federated system unless you want it to be just islands of (probably large) instances that have similar philosophies and mutual trust.
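    As a sketch of the kind of tooling that’s missing, flagging servers whose federated vote volume is wildly out of line with their peers could start as simply as this (hostnames, counts, and the ratio threshold are all invented for illustration):

```python
from statistics import median

# Hypothetical counts of votes received per sending instance.
votes_per_instance = {
    "sopuli.xyz": 120,
    "lemmy.nrd.li": 95,
    "programming.dev": 140,
    "spam-instance.example": 9000,  # hypothetical misbehaving server
}

def flag_outliers(counts: dict[str, int], ratio: int = 10) -> list[str]:
    """Flag any instance sending more than `ratio` times the median volume."""
    med = median(counts.values())
    return [host for host, n in counts.items() if n > ratio * med]

print(flag_outliers(votes_per_instance))  # ['spam-instance.example']
```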

  • thepiggz@programming.dev · 2 points · 1 year ago

    It seems like you could, so long as this instance federates with other instances without implementing any kind of whitelisting or vetting. I’m just guessing based on various things I’ve read.

    • terribleplan@lemmy.nrd.li · 4 points · 1 year ago

      This is how most instances operate at the moment. Some instances have an extensive blocklist; others have almost none. Whitelisting by default would be a bit antithetical to the idea of a federated social network: any automated system for it could still be abused by the sort of person willing to set up a whole server just for spam, and any manual system means you would probably end up with a bunch of small islands of federation and/or new instances being unable to federate at all.

  • AlternateRoute@lemmy.ca · 2 up / 1 down · 1 year ago

    The more threads I read about both the federation issues (bad instances with bad rules) and the lack of default user-signup validation controls, the more I wonder whether ANY of the design team have ever used web forums (phpBB, SMF, etc.) or managed an email server (no encryption or trust by default, with TLS, SPF, and DMARC layered on after the fact). Those systems have been struggling with spam and bot controls for years, yet ZERO protection was mandated in the spec for this new open system.

    Maybe something could be learned from how blockchain systems (not specifically crypto) are built, where there is federation but distributed cost and tracking of identity. I.e., identity is global rather than managed by an instance, and spamming has a cost. IIRC one of the new registration options adds some cost to registration, but it is still instance-based.

    I can understand the desire to be open, decentralized, and anonymous. However, we still need a way to identify bots across the board and block them if they are bad actors, and instances should have some form of trust built in, even if it is just other instances flagging them as trusted to increase their trust, or something.
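    The peer-vouching idea in that last paragraph could be sketched like this: an unknown instance’s trust grows as already-trusted instances flag it as trustworthy. The instances, seed scores, and damping factor are all invented for illustration:

```python
def trust_score(instance: str,
                endorsements: dict[str, set[str]],
                base_trust: dict[str, float],
                damping: float = 0.5) -> float:
    """Base trust plus a damped share of each endorsing peer's trust."""
    vouchers = endorsements.get(instance, set())
    return base_trust.get(instance, 0.0) + damping * sum(
        base_trust.get(peer, 0.0) for peer in vouchers
    )

base_trust = {"sopuli.xyz": 1.0, "programming.dev": 1.0}
endorsements = {"new-instance.example": {"sopuli.xyz", "programming.dev"}}
print(trust_score("new-instance.example", endorsements, base_trust))  # 1.0
```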