But for myself, the world and humanity were created with free will, and it’s up to us to choose between good and evil.
That’s a terrible take: it implies that if you see something you consider evil, you attribute it to choice, whereas the opposite is generally the case – once individuals have waded through layers of shit conditioning, they are able to make choices that are actually attributable to them and not to society, upbringing, etc., and they very much do not choose evil. They might choose things that are inconvenient to others, or short-sighted, or unwise, but evil? That’s not just a different ballpark, that’s a different game:
As a mark is not set up for the sake of missing the aim, so neither does the nature of evil exist in the world.
In other words: no one willingly chooses imperfection. Minds, life, that would do so, that would use their degrees of freedom like that, would long since have gone the way of the dodo.
I’ll ignore the first half of this reply because we won’t agree. Not every choice is a conscious decision in my eyes, but the vast majority are.
As for the second half, believing that bad actors would be weeded out based on the principle of free will is naive. Consider game theory. Two people each have something to gain from cooperation, but either one has more to gain by defecting while the other cooperates – and the one who cooperates gains nothing or very little. That simple thought experiment incentivizes bad actions from time to time. You have more to gain by acting selfishly.
Now blow up the experiment: it’s you vs. the world, and reputation is introduced. Someone with a perfect cooperation rate is flawed: they offer nothing but blind trust and can be taken advantage of. The opposite is also true: someone who makes selfish decisions all the time offers nothing but blind distrust. You’re left to choose whom to interact with from somewhere along the middle of the reputation gradient. Those at 70% or lower seem unpredictable or untrustworthy, so many choose to interact with people on the higher end of the reputation spectrum when available, and reflect that in their own decision making. You can’t always choose who to interact with, though, so eventually you’ll have to interact with a bad actor. You’ll get burned by making a cooperative choice, they will benefit from it, and that in turn ensures they survive natural selection.
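For concreteness, a quick sketch of that single-interaction incentive, plugging in the payoff numbers that come up further down the thread (+1/+1 for mutual cooperation, 0/0 for mutual defection, +3/0 when one side defects on a cooperator – illustrative numbers, nothing more):

```python
# Illustrative one-shot payoffs (numbers taken from later in the thread).
# PAYOFF[(my_move, their_move)] = my points; 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 1, ('C', 'D'): 0,
          ('D', 'C'): 3, ('D', 'D'): 0}

for their_move in ('C', 'D'):
    coop = PAYOFF[('C', their_move)]
    defect = PAYOFF[('D', their_move)]
    print(f"they play {their_move}: I get {coop} cooperating, {defect} defecting")

# they play C: I get 1 cooperating, 3 defecting
# they play D: I get 0 cooperating, 0 defecting
# In a single interaction with no reputation attached, defecting never pays
# less and sometimes pays more - hence "you have more to gain by acting selfishly".
```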
That simple thought experiment incentivizes bad actions from time to time.
The optimal strategy, in theory and in practice, for the iterated prisoner’s dilemma (an unknown or infinite number of iterations) is some version of tit for tat, with details depending on the exact rules (for example, unreliable information calls for more forgiveness). The strategy involves punishing the other player for defecting, but it never defects first, so two tit-for-tat players will play 100% cooperatively and the knives stay where they belong: behind their backs. Holistically speaking, choosing to punish is not bad, because it incentivises the other player to play cooperatively, leading to greater overall results for both.
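A minimal sketch of that dynamic, under simple assumptions (the same illustrative payoffs as above, deterministic strategies, and a fixed 100 rounds standing in for “unknown length”):

```python
# Tit-for-tat never defects first, but punishes a defection on the next move.
# PAYOFF[(move_a, move_b)] = (points for a, points for b)
PAYOFF = {('C', 'C'): (1, 1), ('D', 'D'): (0, 0),
          ('D', 'C'): (3, 0), ('C', 'D'): (0, 3)}

def tit_for_tat(own_history, their_history):
    # cooperate on the first move, then copy the opponent's last move
    return 'C' if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        points_a, points_b = PAYOFF[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (100, 100): knives stay behind backs
print(play(tit_for_tat, always_defect))    # (0, 3): burned once, then stonewalled
print(play(always_defect, always_defect))  # (0, 0): mutual distrust earns nothing
```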
Evolutionarily speaking: If cooperation did not give advantages, why the fuck did we become a social species? Going for anti-cooperative strategies only ever makes sense in zero-sum games and practically nothing in life is.
You have more to gain by acting selfishly.
That’s capitalist propaganda with no basis in game theory.
Not every choice is a conscious decision in my eyes, but the vast majority are.
Evolutionarily speaking: If cooperation did not give advantages, why the fuck did we become a social species? Going for anti-cooperative strategies only ever makes sense in zero-sum games and practically nothing in life is.
Oh my sweet summer child.
In game theory, cooperation does give advantages:
Both co-op: +1/+1
Both defect: 0/0
Defect/co-op: +3/0
That’s just one interaction. When you expand the experiment, predictability becomes a positive trait and risk is avoided. So by more often choosing cooperation, you become more predictable, avoid the risk of gaining no points through mutual defection, and more people are likely to interact with you. More interactions = higher potential for points. When you adjust the rules of the game to not define a set number of interactions with each player and you can choose the frequency of interactions with bad-reputation players, cooperating is naturally selected for. Conversely, as the pool gets collectively nicer, defection will net more benefits and the pendulum will start to slowly swing the other way.
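A rough sketch of that reputation dynamic, under heavy simplifications (same illustrative payoffs, public histories, and the 70% trust threshold mentioned earlier; real partner choice is obviously messier):

```python
import random

# A game only happens if BOTH sides accept the pairing, and a player refuses
# anyone whose observed cooperation rate is 70% or lower. Strangers get the
# benefit of the doubt, so everyone can be burned at least once.
PAYOFF = {('C', 'C'): (1, 1), ('D', 'D'): (0, 0),
          ('D', 'C'): (3, 0), ('C', 'D'): (0, 3)}

class Player:
    def __init__(self, name, strategy):
        self.name, self.strategy = name, strategy
        self.moves = []   # public history, i.e. reputation
        self.score = 0

    def reputation(self):
        return 1.0 if not self.moves else self.moves.count('C') / len(self.moves)

    def accepts(self, other):
        return other.reputation() > 0.7

def nice(moves):        # always cooperates
    return 'C'

def nasty(moves):       # always defects
    return 'D'

def two_faced(moves):   # builds a good reputation, then cashes it in
    return 'C' if len(moves) < 3 else 'D'

players = [Player('nice_1', nice), Player('nice_2', nice),
           Player('nasty', nasty), Player('two_faced', two_faced)]

random.seed(1)
for _ in range(300):                          # 300 random pairing attempts
    a, b = random.sample(players, 2)
    if not (a.accepts(b) and b.accepts(a)):   # either side can walk away
        continue
    move_a, move_b = a.strategy(a.moves), b.strategy(b.moves)
    points_a, points_b = PAYOFF[(move_a, move_b)]
    a.score += points_a
    b.score += points_b
    a.moves.append(move_a)
    b.moves.append(move_b)

for p in players:
    print(p.name, p.score, round(p.reputation(), 2))
# The defectors pick up at most a handful of points before they stop being
# offered games; the cooperators keep playing each other and pull far ahead.
```

The acceptance threshold is doing the real work in this sketch: remove it and, with these payoff numbers, the defectors come out ahead instead – the single-interaction incentive again, just repeated.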
When you adjust the rules of the game to not define a set number of interactions with each player
Then being nasty wins out, no matter the length of the game, as long as that length is known (or at least an upper bound is known): with a known last round there is nothing left to punish, so defection is rational there, and that logic unravels all the way back to round one. But that’s not the case in practice, so it’s irrelevant – which is why I specified (yes, I mentioned it) an infinite or unknown number of iterations.
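To make the known-length point concrete, a small sketch of the backward-induction argument (again an illustration only, same payoff numbers):

```python
# PAYOFF[(my_move, their_move)] = my points, same numbers as above.
PAYOFF = {('C', 'C'): 1, ('C', 'D'): 0,
          ('D', 'C'): 3, ('D', 'D'): 0}

def gain_from_defecting(their_move):
    """Extra points I earn this round by defecting instead of cooperating."""
    return PAYOFF[('D', their_move)] - PAYOFF[('C', their_move)]

print(gain_from_defecting('C'))  # 2: against a cooperator, defecting pays strictly more
print(gain_from_defecting('D'))  # 0: against a defector, it costs nothing

# In the LAST round of a game of known length there is no future left in which
# tit-for-tat can punish me, so the numbers above are the whole story: defect.
# Knowing both sides defect in the last round, the second-to-last round cannot
# influence anything either, so it is effectively a last round too - and so on,
# unravelling cooperation all the way back to round one. With an unknown or
# infinite number of iterations there is no last round to start from, which is
# exactly why the tit-for-tat result above survives.
```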
That mark. That thing we consider good. The innate sense, the thing pretty much everyone agrees on. It is there because our ancestors were successful, and they were successful because all that game theory stuff happens to apply. If it didn’t, then we would consider defecting good, rather than, to sum it up neatly, “never start a fight but always end it”.