Republicans are asking a liberal Wisconsin Supreme Court justice to recuse herself from a potential review of the Badger State’s congressional maps.
Earlier this month, Democrats asked the Wisconsin Supreme Court to reconsider the state’s congressional maps, citing the high court’s opinion in a separate election-maps lawsuit as grounds for a redo.
But five members of Wisconsin’s GOP congressional delegation filed a motion Tuesday asking Justice Janet Protasiewicz to recuse herself from hearing the case, pointing to comments she made last year as a candidate for the court as grounds for her not to weigh in.
The comments Republicans pointed to include her calling the state’s maps “rigged” and saying she “would certainly welcome the opportunity to have a fresh look at our maps.” However, neither she nor her Republican opponent detailed how they would vote on a potential case while on the campaign trail.
Please don’t project this onto me. The difference between these two things has been my core argument with you. The algorithm can be optimized to get a certain result. Your claim that the algorithm can ALSO define the desired result is preposterous circular reasoning. It’s clearly not even something you believe, yet you continue to argue with me about it.
Gibberish when you cannot objectively define what a good result is. Numerical analysis has many purposes. In this case, the purpose is to efficiently achieve a result based on input criteria. Whether or not those input criteria define “a fair election map” is a subjective question that reasonable people can and will disagree on. That disagreement is going to be political.
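To make the “input criteria” point concrete, here’s a minimal hypothetical sketch (Python, with made-up metrics and weights, not anything anyone in this thread has actually built): the optimizer will dutifully return the “best” map for whatever score function you hand it, and two equally defensible weightings pick two different winners. The definition of “fair” never comes out of the algorithm; it goes in.

    # Hypothetical illustration: the optimizer minimizes whatever score it is given.
    # The metrics, the weights, and the choice of what counts as "fairness" are all
    # human inputs -- the algorithm never supplies them.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Plan:
        """A candidate district map, reduced to a few summary metrics."""
        efficiency_gap: float    # partisan-asymmetry measure; lower = less skewed
        compactness: float       # 0..1; higher = more compact districts
        communities_split: int   # communities of interest divided across districts

    def make_objective(w_gap: float, w_compact: float, w_split: float) -> Callable[[Plan], float]:
        """The 'definition of a good map' lives entirely in these weights."""
        def score(p: Plan) -> float:
            return w_gap * abs(p.efficiency_gap) - w_compact * p.compactness + w_split * p.communities_split
        return score

    def best_plan(candidates: List[Plan], objective: Callable[[Plan], float]) -> Plan:
        """Pure optimization: return whichever plan minimizes the given objective."""
        return min(candidates, key=objective)

    candidates = [
        Plan(efficiency_gap=0.02, compactness=0.45, communities_split=9),
        Plan(efficiency_gap=0.11, compactness=0.80, communities_split=2),
    ]

    # Two reasonable-sounding weightings pick two different "best" maps.
    print(best_plan(candidates, make_objective(20.0, 1.0, 0.1)))  # prioritizes partisan balance
    print(best_plan(candidates, make_objective(1.0, 5.0, 0.5)))   # prioritizes compactness and whole communities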
You can quit the condescension. It doesn’t impress me. I’ve been involved in data science research on similar optimization problems, so I know full well how it works and what the shortcomings are. The big one being that you need to know what is being optimized for. And “the most fair election map” is not a quantifiable outcome. You remind me of the PI on my last research project, handing me a giant dataset and saying “What does it mean?!?!” He just couldn’t understand that I can’t give him an answer unless he has a meaningful question, and that p-hacking to come up with something publishable was pointless and dishonest.
You have your hammer and you’re SURE this problem is a nail. You just don’t even know where the jobsite is.
Yep, correct. So if you have a partisan process for deciding what is desired in the outputs, how are you going to get a good result? Which is why my entire point from the beginning was that chasing down magical algorithm solutions is pointless if you do not START with addressing the political problem.
In this case, Wisconsin’s process for redistricting is partisan. That’s why this thread exists. That’s the problem being discussed. That’s the original sin. You’re not going to datascience a solution to that problem.
And moreover, these huge redistricting controversies all have that same thing in common – a partisan redistricting process. Like magic, the issues seem to largely go away in places that don’t have a partisan redistricting process. GEE, WHO WOULDA THUNK IT.
It’s not an assumption. That is our starting point here. You need to listen to the people you are talking to and understand what is being discussed.
IDGAF if you want to use machine learning to design maps. That’s fine. Could be very helpful in making the maps, I’m sure.
But only AFTER you have limited the effects of politics in the decisions about the algorithm design. Otherwise, all that machine learning is still going to get you corrupt maps. They’ll just be really, really good and hard-to-understand corrupt maps. Corrupt maps that achieve the intended goals while obfuscating how they got that way behind layer after layer of mostly opaque, essentially impossible-to-comprehend models, graphs, and networks.
I’ll end this discussion by summarizing it for readers:
Me: if a partisan process is used to build maps, algorithms won’t get you fair maps. Start by addressing the partisan process.
You: algorithms can be optimized to get you perfect election maps and they can know what election map is perfect because they optimized to get a perfect map. This other guy is a dumb idiot who doesn’t know how algorithms work.
…and there’s the missing link that explains why you’re acting this way. I’ve worked with enough data scientists to see what’s going on. I don’t trust data scientists unless they have some other field of expertise in addition. People who’ve been around data scientists enough to see what they’re doing probably often end up with outlooks similar to yours.
I’m not trying to impress you. You may think you’re telling the truth, but you are in fact spreading misinformation. There is a 0% chance that you are correct. You don’t understand the subject and you’re just bloviating. It’s inherently offensive.
I wasn’t impressed by your summary, so I made my own:
Me: With a basic understanding of logic and algorithms, there will definitely be a hands-off computer approach to redistricting that will lead to better results than anything a human can make. This is guaranteed. There is no way to dispute this.
You: I will pretend like I never said anything like “Algorithmically-decided districts will also inherently ignore communities, both historic and demographic, again creating a high cracking likelihood and creating outsized representation to the dominant political groups.” or “These tech-bro-thinking solutions will never be the answer. The answer to redistricting* is to have a controlled political process with checks and balances.” I will now pretend like I had a different, less stupid opinion than the one I started with and that I have been arguing this entire time. I am doing this because either I was, up to this point, completely unable to articulate myself, or because I have secretly realized that I was mistaken and now I am too stubborn to admit it.