Look, I'm very sympathetic to what I call the "Team Blue" argument for Wikipedia. But it's still an argument that Wikipedia is "Team Blue", and that's the right team. Have you ever tried to change an article against the grain of ideologues? I'm guessing you haven't. It's not for the faint of heart.
I understand: if you're aligned with the editors' views, it's great. It seems wonderful.
But, for example, if one of them decides you're a bad person, you're going to have a very different experience.
I have actually tried to edit something against the grain of ideologues -- who, funny enough, were Team Red and on Wikipedia (they do also exist!) -- and it's been reverted 100% of the time. The system clearly isn't perfect, which I mention here. But I'm not sure the alternatives offered are better, and I think the system is largely operating in good faith overall. If we are going to layer in AI in some way, which is inevitable eventually, I think the Habermas Machine experiments around Community Notes and deliberative consensus have been interesting and could be useful as a way to look at NPOV.
It's a shame that Grok lifts its non-controversial content from Wikipedia, without payment I'm sure, under the Creative Commons license. Grok should stand on the merits of its own content instead of Wikipedia's reflected authority.
About two years ago, I watched in real time as Wiki changed its "mind" regarding Sam Altman's siblings. Initially, it stated that he had two brothers and no sisters. Then it stated that he did have a sister. Then he didn't. Then he did, and now he does. I wonder: what was the Wiki procedure for determining whether his sister existed or not? Any insight as to what could have been going on behind the scenes, and why this seemingly simple issue would have needed editorial debate? At the time I thought it was AI hallucination, but after reading your post, that doesn't seem likely. Thanks!
Yes. I link to Wikipedia articles in my substack posts because they are usually more accurate than anything else, besides gated scholarship.
Have you seen the following very detailed analysis of how the system can be manipulated by a powerful Wikipedia editor with a vendetta?
https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin
My initial instinct was to distrust this as a source. Really interesting that you can see that empirically, not to mention the hallucinations.
https://open.substack.com/pub/collegetowns/p/grokipedia-will-be-misinformation?r=7f4tk&utm_medium=ios
Spite - ever present!