Sunday, October 24, 2021

In defense of those who claim to have "done their own research"

For a brief time in the late nineties, I wanted to be a scientist. I'd never liked science in high school, and had been ignorant of many basic science facts into my mid-twenties, enough that I was probably borderline science illiterate. In the last of my six years in the Marine Corps, though, someone left a Carl Sagan book lying around somewhere, and for some reason, I picked it up and read it. That led to me reading another Sagan book, and then another, and then I started picking up science textbooks from used bookstores and reading those. I also started teaching myself the algebra I hadn't done well in during high school. After about a year of this, I got discharged and was ready to start college. For my first semester, I was a chemistry major.

I did well in my biology and chemistry 101 courses, but I realized that I was behind in math. I'd caught up in algebra, but there was still a chasm of geometry and calculus I needed to leap over before I'd be really ready to study science seriously. I was already in my mid-twenties, and I felt some pressure to get a degree in a hurry and get on with life. Most of the people I'd gone to high school with were starting real jobs, and I felt a little ashamed that I was only just starting college. Rather than spend the time to catch up in math, I switched to liberal arts, eventually settling on English.

For a long time after I gave up the dream of studying in the natural sciences, though, I still wanted to feel like I belonged among scientists. Since I didn't have the chops to actually be one, I did what most wannabes do: I aped the language. I tried to talk like I thought a scientist would. 

One instance of this sticks out in memory. Many people from anti-GMO, pro-holistic/herbal/non-Western medicine, or anti-vax camps would rail against putting "chemicals" in their bodies. Scientists would often snigger at this (and I would snigger along with them), pointing out that all matter is made up of chemicals, so it's impossible to avoid consuming chemicals. They'd say water is a chemical compound, so to be anti-consumption of chemicals meant to be anti-consumption of water. 

Over time, though, I came to feel like this was an overly fastidious definition, one held to only for the sake of belittling and discrediting those whom the majority of scientists disagreed with by making them out to be so little versed in science that they didn't even know how to use basic terms correctly. While I agree that most of the people who rail against consumption of chemicals--say, Gwyneth Paltrow--are likely to be completely wrong about nearly everything, I think this game of definitional gerrymandering is dishonest.

Many terms have more than one meaning, depending on the social context, and the fitness of one of the meanings of a term shouldn't be determined by whether it is the preferred meaning of a certain group, but by whether those who share the context can reliably interpret what the term means. So when my Net+ course tells me that "ethernet cable" isn't really a correct term, I'm fine with that statement insofar as it's a statement within the context of a class on Ethernet. However, when normal people use the term, we know exactly what they mean by it. If I went into a Best Buy and asked where the ethernet cables were, the clerk would know where to point me. There is a specialist meaning of the term "ethernet," and there is a layperson's meaning of the term, and both can be correct, as long as they are used in the right context.

The term "chemical" can have one meaning within the context of a science class (anything that has a chemical structure, which is to say, all matter) and a more general meaning in everyday use (a thing made by a chemist, i.e., something artificial). To attack a position for using the lay meaning of the term in a context where it's understood the lay meaning is intended is a disingenuous method of argument.

"I did my own research"

I've been witnessing what seems like another instance of this form of a discredit-through-definition attack lately against those who are resistant to getting COVID-19 vaccinations. The argument goes something like this: "Research" means original research, performed in a laboratory, by specialists using expensive equipment who produce technical conclusions that will then be peer-reviewed. Therefore, when people say they "did their own research," but mean that they read articles or watched videos about COVID-19 and vaccinations, they do not even understand what "research" means, and everything they say should automatically be rejected. 

Much like when I object to an overly narrow definition of "chemical," it feels a little bit strange to me to be defending the side I mostly disagree with. I got vaccinated as soon as it was available to me. To some extent, I also "did my own research," but for the most part, I got vaccinated because that seemed to be the prevailing recommendation. I did enough "research" to determine what experts were recommending, and I did that. None of us can research every subject under the sun, and I didn't feel compelled to put in a lot of time on this one.

Prior to two years ago, would anyone have objected to my use of "research" in that last paragraph to describe my attempt as a layperson to determine what the prevailing wisdom on a subject was? Don't we use the word in this sense all the time, in a way that more or less means "to read up on something"? When a high schooler goes to the library to gather materials for a paper on gene therapy, we aren't saying that high schooler is an expert on the subject, but we still call this phase in the process of writing a paper "research." When my mother uses online resources to trace her ancestry, nobody pretends that means she is an expert in genealogy or a historian. Prior to 2020, though, if she'd said, "I did a bunch of research and found a few more headstones to go visit," nobody would have objected to that use of the term. To a scientist who does research for a living, the word means one thing, and to a person who does something else for a living, it generally means another. People aren't wrong for using the term in the more general sense.

Why would I worry about playing fair with people like Chipper Jones, whose claim to have "done his own research" far exceeded his antagonists' in arrogance and dismissiveness toward those who disagreed with him? Maybe because there's a precedent here that I don't want to see set. All of us have to make hundreds of decisions on matters in which we are not experts. For some of these decisions, the stakes are low enough that we can go along with the crowd, but for some, we are going to have to decide for ourselves, which means wading in and making the best decision we can on a topic where we lack expertise in key areas relevant to the question at hand.

For example, how should you pick a political candidate to vote for? Nobody is an expert in everything relevant to picking a candidate: public policy, geopolitical affairs, economics, etc. Should we simply rely on experts in these fields to tell us what to do? Which fields are the most important? And what if the experts don't all agree? Should we try to determine what the majority opinion of experts is and go with that? None of this sounds like a good way to pick a candidate. In a democracy, we all have a moral responsibility to develop our own beliefs (through what you might call research) and to pick candidates we think (based on more research) best align with those beliefs. We can't pass the buck on something like our votes by relying on expertise.

What about religious beliefs? Should we poll all the people with Ph.D.s in the world, see what the majority religious belief is, and go with that? Of course not. For one thing, not all doctoral degrees would be equally relevant to questions about religion. The sciences or philosophy or history might be more relevant than public policy or educational theory. We will of course want to read the works of great minds to determine our own beliefs, but it's up to each of us to determine what weight to give those we read. If one scientist says cosmology supports belief in the existence of God and another scientist says the opposite, I need, to some extent, to judge between them myself, even though I have only the most basic grasp of concepts in cosmology. We all have to make important decisions all the time through research on topics we are not experts in.

Fine, you may say. Those are broad abstract topics, but COVID-19 vaccinations are a subject of limited breadth and a very practical application. Surely on this, we should trust the experts? But there are practical issues all the time when people need to question experts. Do you trust the guy who might be giving you the runaround at the mechanic's, even though he knows more about cars than you, or do you trust your gut telling you he's trying to bilk you?  

At the moment, I'm dealing with an issue in my foot known as Morton's neuroma. I've been seeing a podiatrist for months, and nothing he's tried so far has worked. I might have to get surgery. But my podiatrist gave me a time period for recovery that is at odds with everything I've read on reputable websites. I'm talking vastly different, like four times as long. I don't claim to know as much about feet and nerves as my doctor, but I do think I've done enough "research" to question why he's telling me something like this. I think I should go ask another doctor before I get surgery, rather than trust this doctor's expertise. Would anyone argue I should ignore my instincts because my podiatrist is an expert and I'm not?

The fact is that while we all have to rely on experts to tell us things they've spent their whole lives learning about, we also realize that those experts aren't infallible. Sometimes, we have reason to think we're seeing mistakes, but thinking this does not mean we think we could do the jobs of those experts ourselves. Personal flaws like inattentiveness, lack of effort, lack of empathy, and greed affect experts as much as the rest of us, and non-experts are capable, sometimes, of picking up on these things. Sometimes, experts make mistakes because they are tired, overworked, or forced to work in bad conditions. If there's anything I've learned about being an adult, it's that you don't become magic when you grow up, the way I thought you would when I was a kid. If I still make mistakes all the time in the profession where I am an expert, it's reasonable to think others will, too.

In the case of COVID-19 vaccinations, recommendations to get vaccines, and the vaccine mandates that are based on those recommendations, come from a body that is part of the federal government. While the goal is for bodies like the CDC to be independent of political considerations, I can tell you from personal experience that true political neutrality in government work is easier said than done. Beyond the political considerations, though, a citizen needs to bear in mind the inefficiencies of a bureaucracy. The CDC sometimes makes bad calls for no other reason than that it's a government agency, and government tends to move slowly. That's why the CDC tends to lag behind what research is telling it. Remember how long it took for the CDC to recommend wearing masks at the beginning of the pandemic? It wasn't because wearing masks didn't make sense prior to that; it's because the CDC doesn't work perfectly. A citizen wearing a mask prior to the CDC's recommendation, as several of my Korean friends did, would have been opposing CDC guidance based on their "own research," but that citizen would have been right. While most people aren't better scientists than those employed by the CDC, that doesn't mean we are wrong to be skeptical of their proclamations, for reasons that sometimes have nothing to do with science.

All of us have to make decisions every day on complicated questions outside our expertise. We do research--and it deserves to be called research--as best we can, and we make the best decisions we are capable of. As a democracy, we ought to honor this effort, not belittle those who've come to different conclusions by mocking the entire notion of a citizen attempting to independently make sense of what she's being told.

I realize that a lot of the research done by people opposing vaccinations isn't good research. Even by my expanded definition of research to mean any kind of "learning more about" something, it still should mean reading well-reasoned and reliable sources, not any crazy thing on the internet. If someone claims to have done their own research and believes the government is putting trackers in vaccines, the problem isn't that they didn't do original research in a lab, but that they didn't consult good secondary sources. (For what it's worth, I have one friend who is a bona fide MENSA-level genius with a Ph.D. in math and an excellent command of many topics in science who isn't a fan of COVID vaccinations. It's not all crazy people.)

The fact that some people don't do good non-expert research in order to sharpen their views isn't a knock against citizens verifying information as best they can. Nor should it be used to undermine the idea that a minority may come to valid but differing conclusions. We all have a duty to perform the best research we can so we can have the best opinions we can. We all share in what Lionel Trilling called "the moral duty to be intelligent." Mocking those trying to do their own research by claiming that nobody who isn't an expert can do meaningful research is an argument against the critical work all of us need to be doing in a democracy.


Sunday, October 3, 2021

Submittable settings might make objectivity harder for editors

 I have a friend who is kind of obsessive about not wanting to talk about movies he hasn't seen yet. It's not just that he wants to avoid hearing spoilers; he wants to not even hear a very broad opinion, like, "I liked it." He feels that even hearing this generic endorsement will color how he watches the movie, and he'd prefer, when he watches, to be doing so completely free of outside influence. 

That's hard to do for movies or shows on streaming services where buzz makes opinions ubiquitous in everyone's social media timelines. To be sure you don't get contaminated, you'd have to avoid social media and continually remind your co-workers within earshot that you want to avoid hearing any mention of the show. That's a lot of work, and likely to make you an unpopular co-worker with some people who really like to talk about what they've seen. 

It shouldn't be that hard for editors of literary journals to do the same with stories submitted for consideration, though. After all, these are unpublished works they're dealing with. The only people who've seen them, maybe, are small workshop groups. There's no danger of having been contaminated by a public discourse on a story that's still seeking to enter public discourse.

Except there is. The danger arises from the way Submittable presents work in progress to editors. In a typical set-up, stories sit in a queue, usually according to the date submitted (you can arrange them by other criteria, but this seems to be the fairest way to go through a queue, starting with what came in first and working to what came in last). There are a number of ways journals handle the first stages, depending on the preferences of the head editor and the staff on hand. 

One common method is for first-line editors to pick entries and vote on them. Once the first vote is made, subsequent readers can see that a vote has been made and, more importantly, which kind of vote. 

[Screenshot: a snip of my Submittable queue showing three submissions, each with a single "no" vote already recorded.]

The snip above is from my own Submittable work queue. I was the one who voted no on the three entries you see there. This means that everyone except the first reader (me) is going to already know what another reader (me) thought before starting in on reading. If that second reader has any particular feelings about me, those could end up influencing the next vote. It could be, "Jake's usually a good reader, so I'll probably agree with him," or it could be, "I hate Jake, so I'm going to vote the opposite of whatever he said," or it could be anything in between. The point is that my vote is likely to have at least some influence on the next votes, even if it's an unconscious influence. And that means objectivity, always difficult to achieve for judges, is going to be a little bit more tainted.

For many journals, the majority of readers doing the lion's share of the work are new. The work is unpaid and grueling, so it's understandable why journals would cycle through readers. When someone new comes on board, it's natural for them to feel things out before they get comfortable. When I read for the Baltimore Review, I had two conflicting impulses: to vote with the majority so people didn't think I was a pain in the ass, and to vote against the majority so it appeared I had a unique take that made me valuable to have around. Both of these impulses were a distraction from what should have been my only desire, which was to vote the way I really thought.

No matter which impulse I followed, the presence of other votes represented an influence on me. This was especially true because the Baltimore Review used a two-strikes-and-you're-out approach: the editor figured if two readers both didn't like something, it had too long of an uphill climb to make it, and she'd send a rejection notice. That meant that once I saw something had a down vote, there was a motivation for me to go in and vote no, too, because then the story would be out of the queue, which felt like progress.

A lot of journals use blind reading to keep readers from knowing who the writer is and being influenced by that. They do this in the interests of fairness. Journals should probably also consider protecting themselves from their own influence internally. It's possible there may be some way to configure Submittable settings so you can see that a vote has been made, but not know what the vote was or who made it. But if so, it's not the default setting, and I sure can't figure out how to apply it. A journal could instead have everyone send a private note to a central editor with votes and thoughts, so that only the central editor could see them. But that's a big burden on that one editor, and a system like this would mean Submittable wasn't much more efficient than a journal working entirely off of email.

If a technical solution became available to make anonymous, masked voting possible, votes wouldn't have to stay anonymous and hidden forever. Once enough are in, the blinders could come off, and if necessary, editors could have discussions among themselves and argue through points of disagreement. The idea isn't to avoid disagreement. Quite the opposite. It's to avoid agreement that comes too easily. Journals struggle to achieve diversity in their editorial staff in order to be fair in judging work. That diversity can be undone, though, by subtly encouraging groupthink through the voting process. A simple tweak to Submittable could probably do a surprising amount of good for encouraging diversity. It would certainly be an interesting experiment for journals to try, to see if they get more disagreement than they've had before.
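To make the idea concrete, here's a minimal sketch of the kind of masking policy I have in mind. It's purely hypothetical--as far as I can tell, Submittable doesn't offer anything like it--and every name, threshold, and function in it is invented for illustration: votes are recorded as they come in, but a reader opening a submission can't see anyone else's vote until a set number have been cast.

```python
# Hypothetical sketch of threshold-based vote masking (not a real Submittable
# feature): individual votes stay hidden from other readers until a minimum
# number of votes is in, at which point the blinders come off.

from dataclasses import dataclass, field


@dataclass
class Submission:
    title: str
    votes: dict = field(default_factory=dict)  # reader name -> "yes" / "no"


class MaskedQueue:
    def __init__(self, reveal_after=3):
        # Number of votes required before anyone can see how others voted.
        self.reveal_after = reveal_after
        self.submissions = []

    def add(self, submission):
        self.submissions.append(submission)

    def vote(self, submission, reader, decision):
        submission.votes[reader] = decision

    def view(self, submission, reader):
        """What a given reader sees when opening a submission."""
        if len(submission.votes) < self.reveal_after:
            # Before the threshold, a reader knows only whether they themselves voted.
            return {
                "title": submission.title,
                "your_vote": submission.votes.get(reader),
                "other_votes": f"hidden until {self.reveal_after} votes are in",
            }
        # After the threshold, the full tally is visible and discussion can start.
        return {"title": submission.title, "all_votes": dict(submission.votes)}


# Example: my "no" vote wouldn't color the second reader's view.
queue = MaskedQueue(reveal_after=2)
story = Submission("Story A")
queue.add(story)
queue.vote(story, "Jake", "no")
print(queue.view(story, "Second Reader"))   # other votes still hidden
queue.vote(story, "Second Reader", "yes")
print(queue.view(story, "Second Reader"))   # now both votes are visible
```

The point of the sketch is only the reveal threshold. Whether the real fix came as a Submittable setting or as a workaround through a central editor, the behavior a journal would want is the same: record every vote, but show nothing until enough votes exist that groupthink has less room to take hold.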