Democrats rely on a ground game more than Republicans do. Or maybe we do more on the ground because we're better at it. I suspect ground-game tactics are appealing because they cost less than advertising, and we usually don't have as much money to throw around as our opponents. Maybe we knock on doors more because our urban and suburban base lives in neighborhoods where houses are closer together and more walkable; my doorknocking where houses were spread out certainly made me think about that. Whatever the case, this much I'm sure about: our reliance on a ground game makes it important that we do it efficiently. Since we know that the most effective tactic for increasing turnout is the face-to-face conversation at the door, and that dropping campaign literature without talking to anyone gets us almost nothing, we should be valuing the proportion of doors where we get conversations, and we should be unimpressed by the raw number of contacts.
Yet that's not what we're doing.
This is a follow-up to Applying Moneyball to political campaigns, which I posted roughly a week and a half ago. I explained the concept of moneyball in politics at length there, so if you happened to read that, feel free to skip the next couple of paragraphs. For everyone else, here's the concept.
Moneyball is a book by Michael Lewis that could be about politics, though it's actually about baseball. Broadly, it's about a contest where money is important and the contestants have greatly varying amounts of it. That means the party with less money either loses, or finds the inefficiencies everyone else is missing. In baseball, that's what the Oakland Athletics did while Lewis followed them during the 2002 season. They were willing to ask whether they were measuring and valuing the right things. They challenged their experience and conventional wisdom with data. They used what the statistics said were the best strategies. In the running argument between baseball insiders on one side, and outsiders who happened to be huge fans of both baseball and its abundant statistics on the other, Oakland was the first team to let the statisticians win the debates, and they found good players who were undervalued enough to be affordable. To see Democrats' problem, replace "baseball" with "politics", "Oakland A's" with "Democrats", and "New York Yankees" with "Republicans". Basically, Republicans have a collection of crank billionaires who can engage in unlimited spending, and we don't. They can throw money at problems and we can't. So we need to find the inefficiencies.
So Democrats need to ask the same questions. Are we measuring and valuing the right things? Are we putting data ahead of experience and conventional wisdom? Are we acting on assumptions rather than knowledge, and thereby pursuing suboptimal strategies? To answer those questions, I asked what we value, and what we could value instead. The answers came on two levels: a macro level, like taking back Congress, and a micro level, meaning the ground game where I spend much of my volunteer time. The first post was plenty long explaining the concepts without diving into the weeds of details, so I'm making separate macro and micro posts for detailed weediness. This is the micro post.
Though I said in the first paragraph that our campaigns are valuing the wrong thing by touting the total number of contacts, it's not a useless number. It's just that it's useful only for working out the proportion of knocked doors that turned into conversations. Since the research tells us that conversations at the door are easily the most effective tactic for increasing turnout, we should be figuring out how to doorknock in a way that maximizes conversations and minimizes unanswered doors. Instead, by valuing the sheer number of doors, we're actually pushing canvassers to do a poor job. That especially matters if we're paying canvassers. If their performance is evaluated by the number of doors they knocked on or the blocks they covered, then we're actually providing an incentive to avoid conversations. Even volunteers will pick up on the notion that they're doing a good job by walking more sheer blocks. Really, though, if we don't talk to anyone at the door, but leave some campaign literature and move on, then we're getting no more impact than a lit drop (campaign jargon for leaving literature at a door and moving on without trying to contact anyone inside), a tactic with a negligible effect on turnout.
If we properly value the sorts of contacts we have, then we're going to try to figure out the circumstances that get us the most conversations and the fewest unanswered doors. As I thought it through, I realized I was asking campaigns to experiment, which means taking a risk that by ignoring what we think we know, we may do things in the less effective way, just to get a more valid experiment. Long term, that's what we have to do. Short term, well, do you want to be the candidate who loses by five votes because you chose to work the less promising precinct just to get a more useful experiment? So I realize what I'm asking, yet I must point out that much of what we know, not "think", but know, is because some people have run campaigns in a way that makes for a better experiment. I'm as shocked as anyone that a candidate who gave great latitude to political scientists to experiment with his campaign was Rick Perry. Yes, former Texas Governor Rick Perry; and he did win, partly by having information on effective campaigning that no one else had at the time.
A supposed contrast between baseball and politics is that baseball has this mass of data and politics doesn't, but that's wrong. I think I read that in Nate Silver's book The Signal and the Noise, but I'm not sure so don't quote me. It appears to make sense that baseball is more approachable by statistical analysis because every pitch is a whole set of data. For each pitch there's the pitch velocity, type of pitch, location, the count, swing or take, fair or foul ball, hit or out, and a bunch more. For politics there's nothing except polling, if you have enough polls ... except that's wrong.
Politics does have something similar, and this is what I really want us to think about: every door we knock on is a set of data. Knock on lots of doors, and we have loads of data, if we'll just collect it and use it. For each knock on a door, there is:
-the time of day
-the day of the week
-days until the election
-time the canvasser waits before leaving
-weather conditions
-partisan lean of the precinct
-number of prior contacts and unanswered attempts
-type of housing (single family, condo, apartment)
-and of course the main piece of data, whether the result was a conversation, refusal to speak, or no answer.
Any of us could no doubt think of more. I don't know which data, if analyzed, would tell us how to maximize the proportion of doors that turn into conversations. I have a guess, but the point of experimenting is to not need to guess.* And that's just it: we can and should test all of this. Suppose Mondays have a 20% response rate but Tuesdays have a 25% rate. One door in five versus one door in four, so what? The "so what" is that this would mean 25% more conversations for the same number of doors on Tuesday than on Monday. That adds up.
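To make that arithmetic concrete, here's a minimal sketch in Python. The tallies and field names are invented for illustration; real numbers would come from whatever your campaign uses to log canvass results.

```python
# Made-up tallies: doors knocked and conversations, by day of week.
tallies = {
    "Monday":  {"doors": 500, "conversations": 100},   # 20% answer rate
    "Tuesday": {"doors": 400, "conversations": 100},   # 25% answer rate
}

def answer_rate(day):
    """Fraction of knocked doors that turned into conversations."""
    t = tallies[day]
    return t["conversations"] / t["doors"]

mon = answer_rate("Monday")    # 0.20
tue = answer_rate("Tuesday")   # 0.25

# Relative lift: for the same number of doors, how many more
# conversations does the better day produce?
lift = (tue - mon) / mon       # 0.25, i.e. 25% more conversations per door
```

The same few lines work for any of the variables in the list above: swap the day of week for weather, housing type, or time of day, and the comparison is identical.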
Here's a question we could answer with a pretty straightforward test: how long should we wait at the door after knocking? How long do we wait now? I know that I just have my internal clock telling me when to give up and leave, but I don't actually know how long that is, or whether it's the optimum time. I sometimes notice that when I'm working in a pair, my partner goes faster than I do. I also rarely have someone come to the door as I'm walking away. So am I just waiting too long, or is my faster partner missing a bunch of people who open the door after they've left? We could test this by having canvassers go out in pairs, with one person canvassing as normal, and the other just recording the time between the knock and when either the door is answered or the canvasser leaves, and recording how often the door was answered after the canvasser left. If we're getting an answer rate of only 20%, which I think is pretty common, then missing a conversation by not waiting long enough is a significant loss. Waiting too long is a loss too, since time is usually limited. So why are we out there just guessing?
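The analysis of that paired-canvasser timing experiment could be as simple as this sketch. The timing data here are invented numbers, just to show the shape of the calculation:

```python
# Hypothetical data from the paired-canvasser experiment: for each door
# that was eventually answered, seconds between the knock and the door opening.
answer_times = [5, 8, 12, 15, 18, 22, 25, 30, 40, 55]  # invented values

def answers_captured(wait_seconds):
    """Fraction of eventual answers a canvasser catches by waiting this long."""
    caught = sum(1 for t in answer_times if t <= wait_seconds)
    return caught / len(answer_times)

for wait in (10, 20, 30):
    print(f"wait {wait}s -> capture {answers_captured(wait):.0%} of eventual answers")
```

With these invented times, waiting 10 seconds catches only a fifth of the people who would eventually answer, while 30 seconds catches four-fifths. Real data would tell you where the curve flattens out, which is the point past which waiting longer costs more time than it gains conversations.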
I recognize there are factors that might limit our ability to use what we learned. If your campaign finds that you get a clearly better answer rate at 2PM Tuesday than other times, but your volunteers are available 11AM Saturday, I have a guess at when you're scheduling your doorknock. However, consider that volunteer time isn't the only time you need to be efficient with. If a campaign has paid staff, then presumably they can be sent out at 2PM Tuesday. Your candidate's time is also a finite resource, and candidates normally arrange their schedules around the needs of the campaign. Wouldn't it be really useful to know the optimum time to have the candidate on the doors?
I mentioned the risk that running an experiment might mean the canvass is done in a less-than-optimal manner for the sake of future campaigns, which might seem like a poor trade-off if this is YOUR election (though I reiterate that Rick Perry allowed experiments when his election, at least the primary, was in doubt). Another risk is that you'll experiment and find that nothing affects the answer rate, and it's all just noise. At least then, however, you know you can schedule purely around when volunteers are available, or when the candidate prefers to go out, without worrying that you're being inefficient.
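Telling a real difference from noise is itself a standard statistical exercise. Here's a generic two-proportion z-test sketch, with made-up counts rather than anything from an actual canvass; notably, even my earlier hypothetical 20%-vs-25% gap can fail to reach significance at these sample sizes:

```python
import math

def two_proportion_z(conv_a, doors_a, conv_b, doors_b):
    """Two-proportion z-test: is the gap in answer rates beyond noise?"""
    p_a, p_b = conv_a / doors_a, conv_b / doors_b
    p_pool = (conv_a + conv_b) / (doors_a + doors_b)   # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / doors_a + 1 / doors_b))
    return (p_a - p_b) / se

# Made-up counts: 100 conversations from 500 Monday doors (20%)
# versus 100 from 400 Tuesday doors (25%).
z = two_proportion_z(100, 500, 100, 400)
# Here |z| is about 1.79, under the usual 1.96 cutoff for 95% confidence,
# so with these sample sizes the gap could plausibly be noise.
```

That's the practical upshot: a difference big enough to matter can still be too small to trust until you've knocked enough doors, which is another argument for logging every knock.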
Just to acknowledge the nuances, notice I keep referring to increasing turnout. That was the subject of the research I'm aware of. It's for turnout that we want to maximize the conversations and minimize the non-answers. That might not apply to persuasion. My guess is it does, but I'm unaware of anyone having done such research, and persuasion strikes me as a lot harder to measure than turnout. Maybe face-to-face conversations at the door aren't more effective at persuasion than phone calls or lit drops. I do find it plausible (my guess is it's wrong, but it's plausible) that literature dropped at the door can be persuasive in certain circumstances. Specifically, lit drops might work when the residents are already certain to vote, and the candidates are in downballot races where the voter is unlikely to have formed an opinion. I'm skeptical about anyone changing a presidential vote that way, but maybe a well-targeted lit drop could work for something like county board or city council, maybe even an obscure partisan race like state legislature. But as I said in the first post, from what I could see, campaigns that relied on lit drops got nothing for it. Maybe they lit dropped too many non-voters.
Maybe. But the whole point of this exercise is that we have to find the inefficiencies to beat our better funded opponents. So don't we want to figure out what works, and get our campaigns to stop doing anything else?
This is the Macro Moneyball post, on countering gerrymandering.
*To hypothesize (which, let's just admit it, at least in this instance is a fancy word for "guess", though it gives a starting point): I would expect to find significant differences in answer rates between different days and times, and those are what I would test if I were limited to testing just a couple of things. I would also like to test whether the answer rate is higher for apartments than for single-family houses, because my impression from my own doorknocking is that it is, and apartments tend to be neglected. Or maybe it's the same, and the short distances between doors just make for more doors and less walking time. My impression (and it's only an impression) is also that people in areas that get canvassed more are less likely to open their doors and more likely to refuse to talk. I would further guess that the number of days before the election doesn't affect whether residents open their doors. Nice weather probably increases the odds that residents aren't home or are in back yards where they can't hear the door. Partisan lean of the precinct probably doesn't matter.
cross-posted at MN Progressive Project