A great piece by Michael Perkins in Greater Greater Washington highlights a perennial problem with on-time performance measures for urban buses. He cites the policy of the Washington area transit agency, WMATA, which says that a bus is considered on-time if it’s no more than two minutes early and no more than seven minutes late. Perkins explains, with diagrams, that under this policy you could wait 19 minutes for a bus that supposedly ran every ten minutes, and yet the bus (and the one 19 minutes in front of it) would both be considered on-time.
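Perkins's scenario can be worked through in a few lines. This is a minimal sketch of the on-time rule as described (no more than two minutes early, no more than seven minutes late); the specific bus times are illustrative, not from his article.

```python
EARLY_LIMIT = 2  # minutes early allowed under the policy
LATE_LIMIT = 7   # minutes late allowed under the policy

def is_on_time(scheduled, actual):
    """Deviation > 0 means late, < 0 means early (in minutes)."""
    deviation = actual - scheduled
    return -EARLY_LIMIT <= deviation <= LATE_LIMIT

# Two buses scheduled 10 minutes apart: the first runs 2 minutes
# early, the second runs 7 minutes late.
first_bus = -2    # arrives at minute -2 (scheduled minute 0)
second_bus = 17   # arrives at minute 17 (scheduled minute 10)

print(is_on_time(0, first_bus))    # True: 2 minutes early
print(is_on_time(10, second_bus))  # True: 7 minutes late
print(second_bus - first_bus)      # 19-minute gap between "on-time" buses
```

Both buses pass the test, yet the rider who just misses the first one waits 19 minutes for a route published as running every ten.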
(By the way, the WMATA standard sounds lax to me, though I haven’t done a survey. Few agencies I’ve worked with accept anything more than five minutes late, or one minute early.)
But Perkins argues, as I often have, that when we’re dealing with high frequency services, say every 10 minutes or better, earliness and lateness are the wrong measures. Earliness and lateness matter if somebody is really going to expect the bus at 7:43. But on high-frequency services nobody does that. They just go out to wait for the next bus and trust (or hope) that it will be along soon. Many transit agencies don’t even publish exact times for very frequent services. So the thing the customer experiences is wait time, not earliness or lateness. If you care about the customer, it follows that wait time is the thing you should measure.
Suppose you went out to catch a bus that’s supposed to come every 10 minutes, but every bus on the line was exactly 10 minutes late. By any lateness standard, that would count as total failure. But by any appropriate standard, it would be perfection. You wouldn’t know anything was wrong, and in a well-managed system, nothing would be wrong.
So at high frequencies, “on time” shouldn’t be about the time the bus arrives, but the actual frequency, i.e. the elapsed time between consecutive buses. A standard might say that 90% of the time, the next bus will come in no more than 150% of the published headway — i.e. 90% of the time you won’t wait more than 15 minutes if the published frequency is 10 minutes.
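A standard like that is easy to compute from observed arrivals. Here is a sketch, with made-up arrival times, of the fraction of gaps between consecutive buses that fall within 150% of the published headway:

```python
def headway_adherence(arrivals, published_headway, factor=1.5):
    """Fraction of gaps between consecutive arrivals that are no more
    than factor * published_headway. `arrivals` are sorted times in
    minutes; a 90% standard would require a result of at least 0.9."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    within = sum(1 for g in gaps if g <= factor * published_headway)
    return within / len(gaps)

# Published 10-minute headway; gaps are 10, 9, 16, 9, 10 minutes,
# so one gap out of five exceeds the 15-minute threshold.
arrivals = [0, 10, 19, 35, 44, 54]
print(headway_adherence(arrivals, published_headway=10))  # 0.8
```

Note that this line would fail a 90% standard (only 4 of 5 gaps qualify) even though every individual bus might pass a lateness test.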
Now the interesting thing about this policy is that the individual driver can’t be held responsible for it. She’s responsible for when her bus gets to the stop, but she obviously doesn’t control the buses in front of and behind her. But an aggressive operations management team, armed with current tools such as GPS that show where every bus is, can monitor headways in real time and direct drivers on how to adjust their operations to keep buses evenly spaced.
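As a toy illustration (not any agency's actual dispatching algorithm), such a tool might look at each bus's current headway to the bus ahead and flag the ones running too close, which a controller could hold briefly to restore even spacing:

```python
def holding_advice(headways, target, tolerance=0.3):
    """headways[i] = minutes each bus is running behind the bus ahead.
    Returns 'hold' for buses closer than (1 - tolerance) * target,
    'ok' otherwise. The 30% tolerance is an arbitrary assumption."""
    threshold = target * (1 - tolerance)
    return ['hold' if h < threshold else 'ok' for h in headways]

# Target 10-minute spacing; the third bus is only 4 minutes behind
# the one ahead of it, i.e. a bunch is forming.
print(holding_advice([10, 11, 4, 15], target=10))
# ['ok', 'ok', 'hold', 'ok']
```

Holding the too-close bus also shrinks the 15-minute gap behind it, which is the point: the metric and the intervention both operate on spacing, not on the timetable.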
So I suspect that the reason this change hasn’t caught on more widely is that it shifts the onus from drivers to management. I suspect some operations managers like to have their service judged on earliness and lateness because they use the same measures to assess their drivers. This means the operations management job is easy: just monitor the drivers’ performance and deal with disruptions. But as soon as you measure actual frequency, you’re measuring how well management and labor are doing the job together. Bus operations managers don’t all have the training, or the tools, to do it that way.
Of course, most urban rail transit operations have long been managed for frequency rather than specific arrival times. A train driver in a subway system is part of a large interconnected system that is actively managed by dispatchers all the time. I think we should have the same demand for operations management on frequent, high-volume bus routes. But it’s a big cultural transition, and it won’t come overnight.