Everyone should peruse the comment thread on my last post, “Should we ride mediocre transit?” If the post and its thread help you clarify and explain your own view on the question, then this blog is doing its job. (Yes, there’s still no tip jar; I still have a salary as a transit planning consultant, but you’ll be the first to know if I don’t!)
… to “certify” transit systems on a Bronze-Silver-Gold scale according to criteria like frequency, operating hours, accessibility, travel time and so forth.” (Emphasis mine.)
- They identify a series of measurable variables that matter and show how each of those candidates scores on each of those variables. This can be a relatively objective process, especially where we’re dealing with quantifiable variables such as those Brian lists. It’s obviously trickier if we include subjective variables like comfort.
- They decide how to weigh these variables against each other to produce a composite score which can be expressed as a ranking, thus producing a soundbite small enough for a press release or a gold star that you can affix to a bus or train.
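The two steps above can be sketched in a few lines. This is a minimal illustration, not anyone’s official certification standard: the variables, weights, and medal cutoffs are invented placeholders, and the contested judgment call lives entirely in the `weights` dictionary.

```python
# Each candidate line is scored 0-100 on each measurable variable.
# All figures here are hypothetical.
scores = {
    "Line A": {"frequency": 90, "operating_hours": 80, "travel_time": 70},
    "Line B": {"frequency": 60, "operating_hours": 95, "travel_time": 85},
}

# The contested step: deciding how much each variable matters.
weights = {"frequency": 0.5, "operating_hours": 0.3, "travel_time": 0.2}

def composite(line_scores, weights):
    """Weighted average of one line's scores across all variables."""
    return sum(line_scores[v] * w for v, w in weights.items())

def medal(score):
    """Collapse a composite score into a press-release-sized label."""
    if score >= 85:
        return "Gold"
    if score >= 70:
        return "Silver"
    return "Bronze"

for line, s in sorted(scores.items(), key=lambda kv: -composite(kv[1], weights)):
    c = composite(s, weights)
    print(f"{line}: {c:.1f} -> {medal(c)}")
```

Note that shifting a little weight from frequency to travel time can reorder the ranking without any service changing at all, which is exactly why the weighting step deserves scrutiny.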
In North America, we have to depend on the self-reporting of transit agencies for many of the key variables that we all know are important, such as frequency, operating hours, and travel time, as well as outputs such as ridership. Most aggregate reporting is about the performance of agencies as a whole, or at most the performance of each of an agency’s technologies. Much of this is too coarse to tell us anything useful as consumers.
For example, the US National Transit Database, composed from mandatory reports to the Federal Transit Administration from every transit agency, will document the performance of a “bus” product, but that category conflates a frequent all-day inner-city line with a one-way express from a Park-and-Ride to a worksite that runs only a couple of trips at rush hour. To my mind, those two bus services are useful for completely different purposes in completely different development patterns. Therefore, they are vastly more different than, say, a bus and a streetcar running frequently at similar speeds on similarly dense corridors in the inner city. The categorisation of service by technology is easy for agencies to report, but it’s not always what matters, unless you’re a pure technology advocate.
In the early 1990s, I did a project for the Washington State Transit Association to develop a new reporting system for that state’s transit agencies. I wanted the agencies to report the performance of every line separately, so that the database could aggregate the results in different ways according to the needs of each query. For example, I wanted a quick report on the overall statewide performance of outer-suburban circulator routes, because I thought it would tell us (a) how much variation there really is in outcomes and (b) what services are getting the best outcomes, which would lead us to ask (c) what are those best-practice services and how can we learn from them?
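The core of that reporting idea is simple: store one record per line, tagged by service type, so any query can aggregate however it likes. Here is a sketch under invented data; the field names, categories, and figures are placeholders for illustration, not Washington State’s actual schema.

```python
from statistics import mean

# One record per route, reported separately rather than rolled up by agency.
routes = [
    {"agency": "A", "type": "outer-suburban circulator", "riders_per_hour": 12.0},
    {"agency": "B", "type": "outer-suburban circulator", "riders_per_hour": 31.5},
    {"agency": "A", "type": "frequent inner-city", "riders_per_hour": 48.0},
]

def report(routes, service_type):
    """Statewide view of one service category, answering (a) and (b) above."""
    subset = [r for r in routes if r["type"] == service_type]
    values = [r["riders_per_hour"] for r in subset]
    return {
        "mean": mean(values),
        "range": (min(values), max(values)),  # (a) how much variation there is
        "best": max(subset, key=lambda r: r["riders_per_hour"]),  # (b) who to learn from
    }

summary = report(routes, "outer-suburban circulator")
```

The same records answer a completely different question tomorrow with a different filter, which is the whole point of keeping the data at the route level instead of the agency level.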
Suffice it to say, the idea didn’t catch on, because it was too much work for the agencies given the quality of databases at that time. Now, however, it may be possible. We may not need to wait for governments or transit agencies to do it. If you care about ridership, you’d need the agencies to give you route-level ridership data, but you can do the rest of the analysis yourself. If you just want to grade by service quality, you may already have all the data you need. Do you want to do a large-scale study of the kinds of travel time achieved by different transit agencies in similar markets? Easy: Develop a sampling algorithm and then hire a bunch of kids to run queries using the agency’s trip planner or Google Transit. Do you want to study reliability, using your own definition of how long a gap between consecutive trips on a line is tolerable? Easy, again with some mass-labour or maybe clever programming: Use an online real-time information app, such as NextBus, to query actual vehicle spacing over and over, and build as large a database as you like.
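The reliability study reduces to one small calculation once you have the observations. A sketch, with the data-gathering step left abstract: in practice you would poll a real-time feed such as NextBus repeatedly and log when each vehicle passed a reference stop; the timestamps below are invented.

```python
def gap_reliability(arrival_times, max_gap_minutes):
    """Share of observed headways within your own tolerable-gap threshold."""
    times = sorted(arrival_times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    within = sum(1 for g in gaps if g <= max_gap_minutes)
    return within / len(gaps)

# Minutes past the hour when buses were observed at one stop
# (hypothetical log for a line scheduled every 10 minutes).
observed = [0, 11, 20, 34, 40, 52]

# With a 12-minute tolerance, one of the five gaps (20 -> 34, fourteen
# minutes) fails, so the line scores 4/5.
score = gap_reliability(observed, max_gap_minutes=12)
```

The definition of “tolerable” is yours to set, which is the point: different riders, and different kinds of service, justify different thresholds, and route-by-route logs let you test several.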
This kind of work needs to be done, aggregated, and reported in ways that can be queried from many different angles, not just the technology distinctions that FTA reports. I suspect that enough people would care about it that it could be a profitable business for someone, or at least the basis for a fundable research project.
Yes, to get sound bites for the press, you might have to aggregate all the measured performance indicators and produce a “best transit lines” ranking or a system of gold, silver, and bronze medals. To do that, you’ll have to decide whether reliability is more important than frequency. That’s like asking if the lungs are more important than the heart.