Everyone should peruse the comment thread on my last post, “Should we ride mediocre transit?” If the post and its thread help you clarify and explain your own view on the question, then this blog is doing its job. (Yes, there’s still no tip jar; I still have a salary as a transit planning consultant, but you’ll be the first to know if I don’t!)
… to “certify” transit systems on a Bronze-Silver-Gold scale according to criteria like frequency, operating hours, accessibility, travel time and so forth.” (Emphasis mine.)
- They identify a series of measurable variables that matter and show how each of the candidates scores on each of those variables. This can be a relatively objective process, especially where we’re dealing with quantifiable variables such as those Brian lists. It’s obviously trickier if we include subjective variables like comfort.
- They decide how to weigh these variables against each other to produce a composite score which can be expressed as a ranking, thus producing a soundbite small enough for a press release or a gold star that you can affix to a bus or train.
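For concreteness, the weighting step could be sketched like this. The variables, scores, and weights below are invented for illustration; they are not anything the post proposes:

```python
# A minimal sketch of the composite-scoring step described above.
# Scores are on a 0-100 scale; weights are the value judgements
# that the post argues are the hard part.

def composite_score(scores, weights):
    """Combine per-variable scores into one weighted number."""
    total_weight = sum(weights.values())
    return sum(scores[v] * weights[v] for v in weights) / total_weight

line = {"frequency": 80, "span": 70, "travel_time": 60}
weights = {"frequency": 0.5, "span": 0.3, "travel_time": 0.2}

print(round(composite_score(line, weights), 2))  # 73.0
```

The arithmetic is trivial; the point is that every choice of weights embeds a judgement about, say, whether frequency matters more than span.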
In North America, we have to depend on the self-reporting of transit agencies for many of the key variables that we all know are important, such as frequency, operating hours, and travel time, as well as outputs such as ridership. Most aggregate reporting is about the performance of agencies as a whole, or at most the performance of each of an agency’s technologies. Much of this is too coarse to tell us anything useful as consumers.
For example, the US National Transit Database, composed from mandatory reports to the Federal Transit Administration from every transit agency, will document the performance of a “bus” product, but that category conflates a frequent all-day inner-city line with a one-way express from a Park-and-Ride to a worksite that runs only a couple of trips at rush hour. To my mind, those two bus services are useful for completely different purposes in completely different development patterns. Therefore, they are vastly more different than, say, a bus and a streetcar running frequently at similar speeds on similarly dense corridors in the inner city. The categorisation of service by technology is easy for agencies to report, but it’s not always what matters, unless you’re a pure technology advocate.
In the early 1990s, I did a project for the Washington State Transit Association to develop a new reporting system for that state’s transit agencies. I wanted the agencies to report the performance of every line separately, so that the database could aggregate the results in different ways according to the needs of each query. For example, I wanted a quick report on the overall statewide performance of outer-suburban circulator routes, because I thought it would tell us (a) how much variation there really is in outcomes and (b) what services are getting the best outcomes, which would lead us to ask (c) what are those best-practice services and how can we learn from them?
Suffice to say, the idea didn’t catch on, because it was too much work for the agencies given the quality of databases at that time. Now, however, it may be possible. We may not need to wait for governments or transit agencies to do it. If you care about ridership, you’d need the agencies to give you route-level ridership data, but you can do the rest of the analysis yourself. If you just want to grade by service quality, you may already have all the data you need. Do you want to do a large-scale study of the kinds of travel time achieved by different transit agencies in similar markets? Easy: Develop a sampling algorithm and then hire a bunch of kids to run queries using the agency’s trip planner or Google Transit. Do you want to study reliability, using your own definition of how long of a gap between consecutive trips on a line is tolerable? Easy, again with some mass-labour or maybe clever programming: Use an on-line real-time information app, such as NextBus, to query actual vehicle spacing over and over, and build as large a database as you like.
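The reliability study could start from something as simple as this sketch. Assume the repeated polling of a real-time feed (NextBus or similar) has already produced a log of the times, in minutes, at which successive vehicles on one line passed a given stop; the function name and the 15-minute tolerance are my own illustrative assumptions:

```python
# A toy version of the reliability study described above: given logged
# pass-by times for consecutive vehicles at one stop, compute the gaps
# and the share of gaps within a chosen tolerable maximum.

def gap_reliability(arrival_times, max_tolerable_gap):
    """Return (gaps, share of gaps <= max_tolerable_gap)."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    ok = sum(1 for g in gaps if g <= max_tolerable_gap)
    return gaps, ok / len(gaps)

# A nominally 10-minute service with one large gap in the sample.
observed = [0, 10, 21, 29, 48, 58]
gaps, share = gap_reliability(observed, max_tolerable_gap=15)
print(gaps)   # [10, 11, 8, 19, 10]
print(share)  # 0.8
```

Run over thousands of samples per line, this is exactly the kind of database the mass-labour or clever-programming approach would build.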
This kind of work needs to be done, aggregated, and reported in ways that can be queried from many different angles, not just the technology distinctions that FTA reports. I suspect that enough people would care about it that it could be a profitable business for someone, or at least the basis for a fundable research project.
Yes, to get sound bites for the press, you might have to aggregate all the measured performance indicators and produce a “best transit lines” ranking or a system of gold, silver, and bronze medals. To do that, you’ll have to decide whether reliability is more important than frequency. That’s like asking if the lungs are more important than the heart.
The problem with a ranking is that we already know which routes will get the top scores, because they’re the ones that have high ridership. Do we really need to be told that the best local bus routes in America are the north-south buses in Manhattan, the buses they run on Geary, and maybe a couple of errant lines in Brooklyn, Chicago, and Los Angeles?
I assume from your examples that you mean the top scores in ridership per unit of cost. Two responses.
1. Yes, such a scoring is useful even if it proves things that are obvious to you and me, such as that density drives ridership. People love empirical data and discovering things in it for themselves.
2. Much more useful is the ability to sort services into groups of similar things and compare like with like. For example, if your city doesn’t have a Wilshire corridor, it’s not going to get Wilshire corridor transit outcomes or care about that. What it needs is to see how peers are doing at serving similar kinds of markets across the country. That’s what I was trying to do in the Washington State study, and that’s the kind of reporting and scoring we need more of.
Here’s my take:
Firstly, I would base the measurement on ‘urban areas’ or ‘cities’ rather than ‘transit systems’. This would overcome the issue where a metropolitan area is served by multiple transit authorities or systems based on (say) 1920s city boundaries, or where a city has a subway or frequent trams in the inner suburbs but much less service beyond.
This ‘Metropolitan Region’ could be based on land-use factors, residential density, functional relationships, distance from the CBD, etc., with rural-type areas closer in to the CBD removed. It could be a statistical region that may or may not be based on local government boundaries. In a large Australian capital city it would be the populated part of the metropolitan area up to, say, 40 to 60 km from the CBD.
Once the urban area has been defined one can count the number of people and jobs. Then assign a percentage, eg 80% of residents or 90% of jobs, that you want to apply the transit service standard to. Combined with an access distance or time (eg 800 metres or 10 min walk to the nearest stop or station that meets full service standards) this is our coverage standard. Hence ‘80% within 800 metres…’
There needs to be a service span standard. For any large city worthy of the name this needs to be from early morning to late evening, 7 days a week, and some form of service 24/7 for the really major cities. But for this purpose let’s say full-time service = 6am to midnight (maybe with a later start on Sunday).
Thirdly, a frequency standard. Ideally this should be a constant figure, whether day or night, weekday or weekend. Along with coverage and span, frequency is one of the three elements that define the capability of a transit system (more here http://melbourneontransit.blogspot.com/2006/07/capability-choice-and-capacity-three.html ). A more detailed look at what various frequency levels mean is here: http://melbourneontransit.blogspot.com/2006/06/service-frequency-theory-and-transport.html I will suggest a frequency of 15 minutes.
Ideally this would apply throughout the service span, but at least in Australian cities this is a high, rarely attained standard (Brisbane BUZ routes may be an exception), so a 30 min level for evenings (especially) but also weekends might be more useful for comparisons.
So our comparison could be along the lines of X% of the population is within 800 metres of a full-time service that runs at least every 15 minutes day/30 min night and weekends.
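As a sketch of how that combined standard could be checked mechanically, something like the following would do. All the data, field names, and example figures below are invented for illustration:

```python
# Peter's combined standard: share of residents within 800 m of a stop
# whose service runs full-time (6am to midnight, coded as hours 6-24)
# at 15 min daytime / 30 min night-and-weekend headways.

FULL_TIME = (6, 24)       # 6am to midnight
MAX_DAY_HEADWAY = 15      # minutes
MAX_NIGHT_HEADWAY = 30    # minutes, evenings and weekends

def stop_qualifies(stop):
    """Does this stop's service meet the span and frequency standard?"""
    return (stop["first_hour"] <= FULL_TIME[0]
            and stop["last_hour"] >= FULL_TIME[1]
            and stop["day_headway"] <= MAX_DAY_HEADWAY
            and stop["night_headway"] <= MAX_NIGHT_HEADWAY)

def coverage(residents):
    """residents: list of (population, metres to nearest qualifying stop)."""
    total = sum(p for p, _ in residents)
    covered = sum(p for p, d in residents if d <= 800)
    return covered / total

stops = [
    {"first_hour": 6, "last_hour": 24, "day_headway": 15, "night_headway": 30},
    {"first_hour": 6, "last_hour": 21, "day_headway": 20, "night_headway": 40},
]
print([stop_qualifies(s) for s in stops])  # [True, False]

# (population, distance in metres to nearest qualifying stop)
blocks = [(1000, 400), (2000, 700), (1500, 1200)]
print(round(coverage(blocks), 2))  # 0.67
```

The real work is in the inputs: a population grid and a stop-level service database, which is exactly what the route-level reporting discussed above would provide.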
This is still not foolproof – eg Melbourne has a lot of 20 minute running on trains and trams (and 30 min on its better bus routes), so only a small proportion of the urban area would meet a 15 min standard, but much more would meet a 15/30 min standard.
Perth has 15/30 on its trains and a relatively small number of its better bus routes.
Brisbane trains don’t comply but its BUZ routes do. The same could be said of Adelaide which has a substantial number of ‘Go Zone’ bus routes that would comply with a 15/30 standard.
Sydney varies with some very good STA bus routes and well served stations but many that are less so.
Canberra is not even in the race with only a good intertown route that only a tiny percentage of the population can walk to.
As for the ‘proportion of residents near semi-good service’, my guess is that Adelaide might come out in front, as its Go Zone network is large relative to its city size, and probably proportionately larger than Melbourne trams and Sydney Buses (which nevertheless offer high service in the corridors they serve).
Hence the above performance measurement tends to favour cities with a large number of moderately frequent bus corridors compared to those where the high service is concentrated in the inner area (Melbourne trams), suburban rail (Perth) or a relatively small number of very frequent bus corridors (Brisbane).
Nevertheless it’s not a bad approach. The Victorian Government appears to be moving towards a statewide 7 day/week hourly service standard applying to Melbourne local buses, rail to major regional cities, and regional city local buses. The finish time is 9pm for Melbourne buses, a bit later for regional trains, and a bit earlier for regional city buses. The maximum access distance, though, is only 400 metres, so it is a tougher standard than my 800 metres.
Peter, the problem with those standards is that the US, Canada, Australia, and New Zealand all rank toward the bottom in their transit systems’ size and usefulness. You compare those areas’ public transport systems to one another, but any comparison to what’s available in Europe or East Asia will reveal glaring deficiencies. In Tel Aviv, itself a city with very bad public transit, some bus lines run on 3-minute headways during rush hour, and even so-so bus lines serving various neighborhoods run on sub-10-minute headways.
Seattle has a number of 15/30 routes set out in a table (http://transit.metrokc.gov/tops/bus/serv-freq.html), rather than a map (and fewer in number than Adelaide).
First off, thanks to Jarrett for picking up on my barebones idea and sharing his experiences with transit evaluation research. My thought was that it would be useful to create some kind of rating system to demonstrate what an “adequate” transit system (or as Peter Parker points out, the transportation available in an urban area) looks like, compared to a “good” or “excellent” or whatever labels would sound best. The aim is not to determine where Chicago or New York or Toronto rank on a list, but to see how they fare against some absolute criteria. How useful is being ranked #1, if large portions of the city are inaccessible by transit that is too infrequent, unreliable or restrictive in hours of operation?
As an example, take one of Peter’s criteria, “X% of the population is within 800 metres of a full-time service that runs at least every 15 minutes day/30 min night and weekends”. If a large city or region doesn’t meet that standard, they might achieve “Y% of the population is within 1 kilometer of a full-time service that runs at least every 30 minutes day/1 hour night and weekends” (where Y% is less than X%). So, the city gets a Bronze instead of Silver, and people may be upset enough to push their elected officials to improve transit in concrete ways, rather than vague promises of improvement and/or a big project that looks nice in the papers but only has a minimal effect on transit in the whole area.
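A toy grader along those lines might look like this. The tiers and thresholds are placeholders, since deciding them is exactly the hard part described above:

```python
# A sketch of the Bronze-Silver-Gold idea built on a coverage measure:
# each tier is a (walk distance, headway, minimum population share)
# requirement, checked from best to worst. Thresholds are invented.

TIERS = [
    # (medal, max walk distance in metres, max daytime headway in min,
    #  minimum share of population covered)
    ("Gold",   800, 15, 0.80),
    ("Silver", 800, 30, 0.80),
    ("Bronze", 1000, 30, 0.60),
]

def grade(city):
    """city: dict mapping (distance, headway) -> share of population
    within that distance of service at least that frequent."""
    for medal, dist, headway, min_share in TIERS:
        if city.get((dist, headway), 0) >= min_share:
            return medal
    return "Unrated"

example = {(800, 15): 0.35, (800, 30): 0.55, (1000, 30): 0.70}
print(grade(example))  # Bronze
```

The appeal of absolute tiers over rankings is visible even in this sketch: a city earns Bronze by meeting a stated standard, not by beating its neighbours.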
This point brings up the issue of criteria and how to weight different factors. I share Jarrett’s concern about how to measure subjective factors like comfort and safety (which often includes a great deal of perception beyond actual crime rates), and haven’t figured out how that could be assessed reliably across systems/regions – the only thing I could think of is to use ridership as a proxy, but that would be a very crude tool, especially if large segments of the population have no viable alternatives. In terms of weighting factors, my thought was to have a list of minimums for each level (e.g. minimum frequency, minimum coverage, minimum reliability, minimum operating hours). Of course, there would be value judgements and preferences involved in setting those minimums. For example, my thought would be that having good Sunday operations is not *as* important, at least for an “adequate” system, though someone without a car whose only option is to work on Sundays would probably disagree. These underlying values will probably be the focus of my next post on this idea.
Well, this comment turned out almost long enough to be a post in its own right! Thanks for the feedback, and I’ll keep at it to see if this leads me anywhere useful.
Sunday service is critical. You can see what happens when it’s not there in Israel, where public transit shuts down on Saturdays: people don’t commute by bus unless they have to, and whenever they can afford a car, they buy one.
As for the minimum standards, they’re only useful if you’re comparing bus systems in free-flowing traffic. In large cities, people expect subways, light rail, and feeder buses that are faster than walking. Speed, reliability, accessibility, and connectivity become critical factors. And ridership is really the only way of combining all of them – otherwise, you could always squint your eyes and twist your head and find some way in which New York has better transit than Tokyo.
Note that the example of Sunday is a reason why these things are likely to be culturally relative, to a degree.

Many US transit agencies are approaching the point where Saturday midday ridership looks a lot like weekday midday, and Sunday is catching up with Saturday. In Australia, where I live now, no city is close to that, though they’re moving in that direction.

My own view is that the 24/7 city is the future, and that if the goal is not just to raise ridership but to drive down car ownership, you need good Sunday and evening services so that people can feel more confident about relying on transit for many of their needs.

But I wouldn’t go into a small Australian or NZ city saying that, because Sundays are so quiet there that I’d be laughed at if I said there needed to be lots of service then.

So nobody is going to be able to define standards to that level of detail for comparison across all developed-world cases. However, there is a core notion of a frequency threshold (15 minutes seems pretty common) that does seem to work in a lot of cities as the level that makes transit convenient for spontaneous and flexible use. Offering such service 14 hours a day, five days a week seems to be the minimum, but it should extend to whatever time periods much of the city is active. Each city or transit agency has to make its own decision about when that is.
Jarrett, I’m just rehashing what’s been said in the mediocre service thread, but the very minimum span of service agencies should provide is at least 16 hours, something well into the night.
When a system shuts down at around dusk, that reflects the planners’ bias that most transit riders have a work schedule like the planner.
As service work has become more prevalent in the developed-world economy, it brings odd scheduling hours and nontraditional shifts. The hardest shift for a transit agency to serve is the swing shift, where a worker gets in during the afternoon and leaves late at night. A graveyard-shift worker at least has the option of taking the last evening bus in and the first morning bus home, should there be no owl service available.
This is why 16 hours should be the minimum standard and 20 hours should be the ideal. Owl (12 a.m. to 5 a.m.) services are a case-by-case basis and agencies should be prepared for severely underused services with no growth potential — or, as big cities have found, rolling homeless shelters.
Also, as you have mentioned, Saturdays and Sundays present a different service dynamic that transit agencies should also adjust to. Saturdays are still work days for many people, but they are also family days. Agencies should look at providing “friend & family fares,” where a party could ride for a reduced price. Sundays have the added complexity of church ridership, where a few trips can swell in ridership like school trippers.
Wad. Agreed, except for this little slur:
When a system shuts down at around dusk, that reflects the planners’ bias that most transit riders have a work schedule like the planner.
No, it reflects the fact that ridership plummets after the evening peak, unless you’re in a very big city with a very strong nightlife. In a smaller city or suburban area, evening ridership is generally terrible. Fewer people are traveling and concerns about personal security go up after dark.
Having said that, I’m surprised that more cities don’t have seasonally adjusted ending times, running later in the summer when it’s light later.
Especially the farther you get from the equator… here in Portland, OR, it’s pitch black outside at 5:30 PM in the dead of winter. And we’re only halfway between the equator and the pole…
5:30? In New York, which is at a lower latitude than Portland, the sun sets at 4:45 in December.
I said “pitch black”. According to this website, our sunset on Dec 21 will occur at 4:36, and the “end of civil twilight” will be at 5:07.
By 5:30, it’s darker than coal outside.
Hey, before you guys come to blows about when it gets dark, note that longitude is a big factor too, specifically where you are relative to the reference longitude of your time zone. In one town in Indiana, it gets dark at 5:00, while just down the road and in the next time zone, it gets dark at
Reading my post two days later, it sounds WAY more aggressive than I intended it to be. 🙂 Of course, Jarrett is correct.
Though Portland happens to be pretty far to the west as far as Pacific TZ goes.
“But to do that, you’ll have to decide whether reliability is more important than frequency, and that’s like asking if the lungs are more important than the heart.”
No! No! No!
Does this reflect the fundamental problem with planners? It is not up to us to decide this!
We are not designing passive infrastructure like a sewer line. We are offering a competitive service that people choose (or not).
We shouldn’t be deciding on standards, we should be asking the customer!!
And, in fact, there is data on what people want, coming from good transit agencies (e.g., Metra in Chicago) that survey their customers. #1 is reliability. I’ve forgotten the rest, except that I think travel time was more important than frequency of service.
I’d hope that a good transit agency has these answers for its own territory . . . and not as one big blunt-instrument aggregate number, but broken down by market segment, so they’d know if (for example) reliability is more critical for commuters in certain industries than in others.
So I realize this response is a little tangential to the main topic of ratings, except that I’d hope our ratings measure how successful the transit service is at meeting people’s needs, as measured by customer satisfaction and market share. [market share being a more important measure than ridership. If you’ve got half the people riding who work at a plant, that’s impressive — even if it’s only a 20 person shop.]
I’m not entirely sure who you’re saying “No! No! No!” to. It sounds like we’re in complete agreement.

The customer, of course, isn’t sovereign either. There are too many, and they’re too different from one another. But I certainly agree that customer surveys can be an important tool for settling these choices, and I hope you agree that the whole point of my post is that planners, or raters, shouldn’t be making these choices between competing goods.
I think every survey I’ve ever seen says that reliability is #1 in importance. This also matches the numbers: unreliability loses passengers at a tremendous rate, and reliability will regain them after a period of cautiousness.
So start out by measuring reliability. For a system which is significantly deficient on reliability, it simply doesn’t matter how good everything else is; people are going to perceive it as substandard.
Frequency and travel time are actually closely interrelated, because what actually matters to people is *personal* travel time, which depends on how long you have to wait for the bus or train! If you consider these jointly, I’m pretty sure the only other metric competing with it for importance (after reliability, which is #1) would be coverage — can you use transit to get there at *all*? And this is again, closely interrelated. So, somehow, there ought to be a way to measure the interrelated combination of all three of them… but I think, right now, there isn’t.
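The frequency/travel-time interrelation can at least be made concrete: under the standard assumption that riders arrive at a stop at random, the average wait is half the headway, so personal travel time for two services can be compared directly. The numbers below are illustrative only:

```python
# Personal (door-to-door) travel time as described above: expected wait
# plus in-vehicle time, with average wait = headway / 2 for riders who
# arrive at random rather than timing their arrival to a schedule.

def personal_travel_time(headway_min, in_vehicle_min):
    """Expected trip time in minutes for a randomly arriving rider."""
    return headway_min / 2 + in_vehicle_min

# A "fast but infrequent" service vs. a "slower but frequent" one:
express = personal_travel_time(headway_min=30, in_vehicle_min=20)
local = personal_travel_time(headway_min=10, in_vehicle_min=28)
print(express, local)  # 35.0 33.0
```

On these numbers the slower, more frequent service wins, which is exactly why frequency and travel time can’t be scored separately.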