A recent post looked at the challenge of surveying the public to identify what mixture of taxes and fees they would be willing to pay to fund a widely desired infrastructure plan. In the Sydney Morning Herald's Independent Inquiry into public transport in Sydney, we did exactly that, using a survey team from the Centre for the Study of Choice at the University of Technology, Sydney. One commenter caught the crucial point about why polling is so difficult, and why its results are often hard to trust:
There’s always a difference between what people say they want, what they actually want and what they actually do.
Indeed there is. In the surveying biz, “what people say they want” is called stated preference. “What people actually do” is called revealed preference. Everyone prefers revealed preference data. We try to glean what we can from our current ridership data, for example. But revealed preference data is all about the past, and conditions are different in the future for which we’re planning. In fact, one common sign that a transit agency is conceptually stuck is when they think and talk only about their present riders, not new ones they intend to attract.
Infrastructure projects are all based on assumptions about how people will behave in the future. To talk about those, we can talk about revealed preference from other places. For example, we can quote ridership on one transit line as a reason to expect ridership on another.
But for a city like Sydney, which cannot dream of the kinds of Federal funding that US cities are used to, the real question is what people will tolerate paying — in taxes, fees, fares, and road charges — to fund a system that they support. And for that, we have to ask them. We’re asking them about a hypothetical: “What would you be willing to pay if …?” We’re stuck with stated preference data, and most of it is worthless.
You’ve probably done stated preference surveys. In our business, they ask questions like “Would you ride transit more if …?” or “Would you support a tax increase of $x to fund this rail project?” Most people have no idea. The question may not accurately describe the factors that would really determine their actual response. They may not understand what the funding sources (tolls, fares, etc.) would mean in their lives. So people guess. They make stuff up. They say what they think the surveyor wants to hear. They give different answers depending on the sequence in which the questions are asked. They generate mounds of meaningless data.
So we (or rather our survey experts at the University of Technology, Sydney) tried something a little more subtle, called a discrete choice experiment. It turns out that if you aggregate a lot of stated preferences from the same person, they can add up to something like a revealed preference.
We wanted to find a combination of funding sources that would attract majority support and be adequate to fund the investment program. Rather than just ask people single hypothetical questions, we asked each person about a whole series of scenarios, each with different combinations of funding sources. We compared a “high investment in public transport” scenario, which we already knew to be the most popular, with a “high investment in roads” scenario and a “low investment in both” scenario. Respondents went through about 10 of these. They were asked to identify which scenario they preferred, but each time the specific funding sources and levels were different. Here’s what just one of the questions looked like.
(Obviously, this has to be done on the web rather than over a phone. Our survey “panel” was 2400 people who are used to doing surveys on the web, but who are, in all other respects, strictly representative of the entire population in terms of age, gender, income, and all the other usual demographics.)
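To make the design concrete, here is a minimal sketch of how such choice tasks might be assembled. The scenario names follow the post, but the funding attributes, levels, and dollar ranges are my own illustrative assumptions, not the survey's actual design:

```python
# Hypothetical sketch of a choice-task generator for a discrete choice
# experiment. Each task pairs the three investment scenarios with randomly
# drawn funding levels; the attributes and ranges below are illustrative.
import random

SCENARIOS = [
    "high investment in public transport",
    "high investment in roads",
    "low investment in both",
]
FARE_INCREASES = [0.00, 0.50, 1.00, 1.50]   # $ per trip (assumed levels)
TOLL_LEVELS = [0, 2, 4]                     # $ per crossing (assumed)
LEVY_LEVELS = [0, 50, 100]                  # $ per household/year (assumed)

def make_task(rng):
    """One choice task: every scenario gets its own random funding mix."""
    return {
        scenario: {
            "fare_increase": rng.choice(FARE_INCREASES),
            "toll": rng.choice(TOLL_LEVELS),
            "levy": rng.choice(LEVY_LEVELS),
        }
        for scenario in SCENARIOS
    }

rng = random.Random(42)
tasks = [make_task(rng) for _ in range(10)]  # ~10 tasks per respondent
```

Because the funding levels vary independently across tasks, each respondent's sequence of choices reveals how their support shifts as the price of each scenario changes.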
By asking a person to think about a range of different scenarios, the survey could observe the funding levels that marked the limit of each person’s tolerance. If one respondent tended to support the high-public-transport scenario except in cases where the required fare increase was more than $1, we could conclude that this respondent was saying, in effect, “I support fare increases up to but not beyond $1 to fund the high-public-transport program.”
The key idea of a discrete choice experiment is that instead of asking people what their highest tolerable rate would be, it observes them making the choice. Essentially, discrete choice experiments create an environment in which a stated preference can emerge as a revealed preference: something we see in a person’s choices but not something we need them to be able to explain.
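As a minimal sketch of how a threshold can be read off a respondent's choices, the snippet below uses a simple "highest fare increase still accepted" rule on made-up data; the real analysis would fit a statistical choice model, but the intuition is the same:

```python
# Illustrative only: inferring one respondent's fare-increase tolerance
# from a series of discrete choices. The data and the max-accepted rule
# are assumptions for exposition, not the survey's actual method.

def tolerance_threshold(choices):
    """choices: list of (fare_increase, chose_high_pt) pairs.
    Returns the largest fare increase at which this respondent still
    chose the high-public-transport scenario, i.e. an estimate of
    where their support ends."""
    accepted = [fare for fare, chose in choices if chose]
    return max(accepted) if accepted else 0.0

# One respondent's answers across ten scenarios (fare increase in $,
# True = chose the high-public-transport package).
responses = [
    (0.25, True), (0.50, True), (0.75, True), (1.00, True),
    (1.25, False), (1.50, False), (0.90, True), (2.00, False),
    (1.10, False), (0.60, True),
]

print(tolerance_threshold(responses))  # → 1.0
```

This respondent never explicitly stated "$1 is my limit"; the limit emerges from the pattern of their choices, which is the point of the method.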
The results were pretty cool. We did find a mix of funding sources that got bare majority support and were sufficient to fund the infrastructure program. We actually found a range of them, each supported by a slightly different majority group. I’ll talk more about them in another post.
Can’t you use modeling to find people’s revealed preferences? I’m not so sure about local transit, but with HSR, SNCF’s projections use calibrated values of price elasticity, travel time elasticity, value of time calculations, and induced demand expectations.
@Alon. Modeling extrapolates unknown future preferences from revealed preferences observed in the past. When we speculate about future patronage on a new high-speed rail line by citing the patronage of existing ones, we’re doing the same thing. Modeling is just a computerized aggregation of that same kind of speculation.
The elasticities that modeling uses are indeed based on revealed preference, but this does not reduce the fundamental uncertainties involved in guessing what people would do in the future based on what other people have done in the past.
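To show the kind of extrapolation at issue, here is a toy constant-elasticity demand adjustment of the sort ridership models use. The elasticity value is a hypothetical assumption for illustration, not SNCF's calibrated figure:

```python
# Illustrative only: a constant-elasticity demand model. The -0.3
# elasticity below is an assumed value, not from any cited projection.

def project_demand(base_ridership, base_fare, new_fare, elasticity):
    """Constant-elasticity model: Q_new = Q_old * (P_new / P_old) ** e.
    The elasticity e is estimated from past (revealed) behavior and
    assumed to hold for the future scenario being projected."""
    return base_ridership * (new_fare / base_fare) ** elasticity

# Project ridership after a 10% fare increase, assuming e = -0.3:
print(round(project_demand(100_000, 2.00, 2.20, -0.3)))
```

The formula is simple; the uncertainty lives entirely in whether an elasticity estimated from past behavior will still describe future travelers, which is the point made above.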
It wasn’t clear to me from this post, but were you attempting some type of conjoint analysis here?
More interesting for me: are the authorities actually going to implement a programme based on the findings of your report?
I’ve worked a lot in ‘evidence based policy’, but in the UK at least a lot of it just seems to be “do what we want and worry about evidence later”, so I’m understandably a bit cynical.