An article in the San Francisco Examiner challenges the long-held supposition that 30 percent of all traffic in a city is actually cruising, looking for parking. Read the article here.
The gist is that the 30% number is an ‘average’ taken from 10 studies done over 80 years. Don Shoup says the “cruising” number is 34%, but adds that it really doesn’t make any difference, since cruising causes a level of traffic that is polluting and expensive. Fair enough. He adds that the 30% number is a good one to use, as “It’s a harmless, very shorthand way to get across an idea that they plucked from a book.”
Let me get this straight. Did San Francisco set up a $25 million, five-year test program (SF Park) designed to reduce congestion and increase parking availability based on a “harmless, very shorthand way to get across an idea that they plucked from a book”?
If we don’t have good statistics to start with, how do we know if we are making headway? What if the real number is 40% and we bring it down to 30%? We think we have done nothing, but we have. On the other hand, what if the real number is 20% and, after an investment of much time and treasure, we find that it is still 20%? We think we are a grand success (we assumed we started at 30%). We think we knocked the number down by a third, but in fact we did nothing.
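The baseline problem above can be put in plain arithmetic. The little sketch below is my own illustration (the function name and the percentages are just the examples from the paragraph, not SF Park data): it compares the reduction we *think* we achieved against an assumed 30% baseline with the reduction we *actually* achieved against the true one.

```python
# Hypothetical illustration: how an assumed 30% cruising baseline
# distorts the measured success of a congestion program.

def perceived_vs_actual(assumed_baseline, true_baseline, measured_after):
    """Return (perceived_change, actual_change), each as a fraction
    of the corresponding baseline."""
    perceived = (assumed_baseline - measured_after) / assumed_baseline
    actual = (true_baseline - measured_after) / true_baseline
    return perceived, actual

# Case 1: true share was 40%, the program brought it down to 30%.
# Against an assumed 30% baseline we perceive zero progress,
# though we actually cut cruising by a quarter.
print(perceived_vs_actual(0.30, 0.40, 0.30))

# Case 2: true share was 20% and is still 20% after the program.
# We perceive a one-third reduction that never happened.
print(perceived_vs_actual(0.30, 0.20, 0.20))
```

The only input that matters is the true baseline, which is exactly the number nobody measured.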
How does one determine if a car is cruising for a parking space or is simply driving through the neighborhood to get to some other area? Do we ask people stopped at traffic lights, and how do we get a good representation?
In my deepest memory I recall that Don told me he had grad students stand on the rooftops of an area near UCLA and track cars up and down the streets. How do you do that? I see a car and then I watch it, but what of the other cars? Do I video all the streets in an area and then find each car and see how long it stays in the area? It just seems extremely difficult to get a good representative number. Maybe that’s why only 10 studies have been done over 80 years.
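To see how much bookkeeping even the video approach implies, here is a toy sketch of the classification step. It is my own invention, not any study's actual method, and the threshold and the sample "sightings" are made up: a car that leaves quickly is through traffic, one that lingers without parking is counted as a possible cruiser. Even this simple rule dodges the hard question of the car that searches and then parks.

```python
# Toy cruising classifier -- an illustration of the bookkeeping a
# rooftop/video survey would need, not a real study's methodology.
from dataclasses import dataclass

@dataclass
class Sighting:
    plate: str
    entered: float   # seconds since survey start
    exited: float
    parked: bool     # did an observer see it take a space?

def cruising_share(sightings, max_drive_through=120):
    """Fraction of observed cars that lingered past the drive-through
    threshold without ever parking."""
    cruisers = sum(1 for s in sightings
                   if not s.parked and s.exited - s.entered > max_drive_through)
    return cruisers / len(sightings)

survey = [
    Sighting("A", 0, 60, False),    # drove straight through
    Sighting("B", 0, 400, False),   # circled for minutes, never parked
    Sighting("C", 0, 300, True),    # searched, then parked -- cruiser or not?
]
print(cruising_share(survey))
```

Note that car “C” never gets counted, even though its five minutes of searching is exactly the traffic Shoup is talking about. The answer you get depends entirely on definitions like these.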
I listened with interest to the presentation by Jay Primus, the head of SF Park, at the European Parking Association conference held last week in Dublin. He was questioned extensively about the nitty-gritty of his talk, and successfully sidestepped each question. He had a lot of good PR, but virtually no data. I feel for him, since getting the correct data is the hard part.
The first questions that might be asked are: “Did SF Park have good data on traffic counts and circulation before the program went into effect?” and “Assuming they did, how did they collect cruising data after the program was under way?”
SF Park relies heavily on input from in-street sensors, and Jay freely admits they are an “emerging technology that faces challenges.”
OK, so where are we? We have a “number plucked from a book.” We have occupancy data that is based on an “emerging technology that faces challenges.” We have over $25 million spent on a test program that is winding up. And the citizens of SF are finding they are getting new parking meters in places where there was never a meter before, based on what?
Jay says the income from parking in the city is up a bit, but citations are down, so it’s a wash. (Dynamic parking pricing was to solve the cruising problem by reducing prices in some areas and raising them in others to achieve a 15% vacancy factor. The jury is still out as to whether this actually worked, that is, whether it produced a 15% vacancy factor, which would reduce cruising since people could then easily find a space.)
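The dynamic-pricing idea itself is simple to state. Here is a minimal sketch of an occupancy-targeting rate rule; the function, the step size, and the rate bounds are my own made-up illustration, not SF Park's published algorithm, with the target set at the 15% vacancy (85% occupancy) figure from the program's stated goal.

```python
# Toy demand-responsive pricing rule -- an illustration, not SF Park's
# actual algorithm. Target: roughly 15% vacancy (85% occupancy).

def adjust_rate(hourly_rate, occupancy, target=0.85, step=0.25,
                min_rate=0.25, max_rate=6.00):
    """Nudge the meter rate toward the occupancy target.

    occupancy: observed fraction of spaces occupied on the block (0.0-1.0).
    """
    if occupancy > target + 0.05:        # block too full: raise the price
        hourly_rate += step
    elif occupancy < target - 0.05:      # too many empty spaces: lower it
        hourly_rate -= step
    return min(max_rate, max(min_rate, hourly_rate))

print(adjust_rate(2.00, 0.95))  # crowded block: rate goes up
print(adjust_rate(2.00, 0.60))  # underused block: rate comes down
print(adjust_rate(2.00, 0.85))  # on target: rate unchanged
```

Notice the catch: the whole rule turns on `occupancy`, which comes from those in-street sensors. If the sensor data is shaky, the prices chase noise.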
The PR and branding part of the program was second to none. Jay’s presentation in Dublin was resplendent in its discussion of how SF Park was ‘sold’ to the residents of San Francisco. The web sites are beautiful, the apps are up and working. Wow!
But what really happened on the street? Do we know?
In the end, maybe Don Shoup is channeling our former Secretary of State: “What difference does it make?”