In the lead-up to his appearance as an expert panellist at Smart City Expo World Congress, Stefan Webb blogs about planning and the future of cities. As part of our Future of Planning project, Stefan explores how the data found in planning documents could help cities around the world make the planning process more efficient.
Join the conversation by following @Stef_W and #FutureofPlanning on Twitter.
Cities are sitting on a wealth of valuable data locked away in planning documents. Now they need to learn how to make use of a resource that’s entirely in their control, explains Stefan Webb.
Big data, artificial intelligence, and visualisation are transforming the way that people process and interpret information. But the methods used by many cities to plan new developments creak with age and smack of desperate inefficiency. It’s time that those systems caught up with the modern world.
The processes in place within city authorities to gather information about sites, compare proposals from developers, and engage with citizens are certainly rigorous, and produce huge quantities of data at no small expense. If you're sufficiently determined, you can find it in the appendices of local plans, and those brave enough to bother will discover reams of data, pages of tables, and an atlas's worth of maps. But as well as finding it hard to understand, they'll also see that it's locked up inside PDFs that are difficult for machines to search and analyse.
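The contrast between PDF-bound evidence and machine-readable data can be made concrete with a small sketch. The table below is a hypothetical local-plan evidence base (the site IDs, wards, and figures are invented for illustration); once published as structured text rather than a flat PDF page, the same figures can be parsed, totalled, and republished without anyone re-keying them.

```python
import csv
import io
import json

# Hypothetical evidence-base table of the kind buried in a local-plan
# appendix, here as machine-readable CSV rather than a flat PDF page.
EVIDENCE_CSV = """site_id,ward,hectares,capacity_homes
S001,Riverside,2.4,120
S002,Oldfield,0.9,45
S003,Hilltop,5.1,310
"""

def csv_to_records(csv_text):
    """Parse a machine-readable evidence table into typed records."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = []
    for row in reader:
        row["hectares"] = float(row["hectares"])
        row["capacity_homes"] = int(row["capacity_homes"])
        records.append(row)
    return records

records = csv_to_records(EVIDENCE_CSV)

# Once structured, the data can be queried and republished (e.g. as JSON)
# for other services to consume.
total_capacity = sum(r["capacity_homes"] for r in records)
print(json.dumps({"sites": len(records), "total_capacity": total_capacity}))
```

None of this is possible when the only published form of the table is an image-like page inside a PDF.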
In the offices of the architects and developers who bring those developments to life, though, things look a little different. There, before bricks or steel are even considered, data, models, and digital maps are used to explore sites, proposals, and plans in exquisite detail. Crucially, these organisations have come to realise the value of maintaining easily accessible data, which they can draw on quickly, easily, and repeatably.
By comparison, the cost of generating data to support local plans is sunk the moment it's dumped into a series of analogue reports and planning applications. Not only do local planning authorities have to commission new studies, time after time, to obtain the same evidence, but because it's stored away in a PDF it can't easily be used to inform other services.
For example, many of the datasets collected as part of a housing market assessment are the same as those which inform a community infrastructure levy, a strategic housing land availability assessment, or an infrastructure capacity assessment. But, bewilderingly, the information for these four studies is all procured separately. And any synergies or interdependencies between the four are managed by hand, so the process is slow, with errors and loss of fidelity at every step.
The problem is exacerbated when different city departments decide to commission their own data-driven exercises to understand, say, the demand for school places, pressure on GP services, or where new job opportunities will be arising in the near future. Data from planning documents could easily be re-used to help provide such insights, but instead it’s gathered once more at high cost.
What’s needed, then, is for cities to hold their spatially relevant data in one place, where it can be used over and over again, not just for multiple plans but across departments. Such a system would not just provide efficiency savings by reducing the cost of updating the evidence base for local plans, but also ensure that everyone is working with the same figures and assumptions, and make it easier to build tools to access, interpret, and analyse the data.
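A minimal sketch of what such reuse looks like in practice: one canonical dataset feeding two different studies, instead of each study procuring the same evidence separately. The dataset, the per-study calculations, and the pupils-per-home ratio are all illustrative assumptions, not real planning methodology.

```python
# One canonical, spatially relevant dataset (figures are hypothetical).
SITES = [
    {"site_id": "S001", "ward": "Riverside", "capacity_homes": 120},
    {"site_id": "S002", "ward": "Oldfield", "capacity_homes": 45},
    {"site_id": "S003", "ward": "Hilltop", "capacity_homes": 310},
]

def housing_land_availability(sites):
    """Housing study: total deliverable homes per ward,
    derived from the shared dataset rather than a fresh commission."""
    totals = {}
    for s in sites:
        totals[s["ward"]] = totals.get(s["ward"], 0) + s["capacity_homes"]
    return totals

def school_place_demand(sites, pupils_per_home=0.3):
    """Education study: rough pupil yield from the very same sites.
    The 0.3 pupils-per-home ratio is an illustrative assumption."""
    return round(sum(s["capacity_homes"] for s in sites) * pupils_per_home)

availability = housing_land_availability(SITES)
demand = school_place_demand(SITES)
```

Because both studies draw on one source, they automatically share the same figures and assumptions, and updating the dataset once updates every study built on it.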
Greater Manchester has already shown that it is possible to generate and reuse planning data in this way. Its Open Data Infrastructure Map shows key infrastructure across the entire region in one open and accessible location. But it has gone further, using the same mapping platform to seek suggestions for new development sites and to automate parts of the shortlisting process.
At Future Cities Catapult we have built on the work of Greater Manchester with their Infrastructure Advisory Group. Together, we have developed a working prototype of a tool that could replace infrastructure viability studies by bringing together data from utilities companies and city planners. The tool lets city planners and utilities companies see where and when infrastructure networks are likely to run out of capacity, helping them plan more collaboratively and effectively.
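The kind of question such a tool answers can be sketched very simply: given a network's spare capacity and the extra demand that planned development adds each year, in which year does capacity run out? The function and all the figures below are illustrative assumptions, not Future Cities Catapult's actual model.

```python
def year_capacity_exhausted(spare_capacity, annual_new_demand, start_year=2025):
    """Return the first year in which cumulative new demand exceeds the
    network's spare capacity, or None if there is no demand growth.
    A deliberately simple model: constant annual demand growth."""
    if annual_new_demand <= 0:
        return None
    year = start_year
    remaining = spare_capacity
    while remaining >= annual_new_demand:
        remaining -= annual_new_demand
        year += 1
    return year

# e.g. a substation with 10 MVA of headroom, with planned sites adding
# roughly 2.5 MVA of demand per year (figures invented for illustration)
exhausted = year_capacity_exhausted(spare_capacity=10.0, annual_new_demand=2.5)
```

In a real tool the demand profile would come from the planning pipeline and the headroom from utilities' network data, but the value is the same: both sides see the pinch point years in advance and can plan around it together.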
Who pays for all this? Well, much of the evidence required for local plans is driven by national legislation, and the costs of building planning data platforms are too large to be borne by any single planning authority. So, ideally, central government should be investing in UK local authorities and companies to prototype the planning system of the future.
A city data environment that functions in this way will allow local authorities to maximise the value of the data that is generated as part of the planning process. In turn, it will reduce the time it takes to produce local plans, and make them more transparent and understandable to citizens and developers. The data is there to be used; cities just need to realise its potential.