Three priorities to unleash the potential of algorithmic forest carbon data for the VCM

On the margins of the recent North American Carbon World 2024, held in San Francisco, Chloris Geospatial and Pachama convened a roundtable discussion on algorithmic forest carbon data for the Voluntary Carbon Market (VCM). The meeting brought together a diverse group of stakeholders, including project developers, buyers, financiers, investors, philanthropic foundations, rating agencies, and other market intermediaries. 

During the meeting, participants discussed opportunities that algorithmic carbon data - specifically those derived from direct estimations of biomass stock and change - can unlock for carbon markets. We have written about those previously, and we invite you to read up on them in recent reports published by Chloris and Pachama.

The meeting also identified specific opportunities for innovation and collaboration to accelerate adoption of trustworthy algorithmic carbon data in support of the VCM. 

In this blog, we highlight three action areas that were discussed during the meeting.

  1. Policy change: Today, VCM standards typically require developers to follow a specific process ("methodology"). While many methodologies rely heavily on field data and activity data, an increasing number of standards and methodologies are starting to accept algorithmic / direct biomass data (ERS and Verra's VM0047 are good examples). During the meeting in San Francisco, there was a shared sentiment in the room that standards have an opportunity to catch up with technology innovations more broadly. More generally, there seems to be a need for a well-designed policy evolution to standardize carbon accounting: rather than prescribing and validating the process for getting to the required carbon numbers, VCM standards and registries could focus more on validating the numbers themselves.

  2. Shared data validation conventions: The accuracy of estimates matters. But because every biomass number is an estimate - including those derived from field data - the first question to ask when confronted with an accuracy assessment is: compared to what? To build trust and foster comparability of the "safety" and quality of algorithmic carbon data, there is a need for a high-quality, well-designed data validation infrastructure: a significant number of field plots implemented across biomes and ecosystem types. To ensure validation of both stock and change estimates over time, such a network would require a long-term commitment from funders and implementation partners, as well as an appropriate governance structure to ensure regular maintenance and re-measurement of permanent field plots. By providing a common measuring stick, such a network could help standardize the accounting without stifling innovation in methods, which is key for the market. It could also be accompanied by a quality label - similar to CCB, but for data quality - awarded to data that meets the requirements set by such a validation standard.

  3. Communicate and address technological limitations: There is no silver bullet, and every technology has its limitations. For example, it is challenging today to reliably and directly monitor early-stage tree growth from space. Instead of monitoring biomass change from space, more traditional approaches (e.g. field visits, visual interpretation of high-resolution imagery) can therefore complement the monitoring during the first 1-5 years of restoration projects. That said, continued improvements in sensor quality are expected to further enhance the data quality, data diversity, and cost-effectiveness of algorithmic carbon data, including for early-stage growth monitoring. By fostering collaboration and dialogue, developers, technology providers, and standards can help create a shared understanding of what is possible today and which technological innovations are most worthwhile to pursue for the benefit of the sector.

We held the meeting at the very beginning of North American Carbon World, and it was heartening to hear the discussion echo through the corridors and lounges for the rest of the conference. The topic clearly resonates with many, as does the commitment to building a stronger, more credible, more effective carbon market in which trustworthy algorithmic carbon data plays its role.

