by Anahid Jalali, Alexander Schindler (AIT) and Anita Zolles (BFW)

Explainable Artificial Intelligence (XAI) is gaining importance in various fields, including forestry and tree-growth modelling. However, challenges such as evaluating model interpretability, lack of transparency in some XAI methods, inconsistent terminology, and bias towards specific data types hinder its integration. This article proposes combining long short-term memory (LSTM) networks with example-based explanations to enhance the interpretability of tree-growth models. By generating counterfactual examples and engaging domain experts, critical features impacting outcomes can be identified. Addressing privacy protection and selecting appropriate reference models are also crucial. Overcoming these challenges will lead to more interpretable models, supporting informed decision-making in forestry and climate change mitigation.

Explainable Artificial Intelligence (XAI) is a rapidly developing field that seeks to create justifications for machine learning models' decisions. Interpretable machine learning (ML) techniques such as decision trees, linear regression and k-nearest neighbours have long been used to model complex phenomena and their associated data. However, the growing size and complexity of data have led to the development of more sophisticated methods. Consequently, recent research has shifted towards black-box models, typically deep learning models.

Figure 1: Protecting and managing forests with AI. Image source: Karlsruhe Institute of Technology.

With the increasing popularity of these large and intricate models, the demand for explainability is also on the rise. This is particularly crucial in sensitive applications such as AI for forestry and tree-growth modelling, where decisions can have significant social and environmental consequences, and the models must be interpretable to support decision-making. Therefore, there is a growing need for explainable AI to provide transparency and accountability in decision-making processes.
However, there are still challenges with the existing approaches. Multiple studies have already reviewed the drawbacks of the current methods [1, 2]. To understand the challenges associated with integrating XAI approaches, it is essential to have a grasp of the principles and techniques of XAI, as well as of the organisation and content of the data.

  • Challenge 1. Evaluation of model interpretability: Multiple studies have approached this challenge with user-centric methods, proposing quantitative interpretability metrics to evaluate the effectiveness of XAI methods. However, a unified metric for assessing the quality of explanations remains an open question.
  • Challenge 2. Lack of transparency of some XAI methods: Some XAI methods use complex algorithms, such as neural networks, to generate explanations. While these methods can provide accurate explanations, it may not be easy to understand how they arrive at their conclusions.
  • Challenge 3. Inconsistent terminology: Researchers and practitioners may use different terms and definitions for similar concepts, leading to confusion and misunderstanding.
  • Challenge 4. Existing bias in developing XAI methods: There is a bias towards computer vision, tabular data, and natural language processing, and more research is needed on appropriate XAI approaches for other complex data, such as multivariate time series and geospatial data.
  • Challenge 5. Integration of domain knowledge into XAI methods: Domain knowledge can provide crucial insights into the underlying data-generating processes and the context in which the data is collected, which can help enhance the model's interpretability and accuracy.
  • Challenge 6. Explanation coverage of the underlying black-box model: The provided explanations should accurately reflect the underlying black-box model. In other words, the explanations should cover enough of the model's decision-making process to enable users to understand the rationale behind its predictions.
  • Challenge 7. Lack of consideration for social and ethical aspects within XAI: The social and ethical aspects of XAI are often overlooked, and it is crucial to adapt privacy-preserving methods to ensure that XAI incorporates the necessary privacy-protection measures for the underlying data.

The use of AI for forestry and tree-growth modelling has emerged as a promising tool to mitigate the impact of climate change on forests. AI algorithms can help predict the future growth of trees and their response to climate change factors using time series data that capture temperature, rainfall and carbon dioxide levels. With this information, we can identify the best tree species for planting, optimise forest-management practices, and reduce carbon emissions through sustainable forest management [3]. Moreover, explainable AI can provide transparency and accountability in decision-making, making it easier to adopt and implement climate-friendly policies.

While the challenges in applying XAI to forestry and tree-growth modelling are similar to those in other fields, there are also unique hurdles that need to be addressed, such as the need for more data on certain aspects of tree growth, including root development and tree-competition factors. To tackle these obstacles, our research plan for AI in tree-growth modelling involves creating appropriate models and XAI techniques.

Long short-term memory (LSTM) networks, a variant of recurrent neural networks, show promise as a deep learning model for multivariate time series modelling. Recent studies indicate that LSTMs effectively capture temporal dependencies and relational information in time series data. As a result, LSTMs are increasingly used to accurately model and predict complex phenomena such as tree growth.
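
As an illustration of how such a model could be set up, the following minimal sketch shows a Keras LSTM that maps a multivariate climate time series to a growth prediction. It is not our actual implementation: the feature set, window length, layer sizes and the random placeholder data are purely illustrative assumptions.

```python
# Minimal sketch (illustrative only): an LSTM mapping a multivariate
# climate/soil time series to a tree-growth prediction.
import numpy as np
from tensorflow.keras import layers, models

N_STEPS = 36      # assumed window length, e.g. 36 monthly observations
N_FEATURES = 4    # assumed features, e.g. temperature, rainfall, CO2, soil moisture

model = models.Sequential([
    layers.Input(shape=(N_STEPS, N_FEATURES)),
    layers.LSTM(64),                 # captures temporal dependencies in the window
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                 # predicted growth increment
])
model.compile(optimizer="adam", loss="mse")

# Random placeholder data stands in for real dendrometer and climate records.
X = np.random.rand(256, N_STEPS, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```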

Our research hypothesis is that example-based explanations are more effective in explaining model decisions to end-users than gradients and heatmaps, which are more useful for ML developers. We believe that counterfactual examples, generated by manipulating attributes such as weather and soil parameters, can help us understand the impact of temporal changes in the past. By observing how the model's predictions differ under alternative conditions, we can identify critical features contributing to specific outcomes and enable stakeholders to make informed decisions based on the model's explanations.
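
Building on the model sketched above, the following hedged example illustrates the basic idea of such a counterfactual probe: one attribute is perturbed over a past window and the resulting change in the model's prediction is inspected. The feature index and perturbation size are illustrative assumptions; practical counterfactual methods additionally search for minimal, plausible changes rather than applying a fixed shift.

```python
# Sketch of a counterfactual-style probe on the LSTM from the previous example.
def counterfactual_delta(model, x, feature_idx, delta):
    """Change in predicted growth when one input feature is shifted by delta."""
    x_cf = x.copy()
    x_cf[:, :, feature_idx] += delta            # e.g. a uniformly warmer past window
    y_orig = model.predict(x, verbose=0)
    y_cf = model.predict(x_cf, verbose=0)
    return y_cf - y_orig

sample = X[:1]                                   # one observed growth history
effect = counterfactual_delta(model, sample, feature_idx=0, delta=0.1)
print("Predicted growth change under the counterfactual:", float(effect[0, 0]))
```

Repeating such probes across features and time windows gives a simple, example-based picture of which past conditions the model treats as decisive, which domain experts can then compare against their own understanding of tree growth.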

To enhance the explainability of predictive models for tree growth, we will combine LSTMs and example-based explanations. We will engage domain experts to develop and evaluate the required temporal visualisations and integrate temporal semantics. Moreover, using such explanations, we can identify errors and biases in the data, and with the feedback of domain experts, we can increase the data quality and, therefore, the model performance.

Additionally, privacy-preserving methods will be adapted to ensure that XAI accounts for the privacy protection necessary for data owners and for model robustness against adversarial attacks. Selecting the appropriate reference data and models is a significant challenge in XAI. Our approach includes evaluating the effectiveness of various reference models and selecting the best ones for each specific case. By addressing these challenges, we hope to develop more interpretable models that can be trusted by both experts and end-users, leading to better-informed decisions in forestry and tree-growth modelling.

References:
[1] A. Theissler, et al., “Explainable AI for time series classification: a review, taxonomy and research directions,” IEEE Access, vol. 10, pp. 100700-100724, 2022, doi: 10.1109/ACCESS.2022.3207765.
[2] U. Schlegel, et al., “Towards a rigorous evaluation of XAI methods on time series,” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea (South), 2019, pp. 4197–4201, doi: 10.1109/ICCVW.2019.00516.
[3] J. Olivar, et al., “The impact of climate and adaptative forest management on the intra-annual growth of Pinus halepensis based on long-term dendrometer recordings,” Forests, vol. 13, no. 6, p. 935, 2022, doi: 10.3390/f13060935.

Please contact:
Anahid Jalali, AIT Austrian Institute of Technology, Austria
