The following is the established format for referencing this article:
Yong, S., Y. T. Maru, A. Herr, T. G. Measham, and B. Loechel. 2022. A method for benchmarking two different resilience assessment methods. Ecology and Society 27(4):13.

ABSTRACT
Assessments of objective and subjective resilience are the main methods for understanding disaster preparedness and developing policy responses. However, the two assessment outcomes are not directly comparable. A model combining objective and subjective resilience assessments provides a better approximation of the underlying resilience. Building on previous comparisons, we have developed a robust Bayesian hierarchical model for comparing and linking these assessments. Three modeling scenarios provide the opportunity to explore the complex relationship between resilience data and explanatory variables under different conditions and prior knowledge about variables. Our strategy apportions uncertainty across the posterior distributions of all model parameters, instead of leaving all uncertainty to the variance parameter estimate, thus reducing methodological bias and providing robust uncertainty estimation. The approach allows connecting and comparing objective and subjective resilience assessments, laying a strong foundation for developing a fully integrated resilience assessment.

INTRODUCTION
As the world faces multiple and interacting perturbations, including climate change, biodiversity loss, and more recently the COVID-19 pandemic, resilience has become a rallying scientific concept. Resilience is a measure of a system’s capacity to cope with shocks and undergo change while retaining essentially the same structure and function. Resilience assessment enables evaluation of the state and trajectory of a social-ecological system and guides the building of absorptive, adaptive, and transformative capacities in regional economies (Walker et al. 2004, Walton et al. 2013, Filatova et al. 2016, Measham et al. 2019, Dogru et al. 2019, Walker 2020, Chu et al. 2021). Resilience assessments are critical to inform interventions for building society’s resilience in the face of stresses or shocks (see, e.g., O’Connell et al. 2019, Walker 2020).
A recent review of the literature found 56 community resilience assessment frameworks with 68 measures developed for diverse purposes and different systems and contexts (Walpole et al. 2021). Although there have been significant advances, gaps still exist in resilience assessment tools; they tend to be either qualitative or quantitative, using subjective or objective methods but rarely both (Rivington et al. 2007, Suwarno et al. 2009, Mayega et al. 2015, Zhang et al. 2018, Ensor et al. 2021, Walpole et al. 2021). The differences in assessment methods make a comparative analysis of resilience outputs difficult, which may detract from the acceptability of resilience assessment outcomes (Jones et al. 2021).
Differences in focal area and scale are prevalent, leading to difficulties in comparing resilience assessment outcomes and future trajectories. Some assessments focus on social resilience, whereas others focus on ecological resilience (see, e.g., Walker and Cooper 2011, Saja et al. 2019, Xiao et al. 2020). However, resilience, as an integrating concept, deals with the social-ecological system and assumes that people, communities, economies, societies, and cultures from local to global scales are tightly coupled with and depend upon nature (see, e.g., Folke et al. 2011, Leach et al. 2012, Allen et al. 2019, Grafton et al. 2019, Pimm et al. 2019, Jones et al. 2021).
In this paper, we focus on improving the integration of two major resilience assessment approaches, namely “subjective” and “objective” resilience assessments. These approaches provide insights into different aspects of resilience and a combination of both will provide a more complete estimate of community resilience. The subjective resilience approach engages with the community to develop the resilience assessment instrument. It elicits community members’ knowledge, experience, perceptions, and evaluation of resilience. The objective resilience approach selects variables and indicators for resilience assessment in a region without direct community input, for example from census data (Higuera et al. 2019, Jones and d'Errico 2019).
Establishing a measure of resilience from both approaches becomes particularly relevant when resilience assessments guide the development of policies, interventions, and support programs (Markantoni et al. 2019, Jones et al. 2021). Here methodological difficulties arise because current resilience assessment approaches are limited in their ability to fully integrate variables collected in different ways (see, e.g., Pimm et al. 2019). A tool that can connect objective and subjective resilience assessments would integrate different dimensions, thus improving rigor and acceptance of resilience measurements.
The resilience assessment model presented in this paper builds on recent work by Jones and d'Errico (2019), who have made significant progress in directly comparing objective and subjective resilience assessment results. They presented two measures, the Resilience Index Measurement Analysis (RIMA) and the Subjective self-Evaluated Resilience Score (SERS), for objective and subjective assessment of resilience, respectively, and both measures are a response to the same shocks and socioeconomic drivers (FAO 2016, Jones and d'Errico 2019).
Jones and d'Errico (2019) used the ordinary least squares method to estimate the parameters and variances of the two models and presented a comparison of the correlations between different variables within the SERS and RIMA models. Their work was seminal in providing a comparative view of these methodologically different resilience assessments. However, the comparison estimates the two models separately, which leads to inconsistent estimation of the combined model variability and introduces bias into the parameter estimation of the explanatory variables. Consequently, the unexplainable parts of the two evaluations become identical once the explanatory variables are connected; however, this connection might not be attainable in some circumstances. Their least squares estimation method gives only an optimized single-point estimate for each parameter, which cannot represent the uncertainties related to the correlation parameters. If the distributions of the correlation parameters are not well estimated, the variance parameter estimates contain additional errors, which further affects the uncertainty estimation. In summary, if the two models are estimated separately, the complex correlations within and across the different explanatory variables are not comparable for these two resilience assessments.
Our approach addresses the problem from a Bayesian modeling perspective. It overcomes the limitations of direct comparison by coupling the two assessments to obtain linked results. Specifically, this paper focuses on how to compare and connect “subjective” and “objective” resilience assessments using a Bayesian hierarchical modeling (BHM) framework. A Markov chain Monte Carlo (MCMC) sampling algorithm provides the analytical solution for the posterior distributions via simulation (Gelman et al. 2014). At the MCMC estimation stage, the two models are connected by a common link parameter through conditional independence. Therefore, errors resulting from separate estimation are greatly reduced. In addition, we considered different modeling scenarios that allow the unexplainable parts of the resilience assessments to differ.
This BHM framework allows modeling of complex correlations within and across the different explanatory variables and so allows us to account directly for uncertainties from the data, the model, and the parameter estimation. Our coupled approach enables researchers to understand the role of fixed effects in the assessments (i.e., the selected explanatory variables explain only part of the resilience assessment variance) because it determines the remaining variance as a function of the underlying process or other unknown fixed effects, including spatial variance. For example, even within a small area the explainable resilience between different households may vary. A combined approach is advantageous because it (1) helps in understanding how subjective and objective resilience assessments compare, (2) addresses the influence of correlation between explanatory variables, and (3) enables the quantification of other known or unknown fixed effects and underlying processes.
In simple terms, the model we are presenting provides a more robust way of comparing similarities and differences of objective and subjective resilience assessments than models currently provided in the literature. Our approach to linking the two resilience assessments also explicitly addresses uncertainties in a way that conventional models are unable to provide. This paper lays a foundation for an integrated objective and subjective resilience assessment model, which we currently have under development. This integration will provide even more accurate results with improved precision. Overall, this better estimation of the underlying resilience of a system (e.g., households, communities, regions) provides robust insights that inform policy and practice responses.
METHODS
Our approach implements a coupled modeling strategy to analyze the relationship between SERS and RIMA by using BHM. This type of Bayesian modeling provides a natural framework for combining data sets and accounting for uncertainties (Wikle 2010). It describes the two resilience assessments through the same explanatory linkage parameter. This link affords a “statistical communication” between the resilience assessments during the estimation process via conditional independence. It reduces the bias found in ordinary least squares estimation and improves uncertainty estimation.
The existing SERS and RIMA comparison approach, which uses ordinary least squares regression, means that the fixed effect contribution to the combined RIMA and SERS resilience may not be known. We estimate the model parameters through the MCMC method, which links SERS and RIMA in the estimation process through conditional independence. Our model improvements remove the constant area fixed effect bias, overcoming the issue of not knowing the size of the fixed effect. The deviations associated with the measurement errors from SERS and RIMA can be added to the estimates of the related parameters and variances. This overcomes the limitation of least squares estimation, which gives only an optimized single-point estimate for each parameter. Our improvement provides both parameter and variance uncertainty estimates for the combined resilience.
Here, we provide details aligned with the framework of Jones and d'Errico (2019), and we use the same explanatory variables. Later we provide extended Bayesian modeling details in Appendix 1. In our BHM model, the general model framework for SERS and RIMA is:
$$\text{SERS}_{hc} = \alpha_S\,\text{PROC}_{hc} + \beta_{S1}\,\text{SHOCK}_{hc} + \beta_{S2}\,\text{DRIVER}_{hc} + \varepsilon_S \qquad (1)$$
$$\text{RIMA}_{hc} = \alpha_R\,\text{PROC}_{hc} + \beta_{R1}\,\text{SHOCK}_{hc} + \beta_{R2}\,\text{DRIVER}_{hc} + \varepsilon_R \qquad (2)$$
PROChc is a process variable that represents a long-term mean (area fixed effect) or an underlying fixed resilience pattern such as the area effect. PROChc is a dynamic process that could also change with time if resilience data became available as a time series. The subscript h indicates household, and the subscript c stands for county, representing the area effect. The coefficient parameters αS and αR for SERShc and RIMAhc account for the measurement error and the bias linked to the fixed area effect, and they are conditional on both SERS and RIMA depending on the process variable PROChc. The variable PROChc functions as a linkage that connects both data models in the estimation process, thus forming a coupled modeling system. PROChc becomes a dynamic connection between the estimation of αS and αR by updating during each MCMC iteration, and the MCMC convergence mitigates the biases related to the measurement error and the fixed area effect.
SHOCKhc is a vector of variables for a series of defined shocks that integrates the correlation between resilience, climate effects, and impacts. DRIVERhc is a vector of variables for socioeconomic drivers of household resilience. Both are explanatory variables in our model.
The inference stage of the Bayesian model estimates the SERS parameter matrices αS, βS1, and βS2 (Equation 1) and the RIMA parameter matrices αR, βR1, and βR2 (Equation 2), with εS and εR being the covariance matrices for SERS and RIMA, respectively. This leads to a realistic and consistent estimation of the parameter matrices of the explanatory variables SHOCKhc and DRIVERhc and results in improved correlation coefficients between SERS and RIMA variables.
We develop three scenarios of PROChc estimation. Scenario 1 treats PROChc as an unknown process variable, Scenario 2 has PROChc as a known fixed effect, and Scenario 3 treats PROChc as an unknown fixed effect.
In the first scenario, the variable PROChc is an unknown process variable. Under this condition, PROChc is a common latent process variable in both assessments, which the explanatory variables SHOCKhc and DRIVERhc do not explain. We assume that resilience researchers understand the underlying process for both SERShc and RIMAhc, and that both are of the same magnitude. To demonstrate the model capability and to be consistent with the data type in Jones and d'Errico (2019), we assume that αS and αR are fixed to 1. Thus, the model is simplified from Equations 1 and 2 as:
$$\text{SERS}_{hc} = \text{PROC}_{hc} + \beta_{S1}\,\text{SHOCK}_{hc} + \beta_{S2}\,\text{DRIVER}_{hc} + \varepsilon_S, \qquad
\text{RIMA}_{hc} = \text{PROC}_{hc} + \beta_{R1}\,\text{SHOCK}_{hc} + \beta_{R2}\,\text{DRIVER}_{hc} + \varepsilon_R \qquad (3)$$
Here, PROChc is the linkage variable between both data models. Thus, the conditional distribution of combined resilience assessments model is:
$$p(\text{SERS}_{hc}, \text{RIMA}_{hc} \mid \text{PROC}_{hc}, \beta_S, \varepsilon_S, \beta_R, \varepsilon_R) = p(\text{SERS}_{hc} \mid \text{PROC}_{hc}, \beta_S, \varepsilon_S)\; p(\text{RIMA}_{hc} \mid \text{PROC}_{hc}, \beta_R, \varepsilon_R) \qquad (4)$$
Given this conditional distribution, the conditional independence of SERS and RIMA given the linkage variable PROChc, and the full conditional distributions of the parameters within the Bayesian modeling framework, MCMC can sample the parameters PROChc, βS, εS, βR, and εR.
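As an illustration of how this coupled Scenario 1 model might be specified in practice, the sketch below expresses Equations 3 and 4 in JAGS syntax called from R (the rjags package). This is a minimal sketch of our own, not the MATLAB implementation provided with the article: the node names (SERS, RIMA, X_shock, X_driver, PROC), the vague normal priors on the regression coefficients, and the gamma priors on the precision parameters are illustrative assumptions.

```r
# Minimal sketch (not the authors' code): Scenario 1 coupled model in JAGS syntax.
# PROC[i] is the shared latent linkage variable for household i (alpha_S = alpha_R = 1).
library(rjags)  # requires a JAGS installation

scenario1_model <- "
model {
  for (i in 1:N) {
    PROC[i] ~ dnorm(0, 0.01)   # vague prior on the latent linkage process
    mu_S[i] <- PROC[i] + inprod(beta_S1[], X_shock[i, ]) + inprod(beta_S2[], X_driver[i, ])
    mu_R[i] <- PROC[i] + inprod(beta_R1[], X_shock[i, ]) + inprod(beta_R2[], X_driver[i, ])
    SERS[i] ~ dnorm(mu_S[i], tau_S)   # SERS data model (Equation 3)
    RIMA[i] ~ dnorm(mu_R[i], tau_R)   # RIMA data model (Equation 3)
  }
  for (j in 1:P_shock) {
    beta_S1[j] ~ dnorm(0, 0.001)
    beta_R1[j] ~ dnorm(0, 0.001)
  }
  for (k in 1:P_driver) {
    beta_S2[k] ~ dnorm(0, 0.001)
    beta_R2[k] ~ dnorm(0, 0.001)
  }
  tau_S ~ dgamma(0.01, 0.01)   # gamma prior on the precision (the paper states gamma priors on the variance parameters)
  tau_R ~ dgamma(0.01, 0.01)
}"
```

Given PROC[i] and the regression parameters, SERS[i] and RIMA[i] are modeled as independent, so the factorization in Equation 4 is built into this specification.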
In the second scenario, the variable PROChc is a known fixed effect. Jones and d'Errico (2019) discuss multilevel models with fixed area effects and robust standard errors, in which households are nested in sub-counties and regions. They tested models with area effects and with area effects removed. We can incorporate a sub-county fixed effect through PROChc, which can either be a fixed area effect for each household or a known variable for each household (e.g., farm size or number of animals).
Under this condition, we can assume αS and αR to be equal and that there is one linkage parameter α, which links both models. Therefore, the model simplifies from Equations 1 and 2 to:
$$\text{SERS}_{hc} = \alpha\,\text{PROC}_{hc} + \beta_{S1}\,\text{SHOCK}_{hc} + \beta_{S2}\,\text{DRIVER}_{hc} + \varepsilon_S, \qquad
\text{RIMA}_{hc} = \alpha\,\text{PROC}_{hc} + \beta_{R1}\,\text{SHOCK}_{hc} + \beta_{R2}\,\text{DRIVER}_{hc} + \varepsilon_R \qquad (5)$$
In this context, the conditional distribution of the combined resilience assessments model is:
$$p(\text{SERS}_{hc}, \text{RIMA}_{hc} \mid \alpha, \text{PROC}_{hc}, \beta_S, \varepsilon_S, \beta_R, \varepsilon_R) = p(\text{SERS}_{hc} \mid \alpha, \text{PROC}_{hc}, \beta_S, \varepsilon_S)\; p(\text{RIMA}_{hc} \mid \alpha, \text{PROC}_{hc}, \beta_R, \varepsilon_R) \qquad (6)$$
On the basis of the conditional distribution and SERS’s and RIMA’s conditional independence, MCMC sampling provides the full conditional distributions of the parameters βS, εS, βR, and εR.
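To make the Scenario 2 specification concrete, the corresponding sketch below treats PROChc as known data (e.g., a sub-county fixed effect or farm size) and estimates the single linkage parameter α of Equation 5. As before, this is an illustrative sketch with assumed priors, not the authors' implementation.

```r
# Sketch: Scenario 2 model, with PROC supplied as data and one shared linkage parameter alpha.
scenario2_model <- "
model {
  for (i in 1:N) {
    mu_S[i] <- alpha * PROC[i] + inprod(beta_S1[], X_shock[i, ]) + inprod(beta_S2[], X_driver[i, ])
    mu_R[i] <- alpha * PROC[i] + inprod(beta_R1[], X_shock[i, ]) + inprod(beta_R2[], X_driver[i, ])
    SERS[i] ~ dnorm(mu_S[i], tau_S)   # SERS data model (Equation 5)
    RIMA[i] ~ dnorm(mu_R[i], tau_R)   # RIMA data model (Equation 5)
  }
  alpha ~ dnorm(0, 0.001)   # single linkage parameter shared by both data models
  for (j in 1:P_shock) {
    beta_S1[j] ~ dnorm(0, 0.001)
    beta_R1[j] ~ dnorm(0, 0.001)
  }
  for (k in 1:P_driver) {
    beta_S2[k] ~ dnorm(0, 0.001)
    beta_R2[k] ~ dnorm(0, 0.001)
  }
  tau_S ~ dgamma(0.01, 0.01)
  tau_R ~ dgamma(0.01, 0.01)
}"
```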
In the third scenario, where researchers do not have confidence in or prior knowledge of any fixed effects that affect resilience, PROChc is an unknown fixed effect or latent process. In this scenario, all the parameters are unknown. Consequently, αS and αR are two scale parameters that govern the magnitude of the latent process in each resilience assessment. The variable PROChc remains the linkage variable in this modeling scenario, and Equations 1 and 2 are rewritten as:
$$\text{SERS}_{hc} = \alpha_S\,\text{PROC}_{hc} + \beta_{S1}\,\text{SHOCK}_{hc} + \beta_{S2}\,\text{DRIVER}_{hc} + \varepsilon_S, \qquad
\text{RIMA}_{hc} = \alpha_R\,\text{PROC}_{hc} + \beta_{R1}\,\text{SHOCK}_{hc} + \beta_{R2}\,\text{DRIVER}_{hc} + \varepsilon_R \qquad (7)$$
The variable PROChc and the parameters αS and αR must be estimated. Knowledge of the magnitudes of αS and αR helps in understanding the connection between SERShc and RIMAhc. The conditional distribution of the combined resilience assessments model becomes:
$$p(\text{SERS}_{hc}, \text{RIMA}_{hc} \mid \text{PROC}_{hc}, \alpha_S, \alpha_R, \beta_S, \varepsilon_S, \beta_R, \varepsilon_R) = p(\text{SERS}_{hc} \mid \alpha_S, \text{PROC}_{hc}, \beta_S, \varepsilon_S)\; p(\text{RIMA}_{hc} \mid \alpha_R, \text{PROC}_{hc}, \beta_R, \varepsilon_R) \qquad (8)$$
On the basis of the conditional distribution and SERS’s and RIMA’s conditional independence, MCMC can estimate the full conditional distributions of the parameters PROChc, αS, αR, βS, εS, βR, and εR.
In the first and third scenarios, the model estimates of SERShc and RIMAhc are conditioned on the linkage variable PROChc, and in the second scenario the model estimates are conditioned on the linkage parameter α (Song et al. 2014).
Model test with synthetic data
The purpose of synthetic data testing is to establish model credibility, estimate unknown SERS and RIMA scores, and calculate the correlation between the scores in different modeling scenarios. We demonstrate the validity of our model by using randomly generated synthetic data. We created three sets of data to simulate our three modeling scenarios. For comparison, we construct the data on the basis of the data structure of Jones and d'Errico (2019), using Equations 3, 5, and 7, respectively. The socioeconomic driver and shock variables are the same as those outlined in Jones and d'Errico (2019). The randomly generated vectors of shocks and socioeconomic drivers for each household are based on a multivariate normal distribution. To test for robustness, the mean of the multivariate normal distribution is different in the random vectors of shocks and socioeconomic drivers for each modeling scenario. In the second modeling scenario, the PROChc variable is from randomly generated values and we adopt the estimated values of the parameter vector estimation of the socioeconomic drivers from Jones and d'Errico (2019). We also randomly create SERShc and RIMAhc resilience score vectors for each household with the same length as the total number of representative household surveys. Figure 1 shows the distribution histograms of the modeled SERS and RIMA values.
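To make the construction of the synthetic data concrete, the sketch below draws shock and driver vectors from multivariate normal distributions and then builds SERS and RIMA scores following the Scenario 1 structure (Equation 3). The dimensions, means, covariances, and coefficient values are arbitrary illustrative choices, not the estimates taken from Jones and d'Errico (2019).

```r
# Sketch: Scenario 1 synthetic data with illustrative (not published) parameter values.
set.seed(42)
library(MASS)   # for mvrnorm()

N        <- 2000   # number of simulated households
P_shock  <- 3      # number of shock variables
P_driver <- 5      # number of socioeconomic drivers

# Randomly generated shock and driver vectors for each household
X_shock  <- mvrnorm(N, mu = rep(0.5, P_shock),  Sigma = diag(P_shock))
X_driver <- mvrnorm(N, mu = rep(1.0, P_driver), Sigma = 0.5 * diag(P_driver) + 0.1)

# Pre-set coefficient vectors that the model should later retrieve
beta_S1 <- rnorm(P_shock, 0, 0.5);  beta_S2 <- rnorm(P_driver, 0, 0.5)
beta_R1 <- rnorm(P_shock, 0, 0.5);  beta_R2 <- rnorm(P_driver, 0, 0.5)

# Latent linkage process and the two resilience scores (Equation 3)
PROC <- rnorm(N, 0, 1)
SERS <- as.vector(PROC + X_shock %*% beta_S1 + X_driver %*% beta_S2 + rnorm(N, 0, 0.3))
RIMA <- as.vector(PROC + X_shock %*% beta_R1 + X_driver %*% beta_R2 + rnorm(N, 0, 0.3))
```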
In the second scenario, the magnitude difference in PROChc represents fixed effects, and we adjust the spread of the SERS and RIMA ranges to account for this. Although the moments of the multivariate distributions differ, the parameter vectors of the socioeconomic drivers in the synthetic data are the same. If the model is valid, it should retrieve the predetermined parameter values within the variance-related uncertainty range, with the predetermined values lying close to the median of the posterior distribution of the corresponding estimated parameter; how close depends on the magnitude of the uncertainty.
The MCMC simulation for model testing was run for 10,000 iterations with a “burn-in” period of 5000 iterations. All parameters and variables are drawn from the posterior samples after the burn-in period. To complete the BHM framework, we need to specify prior distributions for the parameters of the previous stages. For simplicity, we consider conjugate priors for the variance parameters σ across all these models using the gamma distribution, with σS ~ gamma(qS, rS) and σR ~ gamma(qR, rR), where qS, qR, rS, and rR are hyperparameters.
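Continuing the Scenario 1 sketch above, the run configuration just described (10,000 iterations, the first 5000 discarded as burn-in) could be executed with rjags as follows; the use of three chains and the choice of monitored nodes are our own illustrative settings.

```r
# Sketch: compile and run the Scenario 1 model on the synthetic data (illustrative settings).
library(rjags)

data_list <- list(SERS = SERS, RIMA = RIMA,
                  X_shock = X_shock, X_driver = X_driver,
                  N = N, P_shock = P_shock, P_driver = P_driver)

m <- jags.model(textConnection(scenario1_model), data = data_list, n.chains = 3)
update(m, n.iter = 5000)   # burn-in period
samples <- coda.samples(m,
                        variable.names = c("beta_S1", "beta_S2", "beta_R1", "beta_R2",
                                           "tau_S", "tau_R"),
                        n.iter = 5000)   # retained posterior draws
summary(samples)
```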
The quality of the model performance is not overly sensitive to the choice of hyperparameter values. For all the other variables in the first and second scenarios, we chose flat (i.e., noninformative) priors, which place flat probability over the entire real line. When using prior information from the data, we find that the parameter posterior estimates are robust for the proposed model under the three data scenarios.
For the third scenario, where all model components (i.e., variables and parameters) are unknown, it is difficult to estimate the components when there is collinearity between parameters. This can lead to identification problems of the estimates in the posterior distribution. We applied normal distribution priors for both αS and αR. Because both are scale parameters, their values are relatively small. The normal priors N(0.5, 0.3) we use limit the range of the scale parameters but still enable them to vary. The parameter values of the PROChc priors depend on the distribution ranges of SERShc and RIMAhc, which are relatively small. This justifies using a N(1, 1) normal prior for PROChc.
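For the third scenario, the priors just described could be encoded as in the sketch below (again a sketch of our own, not the authors' implementation). JAGS parameterizes the normal distribution by its precision, and because the article does not state whether 0.3 and 1 in N(0.5, 0.3) and N(1, 1) are standard deviations or variances, the sketch reads them as standard deviations.

```r
# Sketch: Scenario 3 model, where PROC, alpha_S, and alpha_R are all unknown.
scenario3_model <- "
model {
  for (i in 1:N) {
    PROC[i] ~ dnorm(1, 1)   # N(1, 1) prior on the latent linkage variable
    mu_S[i] <- alpha_S * PROC[i] + inprod(beta_S1[], X_shock[i, ]) + inprod(beta_S2[], X_driver[i, ])
    mu_R[i] <- alpha_R * PROC[i] + inprod(beta_R1[], X_shock[i, ]) + inprod(beta_R2[], X_driver[i, ])
    SERS[i] ~ dnorm(mu_S[i], tau_S)   # SERS data model (Equation 7)
    RIMA[i] ~ dnorm(mu_R[i], tau_R)   # RIMA data model (Equation 7)
  }
  alpha_S ~ dnorm(0.5, pow(0.3, -2))   # N(0.5, 0.3) prior, 0.3 read as a standard deviation
  alpha_R ~ dnorm(0.5, pow(0.3, -2))
  for (j in 1:P_shock) {
    beta_S1[j] ~ dnorm(0, 0.001)
    beta_R1[j] ~ dnorm(0, 0.001)
  }
  for (k in 1:P_driver) {
    beta_S2[k] ~ dnorm(0, 0.001)
    beta_R2[k] ~ dnorm(0, 0.001)
  }
  tau_S ~ dgamma(0.01, 0.01)
  tau_R ~ dgamma(0.01, 0.01)
}"
```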
We selected and tested the hyperparameters of our prior distributions to ensure that model performance is not sensitive to their values. We used visual inspection and diagnostic checks to verify simulation convergence (Gelman and Rubin 1992).
Bayesian statistics estimates the posterior distributions from the input data using MCMC sampling and establishes parameter values by sampling from the posterior distribution. The MCMC method is a repeated sampling procedure whose stationary distribution matches the posterior distribution; sampling repetitions are sufficient once convergence (i.e., matching) occurs. Provided the Markov chain is irreducible, aperiodic, and positive recurrent, convergence is guaranteed. We tested the model using different data vector lengths, because with a robust model design, and when the normal distribution assumption holds, uncertainties should decrease as the length of the data increases. Decreasing uncertainty with an increasing number of data points suggests that the variance of the data follows the normal distribution assumption (Gelman et al. 2014).
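The convergence checks described above are available in the coda package for R; a brief sketch, assuming samples is the mcmc.list returned by the run sketch earlier:

```r
# Sketch: convergence diagnostics for the posterior samples (coda package).
library(coda)
gelman.diag(samples, multivariate = FALSE)   # Gelman-Rubin scale reduction factors; values near 1 indicate convergence
effectiveSize(samples)                       # effective number of independent draws per parameter
plot(samples[, c("tau_S", "tau_R")])         # trace plots and marginal densities for visual inspection
```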
RESULTS
The trace plots in Figure 2 indicate good convergence of the MCMC algorithm in the first modeling scenario. The plots show that the pre-set values of PROChc, σ²S, and σ²R are close to the mean of the posterior distribution, which suggests the parameters are reasonably estimated.
Figure 3 shows the posterior distributions of the socioeconomic drivers and it indicates that the model can easily distinguish the attributes of different correlations. This provides confidence in establishing the correlation between SERS and RIMA assessments, and so the issue of confounding between-assessment correlations with within-assessment variable correlations does not occur.
The second scenario model performs as well as the first scenario model, although the data structure is more complex. The second scenario handles fixed effects on each household through the linkage parameter. The main effect may contain relatively extreme values, and the household randomness will result in additional uncertainty in the correlation estimation. Here we assume that PROChc follows a uniform random distribution, which the scatter plot of PROChc across all the households confirms (Fig. 4). The parameter distributions in Figure 5 show that MCMC convergence occurred. Although there is a slight shift of the PROChc mean from the pre-set variances, the magnitude of the shift is negligible. This shift is a result of the randomness in the prior distribution of PROChc and the variance we used to create the random socioeconomic drivers and shocks. The pre-set value of the estimated socioeconomic driver vectors is generally close to the median of the posterior distributions (Fig. 6).
In the third scenario, where all model components are unknown and collinearity between parameters can cause identification problems in the posterior distribution, the trace plots show that PROChc, αS, and αR converged well. The histograms show that the means of the posterior distributions are close to the pre-set values (Fig. 7), and the posterior distributions are bell shaped (Fig. 8), which indicates that the variance estimates are reasonable. The means of the posterior distributions after the burn-in period are 0.363, 0.675, and 1.34 for αS, αR, and PROChc, respectively, which are close to the pre-set values. The model represents the unexplained part of the explanatory variables, indicating that it can estimate the correlation within the socioeconomic drivers and the shocks. The model can retrieve the complex correlation even with no prior knowledge of the data, as the posterior distribution of the estimated socioeconomic drivers shows (Fig. 9).
It should be noted that the quantile plots of the correlation coefficients are not intended for comparison with point estimates. We created synthetic data using the values of the correlation coefficients from the regression model outputs provided in Jones and d'Errico (2019). Figures 3, 6, and 9 show how closely our values (created using the posterior distribution of the correlation coefficients) align with theirs and indicate that our three scenario models suitably represent their correlations.
In summary, these results indicate that it is beneficial to apply the BHM for modeling resilience. The posterior distribution plots of the driver vectors from all three modeling scenarios show that the parameter vector estimation is reasonable on the basis of its accuracy and spread. Given the number of driver parameters, the results show the robustness of the model. In addition, better estimation of the posterior distributions of the correlation parameters can lead to better estimation of the variance parameters. Because of the connection through the link parameters, this modeling strategy apportions uncertainty across the posterior distributions of all model parameters, instead of leaving all the uncertainty to the variance parameter estimate of an ordinary least squares regression model. This leads to better uncertainty estimates and improved correlation parameter estimates. The three modeling scenarios also address several assumptions about the unexplained parts of the resilience, thus increasing modeling flexibility.
DISCUSSION AND CONCLUSION
The modeling approach proposed in this paper provides an advanced way to compare and link subjective and objective approaches to resilience assessment in a coupled Bayesian hierarchical model. We provide modeling results using synthetic data, which were generated with the estimated parameter values from Jones and d'Errico (2019). We have improved the modeling approach by using a linkage variable between the SERS and RIMA models, which ordinary least squares regression was not able to do. The linkage variable serves as the model connection and represents the embedded pattern through its posterior distribution.
We demonstrated that by using a Bayesian hierarchical linkage approach, we can effectively retrieve the parameter values that were used to create the synthetic data under three different modeling scenarios. The pre-set parameter values fall within the posterior distribution range of the parameter estimates for all three of our modeling scenarios (Figs. 2, 5, 7, and 8). The findings demonstrate that the model is effective in estimating the distributions of the parameters of those variables, given the complex correlations between the two assessments and within the model variables (socioeconomic drivers and shocks). In addition, the model provides reasonable uncertainty estimation. The model accurately retrieves the pre-set parameter values in the posterior mean, which reduces modeling bias and demonstrates the accuracy of the estimation.
Our testing, using three different scenarios, indicates that the BHM approach performs consistently well. More importantly, the posterior distribution of the error terms and other parameters provides the measurement of the multiple sources of uncertainties. This shows our approach is flexible and highly effective in handling resilience assessment comparison. Moreover, these three modeling scenarios provide the opportunity to explore the complex relationship between resilience data and explanatory variables under different conditions and prior knowledge (or lack thereof) about variables.
The results also illustrated that the model could satisfactorily estimate (1) the correlation between the subjective and objective resilience assessment approaches, (2) the relationship of the resilience assessments with the socioeconomic drivers and shocks, and (3) the correlation between components of the socioeconomic drivers and shocks. Although there are complex correlations within the components of the drivers and shocks, the model did provide effective estimation of the parameter values and their precision. In comparison to the model proposed in Jones and d'Errico (2019), the Bayesian hierarchical linkage model can provide improved uncertainty estimation that is not possible with models using least squares estimates.
Most importantly, the Bayesian hierarchical linkage approach connects these two separate models at the estimation stage by using the MCMC algorithm for the conditional distribution (e.g., Gelman et al. 2014). This allows estimation of the latent process and the scale parameters, which may arise from fixed effects such as farm size or number of livestock. The implementation at the analytical level and the computations are relatively efficient.
Although our approach using BHM has the potential to expand the statistical analysis of resilience to temporally and spatially explicit analyses, the main effort in undertaking a resilience assessment lies in the development of survey instruments and in-field data collection. The development of our approach is a stepping stone for those implementing the integration of subjective and objective resilience assessment methods in data collection. In addition, it provides insights for other methods describing resilience, if assessment variables are sufficiently similar, and it has the potential for expansion into a multilayer model working at different spatial and temporal scales: an issue commonly found in survey and census data (see, e.g., Herr 2007). One way to represent regional differences, for example, is to have a spatial covariate structure in model errors and use differential spatial covariate structures that will allow adjustments to complex spatial correlations (Song et al. 2007). This will enable robust resilience assessments, which are needed to increase the acceptance and use of resilience as a framework in governance and policy development. Developing resilience-focused practices, policies, and programs to adapt and respond to adverse events will better prepare society for multiple stresses and shocks associated with rapid change and climate change–related extreme events (see, e.g., O’Connell et al. 2019).
Both objective and subjective resilience assessments are important means of assessing community resilience. However, to date, resilience assessments have tended to be either qualitative or quantitative (Walton et al. 2013, Mayega et al. 2015, Zhang et al. 2018, Measham et al. 2019, Ensor et al. 2021, Walpole et al. 2021). Estimating resilience with different types of assessments within a linked modeling framework improves robustness because it enables a more precise estimation of different resilience dimensions. This reduces collinearity, and the estimation process accommodates the mutual correlations between or within spatial scales, which would not be possible using two separate models. We built on and expanded previous research toward integrated resilience assessment conducted by Jones and d'Errico (2019), who compared separate objective and subjective resilience assessment models. Our Bayesian hierarchical approach advances this to multiple modeling scenarios, and it allows simulation of both objective and subjective resilience assessments within a single framework that also accounts for several sources of uncertainty. Our model enables researchers to analyze resilience assessments with a relatively simple and sound framework. We developed the model with resilience to drought in mind; however, it is relevant to a wide range of contexts where an integrated subjective and objective assessment is useful, including disaster preparedness and other climate impacts, such as heat stress, flooding, and related industry change (Dogru et al. 2019, Measham et al. 2019, Chu et al. 2021). It can provide preliminary insights into resilience on the basis of the correlation estimation, which is an economical and feasible way to obtain basic knowledge of community resilience.
Our current approach links subjective and objective resilience assessments, which has some limitations. Although the linkage variable serves as the connection between the two assessments and enables the representation of embedded patterns through the posterior distribution of the linkage variable, it does not provide a full integration of the two resilience assessments. However, the BHM approach provides us with a strong foundation for developing a model that fully integrates subjective and objective resilience assessments.
ACKNOWLEDGMENTS
We would like to thank two anonymous reviewers for their comments on the paper, which helped improve clarity and context. This work is part of the CSIRO Drought Resilience Mission (https://www.csiro.au/en/about/challenges-missions/Drought-Resilience). The CRC TiME partly supported Tom Measham’s contribution to the paper. The support of the Australian Government through the Cooperative Research Centre Program is acknowledged.
DATA AVAILABILITY
Data are synthetic and derived from Jones and d'Errico (2019). Our article provides all details (method and equations) to enable a repeat of the analysis in publicly available software such as R (www.r-project.org). We provide the synthetic data and MATLAB analysis code on https://data.mendeley.com. The direct link to the data is http://dx.doi.org/10.17632/8822bxf5n6.1.
LITERATURE CITED
Allen, C. R., D. G. Angeler, B. C. Chaffin, D. Twidwell, and A. Garmestani. 2019. Resilience reconciled. Nature Sustainability 2:898-900. https://doi.org/10.1038/s41893-019-0401-4
Chu, S. H. Y., S.-Y. Tan, and L. Mortsch. 2021. Social resilience to flooding in Vancouver: the issue of scale. Environmental Hazards 20(4):400-415. https://doi.org/10.1080/17477891.2020.1834345
Dogru, T., E. A. Marchio, U. Bulut, and C. Suess. 2019. Climate change: vulnerability and resilience of tourism and the entire economy. Tourism Management 72:292-305. https://doi.org/10.1016/j.tourman.2018.12.010
Ensor, J. E., T. Mohan, J. Forrester, U. K. Khisa, T. Karim, and P. Howley. 2021. Opening space for equity and justice in resilience: a subjective approach to household resilience assessment. Global Environmental Change 68:102251. https://doi.org/10.1016/j.gloenvcha.2021.102251
Filatova, T., J. G. Polhill, and S. van Ewijk. 2016. Regime shifts in coupled socio-environmental systems: review of modelling challenges and approaches. Environmental Modelling & Software 75:333-347. https://doi.org/10.1016/j.envsoft.2015.04.003
Folke, C., Å. Jansson, J. Rockström, P. Olsson, S. R. Carpenter, F. S. Chapin III, A.-S. Crépin, G. Daily, K. Danell, J. Ebbesson, T. Elmquist, V. Galaz, F. Moberg, M. Nilsson, H. Österblom, E. Ostrom, Å. Persson, G. Peterson, S. Polasky, W. Steffen, B. Walker, and F. Westley. 2011. Reconnecting to the biosphere. AMBIO. 40:719. https://doi.org/10.1007/s13280-011-0184-y
Food and Agriculture Organization of the United Nations (FAO). 2016. Resilience index measurement and analysis-II (RIMA-II). FAO, Rome, Italy. https://www.fao.org/publications/card/en/c/f86d84f6-def3-46ec-a5da-4ce312f3af7f/
Gelman, A., and D. B. Rubin. 1992. Inference from iterative simulation using multiple sequences. Statistical Science 7(4):457-472. https://doi.org/10.1214/ss/1177011136
Gelman, A., J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. 2014. Bayesian data analysis. Third edition. Chapman & Hall/CRC Press, New York, New York, USA. http://www.stat.columbia.edu/~gelman/book/BDA3.pdf
Grafton, R. Q., L. Doyen, C. Béné, E. Borgomeo, K. Brooks, L. Chu, G. S. Cumming, J. Dixon, S. Dovers, D. Garrick, et al. 2019. Realizing resilience for decision-making. Nature Sustainability 2(10):907-913. https://doi.org/10.1038/s41893-019-0376-1
Herr, A. 2007. Data integration issues in research supporting sustainable natural resource management. Geographical Research 45(4):376-386. https://doi.org/10.1111/j.1745-5871.2007.00476.x
Higuera, P. E., A. L. Metcalf, C. Miller, B. Buma, D. B. McWethy, E. C. Metcalf, Z. Ratajczak, C. R. Nelson, B. C. Chaffin, R. C. Stedman, S. McCaffrey, T. Schoennagel, B. J. Harvey, S. M. Hood, C. A. Schultz, A. E. Black, D. Campbell, J. H. Haggerty, R. E. Keane, M. A. Krawchuk, J. C. Kulig, R. Rafferty, and A. Virapongse. 2019. Integrating subjective and objective dimensions of resilience in fire-prone landscapes. BioScience 69(5):379-388. https://doi.org/10.1093/biosci/biz030
Jones, L., M. A. Constas, N. Matthews, and S. Verkaart. 2021. Advancing resilience measurement. Nature Sustainability 4:288-289. https://doi.org/10.1038/s41893-020-00642-x
Jones, L., and M. d'Errico. 2019. Whose resilience matters? Like-for-like comparison of objective and subjective evaluations of resilience. World Development 124:104632. https://doi.org/10.1016/j.worlddev.2019.104632
Leach, M., J. Rockström, P. Raskin, I. Scoones, A. C. Stirling, A. Smith, J. Thompson, E. Millstone, A. Ely, E. Arond, C. Folke, and P. Olsson. 2012. Transforming innovation for sustainability. Ecology and Society 17(2):11. http://dx.doi.org/10.5751/ES-04933-170211
Markantoni, M., A. A. Steiner, and J. E. Meador. 2019. Can community interventions change resilience? Fostering perceptions of individual and community resilience in rural places. Community Development 50(2):238-255. https://doi.org/10.1080/15575330.2018.1563555
Mayega, R. W., N. Tumuhamye, L. Atuyambe, D. Okello, G. Bua, J. Ssentongo, and W. Bazeyo. 2015. Qualitative assessment of resilience to the effects of climate variability in the three communities in Uganda. Eastern Africa Resilience Innovation Lab, Kampala University, Kampala, Uganda. https://www.ranlab.org/wp-content/uploads/2013/11/RAN_EA-RILab-Uganda-Community-Consultations-Report-Climate.pdf
Measham, T. G., A. Walton, P. Graham, and D. A. Fleming-Muñoz. 2019. Living with resource booms and busts: employment scenarios and resilience to unconventional gas cyclical effects in Australia. Energy Research & Social Science 56:101221. https://doi.org/10.1016/j.erss.2019.101221
O’Connell, D., Y. Maru, N. Grigg, B. Walker, N. Abel, R. Wise, A. Cowie, J. Butler, S. Stone-Jovicich, M. Stafford-Smith, et al. 2019. The resilience adaptation pathways and transformation approach (RAPTA): a guide for designing, implementing and assessing interventions for sustainable futures. Version 2. Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australia. https://acfid.asn.au/sites/site.acfid/files/19-00418_LW_REPORT_RAPTAGuide_WEB_190829.pdf
Pimm, S. L., I. Donohue, J. M. Montoya, and M. Loreau. 2019. Measuring resilience is essential to understand it. Nature Sustainability 2:895-897. https://doi.org/10.1038/s41893-019-0399-7
Rivington, M., K. B. Matthews, G. Bellocchi, K. Buchan, C. O. Stöckle, and M. Donatelli. 2007. An integrated assessment approach to conduct analyses of climate change impacts on whole-farm systems. Environmental Modelling & Software 22(2):202-210. https://doi.org/10.1016/j.envsoft.2005.07.018
Saja, A. M. A., A. Goonetilleke, M. Teo, and A. M. Ziyath. 2019. A critical review of social resilience assessment frameworks in disaster management. International Journal of Disaster Risk Reduction 35:101096. https://doi.org/10.1016/j.ijdrr.2019.101096
Song, Y., Y. Li, B. Bates, and C. K. Wikle. 2014. A Bayesian hierarchical downscaling model for south-west Western Australia rainfall. Journal of the Royal Statistical Society: Series C (Applied Statistics) 63(5):715-736. https://doi.org/10.1111/rssc.12055
Song, Y., C. K. Wikle, C. J. Anderson, and S. A. Lack. 2007. Bayesian estimation of stochastic parameterizations in a numerical weather forecasting model. Monthly Weather Review 135(12):4045-4059. https://doi.org/10.1175/2007MWR1928.1
Suwarno, A., A. A. Nawir, Julmansyah, and Kurniawan. 2009. Participatory modelling to improve partnership schemes for future community-based forest management in Sumbawa district, Indonesia. Environmental Modelling & Software 24(12):1402-1410. https://doi.org/10.1016/j.envsoft.2009.07.001
Walker, B. 2020. Resilience: what it is and is not. Ecology and Society 25(2):11. https://doi.org/10.5751/ES-11647-250211
Walker, B., C. S. Holling, S. R. Carpenter, and A. Kinzig. 2004. Resilience, adaptability and transformability in social-ecological systems. Ecology and Society 9(2):5. https://doi.org/10.5751/ES-00650-090205
Walker, J., and M. Cooper. 2011. Genealogies of resilience: from systems ecology to the political economy of crisis adaptation. Security Dialogue 42(2):143-160. https://doi.org/10.1177/0967010611399616
Walpole, E., J. Loerzel, and M. Dillard. 2021. A review of community resilience frameworks and assessment tools: an annotated bibliography. National Institute of Standards and Technology, Gaithersburg, Maryland, USA. https://doi.org/10.6028/NIST.TN.2172
Walton, A. M., R. McCrea, R. Leonard, and R. Williams. 2013. Resilience in a changing community landscape of coal seam gas: Chinchilla in Southern Queensland. Journal of Economic and Social Policy 15(3):4-28. https://search.informit.org/doi/10.3316/ielapa.794512449998862
Wikle, C. K. 2010. Hierarchical modelling with spatial data. Pages 89-106 in A. E. Gelfand, P. J. Diggle, P. Guttorp, and M. Fuentes, editors. Handbook of spatial statistics. CRC, Boca Raton, Florida, USA. https://doi.org/10.1201/9781420072884
Xiao, W., X. Lv, Y. Zhao, H. Sun, and J. Li. 2020. Ecological resilience assessment of an arid coal mining area using index of entropy and linear weighted analysis: a case study of Shendong coalfield, China. Ecological Indicators 109:105843. https://doi.org/10.1016/j.ecolind.2019.105843
Zhang, H., H. Yuan, G. Li, and Y. Lin. 2018. Quantitative resilience assessment under a tri-stage framework for power systems. Energies 11(6):1427. https://doi.org/10.3390/en11061427