Method to Assess Coherence in BTR Adaptation Components

This report uses a reproducible, rule-based approach to assess the coherence of adaptation information in Biennial Transparency Reports (BTRs) by detecting explicit linkages between climate hazards and systems within specific elements of each document.

Each row is tagged by Element (e.g., System at risk, Adaptation priorities, Action, Result) and enriched with HazardType and SystemType labels. When tags are empty, we use regex fallbacks to capture direct mentions in the ElementText.

For every document (Country × Document × Year of Submission), we compute seven binary variables by scanning the appropriate target element for the required mentions:

  • HazardSystem → EITHER hazard terms in System at risk OR system terms in Hazard
  • HazardPriority → hazard terms in Adaptation priorities
  • HazardAction → hazard terms in Action
  • HazardResult → hazard terms in Result
  • SystemPriority → system terms in Adaptation priorities
  • SystemAction → system terms in Action
  • SystemResult → system terms in Result

Scoring rules

  • 1 — At least one row in the target element contains the required mentions (via tags or regex).
  • 0 — The target element exists in the document but contains no required mentions.
  • NA — The target element is absent from the document.
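The scoring rule can be sketched in a few lines of base R. The column names (Element, HazardType, ElementText) follow the schema described above, but `score_linkage`, the sample rows, and the abridged hazard pattern are illustrative stand-ins, not the report's actual code:

```r
# Abridged stand-in for the full hazard keyword regex
hazard_regex <- "droughts?|flood(s|ing)?|heat ?wave|cyclone|sea[ -]?level rise"

doc <- data.frame(
  Element     = c("Adaptation priorities", "Action"),
  HazardType  = c("", ""),
  ElementText = c("Reduce drought impacts on smallholder farming",
                  "Install early-warning sirens"),
  stringsAsFactors = FALSE
)

# 1 if any row of the target element carries a mention (tag first, regex fallback),
# 0 if the element exists without one, NA if the element is absent.
score_linkage <- function(df, element, tag_col, pattern) {
  rows <- df[df$Element == element, , drop = FALSE]
  if (nrow(rows) == 0) return(NA_integer_)
  tagged  <- nzchar(rows[[tag_col]]) & !is.na(rows[[tag_col]])
  regexed <- grepl(pattern, rows$ElementText, ignore.case = TRUE)
  as.integer(any(tagged | regexed))
}

score_linkage(doc, "Adaptation priorities", "HazardType", hazard_regex)  # 1
score_linkage(doc, "Action", "HazardType", hazard_regex)                 # 0
score_linkage(doc, "Result", "HazardType", hazard_regex)                 # NA
```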

Evidence

For transparency, the per-document summary stores the first triggering snippet (EntryID • Element • excerpt) when a linkage equals 1. Row-level data keep the original ElementText and receive the document-level flags for convenience:

  • Hazard* flags are populated on Hazard rows
  • System* flags are populated on System at risk rows
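The snippet capture can be sketched as follows; `first_evidence` and the sample rows are hypothetical, but the "[EntryID • Element] excerpt" format matches the summary described above:

```r
# Return the first row matching `pattern` as "[EntryID • Element] excerpt",
# or NA when nothing triggers.
first_evidence <- function(rows, pattern, width = 80) {
  hit <- which(grepl(pattern, rows$ElementText, ignore.case = TRUE))
  if (length(hit) == 0) return(NA_character_)
  r <- rows[hit[1], ]
  sprintf("[%s \u2022 %s] %s", r$EntryID, r$Element, strtrim(r$ElementText, width))
}

ev_rows <- data.frame(
  EntryID     = c("E01", "E02"),
  Element     = c("Adaptation priorities", "Adaptation priorities"),
  ElementText = c("Strengthen water governance",
                  "Reduce drought risk in rain-fed agriculture"),
  stringsAsFactors = FALSE
)
first_evidence(ev_rows, "droughts?")
# "[E02 • Adaptation priorities] Reduce drought risk in rain-fed agriculture"
```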

Aggregated outputs

We report: (1) which countries accumulate the most linkages across their documents, and (2) which linkage types are most frequent globally. This supports rapid quality checks and iterative refinement of the keyword dictionaries where needed.
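Both aggregations reduce to sums over the per-document flags. A base-R sketch, where `doc_flags` is a hypothetical stand-in for the real flag table (one row per Country × Document × Year):

```r
# Hypothetical per-document flag table (1/0/NA per the scoring rules)
doc_flags <- data.frame(
  Country        = c("A", "A", "B"),
  HazardPriority = c(1, 0, 1),
  SystemAction   = c(1, NA, 0)
)
flag_cols <- c("HazardPriority", "SystemAction")  # all seven variables in practice

# 1. Countries with the most linkages, summed across their documents
doc_flags$DocLinks <- rowSums(doc_flags[flag_cols], na.rm = TRUE)
by_country <- aggregate(DocLinks ~ Country, data = doc_flags, FUN = sum)

# 2. Linkage types by global frequency
by_type <- sort(colSums(doc_flags[flag_cols], na.rm = TRUE), decreasing = TRUE)
```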


Regex fallback keyword tables

  • Hazard terms — \bheat( wave|wave)?\b, excessive heat, high temperature, extreme (temperature|heat), cold (wave|spell|snap), \bsnow\b, \bice\b, \bfrost\b, freeze, severe winter, storm, tropical storm, cyclone, typhoon, hail, lightning, thunderstorm, heavy rain, windstorm, sand storm, dust storm, tornado, torrential, strong winds?, droughts?, dry (spell|days?), aridity, wild ?fire(s)?, forest fire(s)?, bush ?fire(s)?, landslides?, flood(s|ing)?, inundation, temperature (rise|increase|change|warming|drop), rising temperatures?, change in precipitation, rainfall (variability|patterns?|distribution|intensity|frequency), salin(i[sz]ation|ity), erosion, desertification, sea[ -]?level (rise|increase), coastal (flooding|erosion|submersion|retreat|beach loss), sea (surface )?temperature, ocean temperature, ocean acidification, acidification, pest(s)?, vector-borne, disease(s)?, infestation, invasive species
  • System terms — \bcrop(s|ping)?\b, agri(culture|cultural)?, yield, rain[- ]?fed, irrigat, farming, livestock, pasture, pastoral(ist)?, grazing, herder, fish(ery|eries)?, aquaculture, fishing, marine harvest, forest(ry)?, tree(s)?, non[- ]timber, terrestrial, drylands?, ecosystem(s)?, ecosystem services?, ecological system, desertification, land degradation, soil (degradation|erosion)
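Each keyword row is collapsed into a single case-insensitive alternation for matching. The snippet below is a sketch with a shortened term list (note the doubled backslashes R requires for \b):

```r
# Collapse keyword patterns into one alternation (shortened, illustrative list)
hazard_terms <- c("\\bheat( wave|wave)?\\b", "droughts?", "flood(s|ing)?",
                  "sea[ -]?level (rise|increase)")
hazard_regex <- paste0("(", paste(hazard_terms, collapse = ")|("), ")")

grepl(hazard_regex, "Recurrent droughts and flooding", ignore.case = TRUE)  # TRUE
```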

Raw data table

datatable(
  small_df,
  escape = FALSE,
  class = "compact stripe hover",
  options = list(
    pageLength = 15,
    scrollX = TRUE,
    dom = "tip"
  ),
  caption = "📄 Full linkages data (hover ElementText to see full text)"
)

Linkages summary data

# (optional) push evidence columns to the end
# sum_df <- dplyr::relocate(sum_df, dplyr::any_of(ev_cols), .after = dplyr::last_col())

datatable(
  sum_df,
  escape = FALSE,                  # render HTML for tooltips
  class  = "compact stripe hover", # tighter rows
  options = list(
    pageLength = 15,
    scrollX = TRUE,
    dom = "tip"
  ),
  caption = "🏆 Linkages summary (hover Ev_* cells to see full evidence)"
)
DT::datatable(
  by_country_links,
  options = list(pageLength = 15, autoWidth = TRUE, dom = 'tip'),
  caption = "🏆 Countries with Most Linkages (sum across documents)"
)
DT::datatable(
  by_type_links,
  options = list(pageLength = 10, autoWidth = TRUE, dom = 'tip'),
  caption = "📊 Linkage Types by Frequency Across All Documents"
)

Remarks on Priorities ↔︎ Actions ↔︎ Results Linkages

We currently lack robust, curated keywords to infer semantic links between Adaptation priorities, Action, and Result elements. There are two defensible ways to operationalize these linkages:

Option A — Co-presence (simple, transparent)

Treat linkages as present if the two elements both appear anywhere in the same document.

  • PriorityAction = 1 if the document contains Adaptation priorities and Action.
  • PriorityResult = 1 if the document contains Adaptation priorities and Result (excluding Indicator).
  • ActionResult = 1 only if we find an explicit linking phrase in a Result (e.g., “led to”, “as a result of…”, “following the implementation…”).
    Rationale: BTRs emphasize reporting progress against pre-identified priorities and actions, so co-presence is a reasonable, low-assumption proxy; for Action→Result, however, we still require a minimal causal signal.

Pros: Easiest to explain; stable across languages; low risk of over-engineering.
Cons: Overestimates linkage strength when sections are listed but not actually connected.
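Option A reduces to a set-membership test on the elements present in a document. The function below is an illustrative sketch, including the NA convention for documents where neither element can be evaluated:

```r
# Co-presence sketch: 1 if both elements occur anywhere in the document,
# 0 if only one does, NA if neither is present.
co_presence <- function(elements_present, a, b) {
  if (!any(c(a, b) %in% elements_present)) return(NA_integer_)
  as.integer(all(c(a, b) %in% elements_present))
}

els <- c("System at risk", "Adaptation priorities", "Action")
co_presence(els, "Adaptation priorities", "Action")  # 1 (PriorityAction)
co_presence(els, "Adaptation priorities", "Result")  # 0 (PriorityResult)
```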

Option B — “Strict” evidence (conservative, higher precision)

Require alignment cues between the two elements before flagging a linkage:

  • Shared tags: overlapping SystemType or HazardType between the two elements.
  • Linking phrases: phrases indicating implementation or tracking against priorities (e.g., “to implement the priority…”, “progress toward the priority…”) or that results arose from actions (e.g., “led to”, “as a result of…”).

Pros: Reduces false positives; closer to the spirit of “coherence.”
Cons: Sensitive to tagging gaps and language variations; may undercount true links.
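For PriorityAction, the STRICT rule could be sketched as below; the function, sample rows, and phrase list are illustrative (a shared SystemType tag or a reference phrase counts as an alignment cue):

```r
# STRICT sketch for PriorityAction: require a shared SystemType tag OR a
# reference phrase in the Action text before flagging the linkage.
ref_phrases <- "to implement the priority|progress toward the priority"

strict_priority_action <- function(df) {
  pri <- df[df$Element == "Adaptation priorities", , drop = FALSE]
  act <- df[df$Element == "Action", , drop = FALSE]
  if (nrow(pri) == 0 && nrow(act) == 0) return(NA_integer_)
  if (nrow(pri) == 0 || nrow(act) == 0) return(0L)
  tags   <- function(x) setdiff(unique(x), c("", NA))
  shared <- length(intersect(tags(pri$SystemType), tags(act$SystemType))) > 0
  phrase <- any(grepl(ref_phrases, act$ElementText, ignore.case = TRUE))
  as.integer(shared || phrase)
}

doc_rows <- data.frame(
  Element     = c("Adaptation priorities", "Action"),
  SystemType  = c("Agriculture", "Agriculture"),
  ElementText = c("Expand climate-smart agriculture",
                  "Drip irrigation rolled out to implement the priority"),
  stringsAsFactors = FALSE
)
strict_priority_action(doc_rows)  # 1
```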

Comparing the two rules in practice

# Overall comparison
  DT::datatable(
    overall_cmp,
    class = "compact stripe hover",
    options = list(pageLength = 10, dom = "tip"),
    caption = "📊 Q11–Q13: Co-presence vs STRICT — overall comparison"
  )

This table compares two ways of detecting linkages among Adaptation priorities → Actions → Results across documents:

  • Co-presence (simple): flags a linkage when the two relevant elements both appear somewhere in the same document.
  • STRICT (conservative): requires alignment cues between the two elements (e.g., shared sector tags and/or reference phrases). For Action→Result, STRICT also requires an explicit linking phrase in a Result (e.g., “led to”, “as a result of”). Indicators are excluded from Results.

Columns

  • Linkage — The linkage being assessed:
    • PriorityAction (Q11): Adaptation priorities ↔︎ Actions
    • PriorityResult (Q12): Adaptation priorities ↔︎ Results
    • ActionResult (Q13): Actions ↔︎ Results
  • DocsWithLink_Co — Number of documents where the linkage is present under the Co-presence rule.
  • DocsWithLink_Strict — Number of documents where the linkage is present under the STRICT rule.
  • Share_Co_pct / Share_Strict_pct — Percent of (eligible) documents with the linkage under each rule. The denominator is the count of documents where the linkage could be evaluated (i.e., at least one of the two elements exists; rows where both elements are absent are excluded and appear as NA).
  • Delta_Docs — Difference in counts: DocsWithLink_Co − DocsWithLink_Strict.
  • Delta_pp — Difference in percentage points: Share_Co_pct − Share_Strict_pct.
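The column arithmetic, sketched on four hypothetical documents (flags coded 1/0/NA under each rule):

```r
co     <- c(1, 1, 0, NA)  # Co-presence flags for one linkage across four documents
strict <- c(1, 0, 0, NA)  # STRICT flags for the same documents

eligible            <- sum(!is.na(co))  # docs where the linkage is evaluable
DocsWithLink_Co     <- sum(co, na.rm = TRUE)
DocsWithLink_Strict <- sum(strict, na.rm = TRUE)
Share_Co_pct        <- 100 * DocsWithLink_Co / eligible
Share_Strict_pct    <- 100 * DocsWithLink_Strict / eligible
Delta_Docs          <- DocsWithLink_Co - DocsWithLink_Strict  # 1
Delta_pp            <- Share_Co_pct - Share_Strict_pct        # ~33.3
```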

Interpreting gaps

  • Large positive Delta_pp means many documents list both sections but lack explicit alignment (STRICT evidence).
  • Smaller gaps suggest stronger coherence (elements appear together and show alignment signals).

This table aggregates linkage detection by country across documents (distinct combinations of Country × Document × Year of Submission).

Columns

  • Linkage — The linkage assessed:
    • PriorityAction (Q11): Adaptation priorities ↔︎ Actions
    • PriorityResult (Q12): Adaptation priorities ↔︎ Results (Indicators excluded)
    • ActionResult (Q13): Actions ↔︎ Results
  • Country — Reporting Party.
  • Docs — Number of documents evaluated for that country (count of distinct Country–Document–Year rows).
  • Co — How many of those documents show the linkage under the Co-presence rule (elements both appear somewhere in the doc).
  • Strict — How many documents show the linkage under the STRICT rule (requires alignment cues; for Action→Result also an explicit linking phrase).
  • Delta — Co − Strict (how many documents lose the linkage when stricter evidence is required).

Interpreting the counts

  • A large positive Delta suggests the country often lists both elements but does not consistently show explicit alignment (e.g., actions/results not clearly tied to priorities, or results not clearly attributed to actions).
  • A small Delta indicates stronger coherence or more explicit reporting.

This table lists only the documents where the two linkage modes disagree for Q11–Q13:

  • Co-presence = linkage flagged when both elements appear somewhere in the document.
  • STRICT = linkage flagged only when alignment is evidenced (shared tags and/or reference phrases; for Action→Result an explicit linking phrase in a Result).

Columns

  • Country / Document / Year of Submission — Document identifier.
  • Variable — Which linkage is being compared:
    • PriorityAction (Q11), PriorityResult (Q12), ActionResult (Q13).
  • Co / Strict — Binary flags (1/0) under each rule for this document–linkage pair.
  • Ev (Co-presence) — First supporting snippet used to justify Co-presence (hover to see the full text). Format:
    "[EntryID • Element] excerpt…"
  • Ev (STRICT) — First supporting snippet used to justify STRICT (hover to see the full text). If empty, no qualifying alignment cue was found.

Interpreting rows

  • Co = 1, Strict = 0 → Elements are listed, but we did not detect explicit alignment (e.g., no sector overlap or linking phrase).
    Actionable next step: inspect Ev (Co-presence), then search the source for a clearer tie.
  • Co = 0, Strict = 1 → Should not occur (STRICT is a subset of Co-presence). If observed, check data/tagging.
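That invariant can be asserted directly; `cmp_df` below is a hypothetical stand-in for the disagreement table:

```r
# STRICT is meant to be a subset of Co-presence, so Co = 0 & Strict = 1 must
# never occur; surface any violating rows for inspection.
cmp_df <- data.frame(Co = c(1, 1, 0), Strict = c(1, 0, 0))
bad <- cmp_df[which(cmp_df$Co == 0 & cmp_df$Strict == 1), , drop = FALSE]
stopifnot(nrow(bad) == 0)  # a failure here points to a data or tagging problem
```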