Testing personalized environmental stories
Can we change how participants think about environmental action by making stories personally relevant?
📍 Hometown-specific content
🧠 Top environmental concern (their choice)
vs.
📰 Generic environmental story
N = 296 participants
Randomly assigned to:
✅ Treatment: Story generated on the spot via Anthropic's API, tailored to their hometown + concern (sketch below)
⚪ Control: Generic, non-personalized story
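As a rough illustration, here is a minimal sketch of how on-the-fly tailoring like this could be done with the Anthropic Python SDK; the prompt wording, model name, and helper function are assumptions for illustration, not the study's actual generation pipeline.

```python
# Illustrative sketch only: prompt wording, model name, and the tailored_story()
# helper are assumptions, not the study's actual generation code.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def tailored_story(hometown: str, concern: str) -> str:
    """Generate a short environmental story tied to one participant's hometown and top concern."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=600,
        messages=[{
            "role": "user",
            "content": (
                f"Write a short, factual environmental story set in {hometown}, "
                f"centered on {concern}. Keep it under 400 words."
            ),
        }],
    )
    return response.content[0].text

# Control participants would instead see one fixed, non-personalized story.
```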
Manipulation check: did participants actually perceive the difference?
Relevance to their concern:
t = -7.63, p < .001
Relevance to their place:
t = -4.44, p < .001
Treatment group found stories significantly more relevant
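For reference, a minimal sketch of the kind of independent-samples comparison behind those t-values; the data file and column names are hypothetical.

```python
# Sketch of the manipulation check: Welch's t-test on perceived relevance by condition.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("manipulation_check.csv")  # hypothetical file

for outcome in ["relevance_concern", "relevance_place"]:
    control = df.loc[df["condition"] == "control", outcome]
    treatment = df.loc[df["condition"] == "treatment", outcome]
    t, p = stats.ttest_ind(control, treatment, equal_var=False)
    print(f"{outcome}: t = {t:.2f}, p = {p:.3g}")
```

With the groups entered in this order (control first), a higher treatment mean produces negative t statistics like the ones reported above.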
📚 Learning: Quiz on story content
🌍 Awareness: Environmental concern frequency
😊 Affect: Mood and climate anxiety
✊ Behavior: Willingness to take action
The results get interesting
One quiz question showed strong effects:
(broken into true/false items; all items significant at p < .01)
Treatment group scored higher on this content
Across all learning questions?
Statistical significance: ❌
They learned something specific, but not everything
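The deck doesn't name the item-level test, so this is just one plausible way to check each true/false item separately; the file and item columns are hypothetical.

```python
# Sketch: per-item comparison of true/false accuracy between conditions,
# using a chi-squared test on each 2x2 (condition x correct) table.
# The CSV file and item columns are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("quiz_items.csv")  # hypothetical file; each item scored 0/1
items = ["item_1", "item_2", "item_3"]  # hypothetical item columns

for item in items:
    table = pd.crosstab(df["condition"], df[item])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"{item}: chi2({dof}) = {chi2:.2f}, p = {p:.3g}")
```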
One measure went the opposite direction
learn2_avg:
Control = 1.35
Treatment = 1.45
Difference = 0.10 on the response scale
Statistically significant ✅
Practically meaningful?
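To judge whether 0.10 is practically meaningful, a standardized effect size helps; the deck reports only the two means, so the pooled standard deviation below is an assumed placeholder.

```python
# Sketch: Cohen's d for the learn2_avg difference. The pooled SD is an assumed
# placeholder (only the two means are reported), so the result is illustrative.
control_mean, treatment_mean = 1.35, 1.45
pooled_sd = 0.5  # assumption for illustration; substitute the observed pooled SD

cohens_d = (treatment_mean - control_mean) / pooled_sd
print(f"Cohen's d ~ {cohens_d:.2f}")  # ~0.20 under this assumption: a small effect
```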
✅ Manipulation worked as intended
✅ Some significant learning effects
⚠️ But: Small effect sizes
⚠️ And: Mixed/inconsistent patterns
This is a proof of concept...
But how do we make it stronger?
Measurement:
Are we capturing learning appropriately?
Better scales or items?
Intervention:
Was a single story too light-touch?
Wrong timing for assessment?
Sample:
Missing important moderators?
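If a follow-up study targets effects as small as the ones seen here, a quick power calculation gives a sense of the sample required; the effect size, alpha, and power targets below are assumptions, not the study's plan.

```python
# Sketch: per-group sample size needed to detect a small effect (d = 0.2) with a
# two-sample t-test. Effect size, alpha, and power targets are assumptions.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(f"{math.ceil(n_per_group)} participants per group")  # 394 per group, ~788 total vs. N = 296 here
```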
What would you change about:
🔬 The design?
📏 The measures?
⚡ The intervention?
We're here to learn from you!
Questions? Suggestions? Critiques?