Terraform State Management: The Hidden Complexity That Breaks Teams


Your first Terraform apply worked perfectly. The second one, run by your colleague, just destroyed the production database. The culprit? A state file conflict that nobody saw coming.

This scenario plays out in organizations every week. Two engineers run terraform apply simultaneously. One overwrites the other’s changes. Resources drift. State locks fail silently. The infrastructure you think exists bears no resemblance to what’s actually running in AWS.

State management is where Terraform’s elegant simplicity collides with the messy reality of team collaboration. The tool that promised to make infrastructure reproducible and version-controlled becomes a source of anxiety, tribal knowledge, and 3 AM incident calls.

The problem isn’t Terraform itself—it’s that most teams treat state files as an afterthought. They start with local state because it’s easy. They migrate to S3 when things break. They add DynamoDB locking after the first corruption incident. Each fix addresses yesterday’s disaster while setting up tomorrow’s.

What separates teams that scale Terraform successfully from those drowning in state-related incidents isn’t luck or tooling budget. It’s understanding that state management is the core architectural decision, not a configuration detail. Get it wrong, and every terraform apply becomes a game of Russian roulette with your production environment.

The patterns that work for a solo developer experimenting in a sandbox account will actively sabotage a platform team managing hundreds of resources across multiple environments. Recognizing this gap—and knowing which patterns apply at which scale—determines whether Terraform becomes your competitive advantage or your biggest operational liability.

Before diving into state management strategies, we need to understand what makes Terraform fundamentally different from the scripts and tools you’ve used before.

Why Infrastructure as Code Isn’t Just ‘Scripts in Git’

Every team’s journey into Infrastructure as Code starts the same way: someone commits a shell script that provisions an EC2 instance, calls it “infrastructure as code,” and moves on. Six months later, that script has grown into 2,000 lines of bash, nobody knows what state the production environment is actually in, and a junior engineer just ran the script twice because it “looked like it failed.”

Visual: declarative vs procedural infrastructure management

This is the configuration drift problem, and it’s why treating IaC as “scripts in version control” fundamentally misses the point.

The Declarative Paradigm Shift

Traditional infrastructure automation—shell scripts, imperative SDK calls, even Ansible playbooks to some degree—follows a procedural model. You write step-by-step instructions: create this VPC, then attach this subnet, then launch this instance. The system executes your instructions in order, and you hope the end result matches what you intended.

Terraform inverts this model entirely. You describe the desired end state, and Terraform figures out how to get there. This distinction sounds academic until you realize its implications:

Procedural: “Create a security group. Add rule A. Add rule B. Attach to instance X.”

Declarative: “Instance X should exist with a security group containing rules A and B.”

The procedural approach breaks when reality diverges from your script’s assumptions. Someone manually added rule C through the console. Your script runs again, blissfully unaware, and now you have duplicate rules or cryptic errors. The declarative approach handles this gracefully—Terraform compares desired state against actual state and computes the minimal set of changes required.
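
In Terraform's HCL, the declarative version of that security group might look like this sketch (resource names and the CIDR range are illustrative):

```hcl
# Desired end state: the security group and its rules, declared together.
# Terraform reconciles reality against this description on every plan.
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Web tier ingress"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

If someone adds a rule through the console, the next plan surfaces it as a deviation from this declaration instead of blindly re-running creation steps.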

The Infrastructure Lifecycle

Terraform’s workflow centers on three operations: plan, apply, and destroy. This isn’t just a convenience feature; it’s the foundation of safe infrastructure management.

The plan phase computes a diff between your configuration and real-world infrastructure without modifying anything. This preview catches destructive changes before they happen—that innocent-looking rename that would actually destroy and recreate your production database.

The apply phase executes the plan, creating a transaction log of every change. The destroy phase tears down resources in the correct order, respecting dependencies that would cause failures if violated.

💡 Pro Tip: Never run terraform apply without reviewing the plan output first, especially in production. The five minutes you save will cost you five hours when an unexpected resource replacement takes down your service.
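
One way to make that review step non-optional is to save the plan to a file and apply exactly the plan that was reviewed:

```shell
# Save the plan, inspect it, then apply only what was reviewed.
terraform plan -out=release.tfplan
terraform show release.tfplan    # human-readable review of the saved plan
terraform apply release.tfplan   # errors if state changed since the plan was saved
```

Applying a saved plan file also guards against the window between review and execution: if the state has moved on, Terraform refuses to apply the stale plan.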

This lifecycle model is precisely why Terraform requires something procedural scripts don’t: a persistent record of what it has created. That record—the state file—is both Terraform’s greatest strength and, as we’ll explore next, your biggest operational liability.

The State File: Terraform’s Memory (and Your Biggest Liability)

Every Terraform deployment hinges on a single JSON file that most engineers never examine until something breaks catastrophically. The state file is Terraform’s source of truth—a complete record of every resource it manages, their current configurations, and the relationships between them. Without it, Terraform becomes blind to your infrastructure.

Visual: Terraform state file structure and security implications

When you run terraform apply, Terraform doesn’t enumerate everything that exists in your cloud account. It consults the state file to learn which resources it manages, refreshes their attributes, and calculates the delta against your configuration. This design choice enables Terraform’s speed and predictability, but it introduces a critical dependency that catches teams off guard.

Anatomy of a State File

A typical state file contains far more information than most engineers realize:

terraform.tfstate (excerpt)
{
  "version": 4,
  "terraform_version": "1.7.0",
  "resources": [
    {
      "type": "aws_db_instance",
      "name": "production",
      "instances": [
        {
          "attributes": {
            "password": "super_secret_password_123",
            "endpoint": "prod-db.abc123.us-east-1.rds.amazonaws.com",
            "username": "admin",
            "allocated_storage": 100
          }
        }
      ]
    }
  ]
}

Notice the password field stored in plaintext. Every sensitive value your Terraform configuration touches—database credentials, API keys, private certificates—lands in the state file unencrypted. This single file becomes the skeleton key to your entire infrastructure.

The state file also tracks:

  • Resource metadata: Provider-specific IDs, ARNs, and internal references
  • Dependency graphs: Which resources depend on others for creation order
  • Output values: Any sensitive data exposed through output blocks
  • Provider configurations: Including authentication details in some cases
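
The sensitive flag illustrates the gap between display and storage: it only redacts values in CLI output, not in the state itself. A minimal example, reusing the database resource from the excerpt above:

```hcl
# "sensitive" hides the value in plan/apply output and `terraform output`,
# but the raw endpoint still lands in the state file in plaintext.
output "db_endpoint" {
  value     = aws_db_instance.production.endpoint
  sensitive = true
}
```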

The Local State Trap

By default, Terraform stores state locally in terraform.tfstate. This works perfectly for solo developers experimenting on personal projects. It becomes a liability the moment a second engineer joins.

main.tf
## This configuration uses local state by default
## No backend block = state stored in the current directory
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
}

Consider what happens when two engineers run terraform apply against the same infrastructure with separate local state files. Each believes they have the authoritative view. Both make changes. The infrastructure diverges from both state files. Resources get orphaned. Worse, one engineer’s apply might destroy resources the other created, because their state file has no record of them existing.

💡 Pro Tip: Treat local state as a development convenience, never a team strategy. The moment you commit Terraform code to a shared repository, you need remote state.

Sensitive Data Exposure

The state file’s plaintext storage of secrets creates three distinct risks:

  1. Version control leaks: Committing terraform.tfstate to Git exposes every secret to anyone with repository access—forever, thanks to Git history
  2. Backup exposure: State files in local backups, developer machines, or CI artifacts become attack vectors
  3. Compliance violations: Storing unencrypted credentials violates most security frameworks (SOC 2, PCI-DSS, HIPAA)

Even with remote backends that encrypt state at rest, anyone with state read permissions sees everything in plaintext when they run terraform state pull. Your state access controls become your security perimeter.
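
Because state read access is the real perimeter, many teams scope it per state path. A sketch of a least-privilege grant, using the illustrative bucket and key names from this article:

```hcl
# Illustrative least-privilege policy: this principal can only touch one
# service's state object, not the whole state bucket.
data "aws_iam_policy_document" "api_gateway_state" {
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::mycompany-terraform-state-prod/services/api-gateway/terraform.tfstate"]
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::mycompany-terraform-state-prod"]
  }
}
```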

The state file transforms from a helpful bookkeeping mechanism into an operational risk the moment your team scales beyond one person or your infrastructure contains any sensitive data. Understanding this reality is the first step—configuring remote backends with proper access controls is the solution.

Remote State Backends: S3, Azure Blob, and GCS Configuration

The moment you commit a local terraform.tfstate file to Git, you’ve created a ticking time bomb. Two engineers run terraform apply simultaneously, and suddenly your production database exists in a quantum superposition of states—configured and corrupted at the same time. Remote state backends solve this by centralizing state storage and introducing locking mechanisms that prevent concurrent modifications.

The Bootstrap Problem

Remote state backends create an interesting chicken-and-egg scenario: you need infrastructure (an S3 bucket, DynamoDB table) to store state, but Terraform manages infrastructure using state. The solution is a two-phase approach—bootstrap your state infrastructure separately, then configure your main projects to use it.

bootstrap/main.tf
## Bootstrap resources - apply this ONCE with local state
resource "aws_s3_bucket" "terraform_state" {
  bucket = "mycompany-terraform-state-prod"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Apply this bootstrap configuration once, commit the resulting state file to a secure location (or accept the one-time local state), then never touch it again. This infrastructure becomes the foundation for all other Terraform projects in your organization. Some teams maintain this bootstrap configuration in a separate repository with restricted access, treating it as critical infrastructure that rarely changes.

Configuring the S3 Backend

With bootstrap infrastructure in place, configure your main projects to use remote state:

terraform.tf
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state-prod"
    key            = "services/api-gateway/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"
  }
}

The key parameter defines the path within your bucket—use a consistent hierarchy like {environment}/{service}/terraform.tfstate to keep state files organized and discoverable. This naming convention becomes invaluable when you’re debugging production issues at 2 AM and need to quickly locate the correct state file among dozens of services.

💡 Pro Tip: Backend configuration doesn’t support variables or interpolation. If you need environment-specific backends, use partial configuration with -backend-config flags during terraform init, or maintain separate backend files per environment.

For teams managing multiple environments, partial backend configuration keeps your code DRY while allowing environment-specific values:

Terminal window
terraform init -backend-config="environments/prod.backend.hcl"
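
The referenced file contains plain backend arguments—no terraform block, no interpolation. Something like this sketch (values are illustrative):

```hcl
# environments/prod.backend.hcl — merged into the backend block at init time
bucket         = "mycompany-terraform-state-prod"
key            = "services/api-gateway/terraform.tfstate"
region         = "us-east-1"
encrypt        = true
dynamodb_table = "terraform-state-locks"
```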

State Locking in Action

DynamoDB locking transforms Terraform from a single-user tool into a team-safe system. When an engineer runs terraform plan or apply, Terraform acquires a lock by writing a record to DynamoDB:

{
  "LockID": "mycompany-terraform-state-prod/services/api-gateway/terraform.tfstate",
  "Info": "{\"ID\":\"abc123\",\"Operation\":\"OperationTypeApply\",\"Who\":\"engineer@laptop\",\"Created\":\"2024-01-15T10:30:00Z\"}"
}

Any concurrent operation sees this lock and fails immediately with a clear message identifying who holds the lock and when they acquired it. This prevents the state corruption that occurs when two processes read the same state, make different changes, and race to write their versions back.

If a Terraform operation crashes or is interrupted, locks can become orphaned. Use terraform force-unlock <LOCK_ID> with extreme caution—only after confirming no other operations are genuinely running. Force-unlocking while another process is active leads to the exact corruption you’re trying to prevent.

Cross-Cloud Considerations

Azure Blob Storage and Google Cloud Storage follow similar patterns. Azure uses blob leasing for locking (no separate table required), while GCS relies on object versioning and generation numbers:

azure-backend.tf
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstatemycompany"
    container_name       = "tfstate"
    key                  = "prod/api-gateway.tfstate"
  }
}
gcs-backend.tf
terraform {
  backend "gcs" {
    bucket = "mycompany-terraform-state"
    prefix = "prod/api-gateway"
  }
}

GCS automatically handles locking through object generation numbers, making setup simpler than AWS. However, you must still enable object versioning on the bucket for recovery purposes.

Regardless of cloud provider, enable versioning on your state bucket. When state corruption does occur—and it will—versioning lets you recover by rolling back to a known-good state file rather than reconstructing your infrastructure mapping from scratch. Consider implementing lifecycle policies that retain state versions for at least 90 days, balancing storage costs against recovery flexibility.
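
On AWS, such a retention rule can live next to the bootstrap resources. A sketch against the state bucket defined in the bootstrap example (resource names follow that example):

```hcl
# Expire noncurrent state versions after 90 days; the current version
# is never touched by this rule.
resource "aws_s3_bucket_lifecycle_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    id     = "expire-old-state-versions"
    status = "Enabled"

    filter {} # apply to every object in the bucket

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```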

The combination of remote storage, encryption, and locking establishes the foundation for team-based Terraform workflows. But what happens when your state drifts from reality, or you need to adopt existing infrastructure into Terraform management? That’s where state surgery becomes essential.

State Surgery: Import, Move, and Remove Without Destruction

Every Terraform practitioner eventually faces the same challenge: existing infrastructure that wasn’t created through Terraform, or code that needs restructuring without destroying production resources. State manipulation commands are your surgical tools for these scenarios—powerful when used correctly, catastrophic when misused.

Importing Existing Infrastructure

When inheriting infrastructure or migrating from manual provisioning, terraform import bridges the gap between what exists and what Terraform knows about. The command maps a real resource to a Terraform resource address.

import-existing-s3.sh
## First, write the resource block in your configuration
## Then import the existing resource into state
terraform import aws_s3_bucket.legacy_data my-company-legacy-bucket
## For resources with complex IDs, check provider documentation
terraform import aws_db_instance.production arn:aws:rds:us-east-1:123456789:db:prod-mysql
## Import into a module
terraform import module.networking.aws_vpc.main vpc-0a1b2c3d4e5f67890

After importing, run terraform plan immediately. The output reveals configuration drift—every attribute where your written configuration differs from the imported state. Treat this as a checklist: update your configuration until the plan shows no changes.

💡 Pro Tip: On Terraform 1.5+, declare import blocks and run terraform plan -generate-config-out=generated.tf to generate configuration stubs for the resources you’re importing. This gives you a starting point that matches the actual resource attributes.
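
On Terraform 1.5 and later, imports can also be declared in configuration rather than run as one-off CLI commands—this declarative form is what -generate-config-out pairs with:

```hcl
# Declarative import (Terraform 1.5+): reviewed in a PR, executed by apply.
import {
  to = aws_s3_bucket.legacy_data
  id = "my-company-legacy-bucket"
}
```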

Moving Resources Without Recreation

Refactoring Terraform code often means renaming resources, moving them into modules, or restructuring your project layout. Without state manipulation, Terraform interprets these changes as “destroy old, create new”—unacceptable for production databases or stateful services.

refactor-state.sh
## Rename a resource
terraform state mv aws_instance.web aws_instance.application_server
## Move a resource into a module
terraform state mv aws_rds_cluster.main module.database.aws_rds_cluster.main
## Move an entire module
terraform state mv module.old_name module.new_name
## Move resources between state files (advanced)
terraform state mv -state=old.tfstate -state-out=new.tfstate \
aws_lambda_function.api module.api.aws_lambda_function.main

The state mv command updates only the state file’s internal mappings. The actual infrastructure remains untouched. Terraform’s next plan recognizes that the resource at its new address matches the existing infrastructure.
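
On Terraform 1.1 and later, renames can also be expressed declaratively with a moved block, which travels with the code so every collaborator’s state migrates on their next apply:

```hcl
# Declarative equivalent of the state mv rename above (Terraform 1.1+).
moved {
  from = aws_instance.web
  to   = aws_instance.application_server
}
```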

When using remote backends, always capture a local backup before surgery. Pull the state, keep the copy somewhere safe, and push it back only if you need to roll back a botched operation:

safe-state-surgery.sh
terraform state pull > backup.tfstate
## Perform your state operations, then verify with terraform state list
terraform state push backup.tfstate # Roll back: restores the pre-surgery state

The Remove Escape Hatch

Sometimes you need Terraform to forget a resource without destroying it—when transferring ownership to another team’s state file, when a resource will be managed manually, or when cleaning up after a failed import.

remove-from-state.sh
## Remove a single resource (infrastructure remains, Terraform forgets it)
terraform state rm aws_iam_role.legacy_service
## Remove all resources in a module
terraform state rm module.deprecated_service
## Verify the removal
terraform state list | grep legacy_service # Should return nothing

After removal, that resource becomes invisible to Terraform. Running terraform destroy won’t touch it. This is the correct approach when decommissioning Terraform management of specific resources without decommissioning the resources themselves.
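
Terraform 1.7 and later offer a declarative counterpart to state rm, with the advantage that it goes through code review:

```hcl
# Declarative equivalent of state rm (Terraform 1.7+): forget the
# resource without destroying it.
removed {
  from = aws_iam_role.legacy_service

  lifecycle {
    destroy = false
  }
}
```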

💡 Pro Tip: Before any state surgery, create a versioned backup. S3 bucket versioning on your state backend isn’t optional—it’s your recovery mechanism when state manipulation goes wrong.

State manipulation commands demand respect. They bypass Terraform’s normal planning and approval workflow, making direct modifications to your source of truth. Establish a team policy: all state surgery requires peer review, documented justification, and verified backups.

With import, move, and remove mastered, you’re equipped to handle most state emergencies. But prevention beats surgery—proper workspace isolation patterns reduce how often you need these tools in the first place.

Workspaces and State Isolation Patterns

Terraform workspaces appear deceptively simple: run terraform workspace new staging and you have isolated state. This simplicity masks a fundamental architectural decision that will shape your team’s workflow for years. Choose wrong, and you’ll spend months untangling the mess.

When Workspaces Work

Workspaces excel in one scenario: identical infrastructure across environments with only variable differences. A typical use case is deploying the same application stack to dev, staging, and production where the only changes are instance sizes, replica counts, and domain names.

environments.tf
locals {
  env_config = {
    dev = {
      instance_type = "t3.small"
      min_replicas  = 1
      domain        = "dev.example.com"
    }
    staging = {
      instance_type = "t3.medium"
      min_replicas  = 2
      domain        = "staging.example.com"
    }
    prod = {
      instance_type = "t3.large"
      min_replicas  = 3
      domain        = "example.com"
    }
  }

  config = local.env_config[terraform.workspace]
}

resource "aws_instance" "app" {
  instance_type = local.config.instance_type
  # ... remaining configuration
}

This pattern works because the infrastructure topology remains constant. Every environment gets the same resources with different parameters. Teams managing ephemeral environments—feature branches or short-lived testing clusters—also benefit from workspaces since they minimize configuration overhead while maintaining state isolation.
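
For reference, workspace switching is a per-directory CLI operation; with the S3 backend, each non-default workspace’s state is stored under an env:/ prefix in the same bucket:

```shell
terraform workspace new staging   # creates "staging" and switches to it
terraform workspace select prod   # switch the active workspace
terraform workspace list          # asterisk marks the active workspace
```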

The Workspace Trap

Workspaces become a trap when environments diverge structurally. Production needs a read replica, staging doesn’t. Dev requires a VPN connection for debugging, production uses a bastion host. These conditional resources create brittle configurations riddled with count = terraform.workspace == "prod" ? 1 : 0 expressions.

The deeper problem: workspaces share the same codebase at the same commit. Promoting changes from dev to prod means running the identical code, not deploying a tested artifact. There’s no mechanism for staging to run yesterday’s code while dev tests today’s changes. This constraint becomes particularly painful during incident response when you need to roll back production while preserving fixes in lower environments.

Directory-Based Isolation

For teams managing divergent environments or requiring independent deployment cycles, directory-based isolation provides explicit separation:

infrastructure/
├── modules/
│   ├── networking/
│   ├── compute/
│   └── database/
└── environments/
    ├── dev/
    │   ├── main.tf
    │   ├── backend.tf
    │   └── terraform.tfvars
    ├── staging/
    └── prod/

Each environment maintains its own state configuration, version pins, and module references. Promoting changes means updating module versions in each environment’s configuration—a deliberate, auditable process that integrates naturally with pull request workflows and change management procedures.

environments/prod/main.tf
module "networking" {
  # Local path sources can't carry a version argument, so pinning requires
  # a tagged source (registry or git); the URL here is illustrative
  source = "git::https://github.com/mycompany/terraform-modules.git//networking?ref=v2.1.0" # Pinned, promoted from staging

  environment = "prod"
  cidr_block  = "10.0.0.0/16"
}

The tradeoff is real: directory isolation introduces duplication. Backend configurations, provider versions, and boilerplate appear in each environment. Combat this with shared .tf files symlinked across environments or templating tools that generate environment scaffolding. Some teams adopt Terragrunt specifically to reduce this repetition while preserving isolation benefits.

💡 Pro Tip: Directory isolation adds overhead but provides blast radius containment. A malformed terraform apply in dev cannot accidentally destroy production resources—the state files are physically separate.

Choosing Your Strategy

Teams under five engineers managing simple, symmetric environments benefit from workspace simplicity. Larger teams, regulated industries, or architectures with structural differences between environments need directory isolation’s explicit boundaries. Compliance requirements often mandate this separation—auditors appreciate the clear delineation between environment configurations.

The decision framework: if you find yourself writing conditionals based on terraform.workspace more than twice, directory isolation will save you pain. The upfront duplication cost is lower than debugging workspace-conditional spaghetti at 2 AM. Consider also your CI/CD pipeline complexity—workspaces require careful orchestration to prevent concurrent operations, while directory isolation naturally parallelizes across environments.

With your isolation strategy established, the next challenge emerges: what happens when your carefully managed state drifts from reality?

Drift Detection and State Recovery Strategies

State drift is inevitable. Someone SSH’d into a server and changed a security group. A colleague “fixed” something through the AWS console during an incident. A third-party tool modified a resource Terraform thinks it owns. The gap between your state file and reality grows silently until your next terraform apply produces unexpected results—or worse, destroys resources you needed.

The teams that survive Terraform at scale aren’t the ones who prevent all drift. They’re the ones who detect it early and recover from it gracefully.

Detecting Drift Before It Detects You

The simplest drift detection mechanism is built into Terraform itself. Running terraform plan compares your state file against both your configuration and the actual infrastructure:

Terminal window
terraform plan -refresh-only -out=drift-check.tfplan

The -refresh-only flag tells Terraform to update the state file to match reality without proposing configuration changes. This isolates drift detection from your normal change workflow.

For automated detection, integrate this into your CI/CD pipeline. Here’s a GitHub Actions workflow that runs nightly drift checks:

.github/workflows/drift-detection.yml
name: Terraform Drift Detection

on:
  schedule:
    - cron: '0 6 * * *' # Daily at 6 AM UTC
  workflow_dispatch:

jobs:
  detect-drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init -backend-config="env/production.hcl"

      - name: Check for Drift
        id: drift
        run: |
          set -o pipefail # without this, $? would capture tee's exit code, not terraform's
          terraform plan -refresh-only -detailed-exitcode -out=drift.tfplan 2>&1 | tee drift-output.txt
          echo "exitcode=$?" >> "$GITHUB_OUTPUT"
        continue-on-error: true

      - name: Alert on Drift
        if: steps.drift.outputs.exitcode == '2'
        run: |
          echo "::warning::Infrastructure drift detected!"
          # Send to Slack, PagerDuty, or your alerting system

The -detailed-exitcode flag returns exit code 2 when drift exists, making it easy to trigger alerts without parsing output.

💡 Pro Tip: Run drift detection after business hours in your primary region. This catches changes made during the workday before they compound overnight.

Recovering from State Corruption

When state corruption occurs—and it will—your recovery options depend entirely on what you prepared beforehand.

S3 backend versioning is your first line of defense. With versioning enabled, every state write creates a recoverable snapshot:

Terminal window
## List recent state versions
aws s3api list-object-versions --bucket my-terraform-state --prefix prod/terraform.tfstate
## Restore a previous version
aws s3api get-object --bucket my-terraform-state --key prod/terraform.tfstate \
--version-id "abc123previousversion" terraform.tfstate.recovered

For more surgical recovery, terraform state pull and terraform state push let you edit state directly. This is dangerous but sometimes necessary:

Terminal window
## Export current state
terraform state pull > state-backup.json
## Edit the JSON (remove corrupted resource, fix malformed entries)
## Then push the corrected state
terraform state push state-backup.json

When all else fails, you can rebuild state from scratch using terraform import for each resource. This is painful but deterministic—import commands are documented and repeatable.

Building Resilient Recovery Processes

Document your recovery procedures before you need them. Create runbooks that specify who can push state changes, what approvals are required, and how to validate recovery success. Test these procedures quarterly by restoring state to a sandbox environment.

The investment in drift detection and recovery automation pays dividends when your next incident involves infrastructure changes made outside Terraform. Instead of a multi-hour forensic investigation, you’ll have a clear picture of what changed and a tested path back to consistency.

With drift under control, the remaining challenge is coordinating the humans who write and review Terraform changes across your team.

Team Workflows: From PR to Production

Infrastructure changes deserve the same rigor as application code. The difference: a typo in application code crashes a service, while a typo in Terraform can delete a database. Team workflows must account for this asymmetry.

Making Infrastructure Changes Reviewable

Raw terraform plan output is notoriously difficult to review in pull requests. It mixes noise with signal, burying the critical “will destroy” line among dozens of unchanged attributes. Effective teams solve this by posting formatted plan output directly to pull requests, where reviewers can see exactly what will change before approving.

The workflow follows a predictable pattern: engineer opens PR, automation runs terraform plan, plan output appears as a PR comment, reviewers approve based on the actual infrastructure diff, and merge triggers terraform apply. This creates an audit trail linking every infrastructure change to a specific PR, review, and approval.
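
A minimal version of the “plan as a PR comment” step can be sketched with actions/github-script (step names and the plan file path are assumptions, not a specific platform’s convention):

```yaml
- name: Terraform Plan
  run: terraform plan -no-color | tee plan.txt

- name: Post Plan to PR
  uses: actions/github-script@v7
  with:
    script: |
      const fs = require('fs');
      const plan = fs.readFileSync('plan.txt', 'utf8');
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: '```terraform\n' + plan + '\n```',
      });
```

Purpose-built tools like Atlantis or HCP Terraform, discussed next, handle truncation, stale-comment cleanup, and apply gating on top of this basic idea.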

Collaborative Workflow Platforms

Three platforms dominate this space, each with distinct trade-offs:

Atlantis runs as a self-hosted application that listens to GitHub/GitLab webhooks. It executes plans and applies in response to PR comments like atlantis plan and atlantis apply. Teams control their own infrastructure and secrets, but own the operational burden of running Atlantis itself.

HCP Terraform (formerly Terraform Cloud) provides HashiCorp’s managed solution. It handles remote execution, state management, and team access controls in a single platform. The integration with HashiCorp’s ecosystem is seamless, particularly for teams already using Vault or Consul.

Spacelift offers a more opinionated approach with native drift detection, stack dependencies, and approval policies. It excels at managing complex multi-stack environments where changes cascade across infrastructure layers.

Policy as Code: The Governance Layer

Automated workflows require automated guardrails. Policy as code enforces organizational standards before infrastructure deploys, not after an incident.

Sentinel integrates natively with HCP Terraform, evaluating policies between plan and apply. Teams define rules like “no public S3 buckets” or “all instances must have specific tags” that block non-compliant changes automatically.

Open Policy Agent (OPA) provides a vendor-neutral alternative. OPA evaluates Terraform plans against Rego policies, integrating with any CI/CD system. The learning curve is steeper, but the flexibility suits multi-tool environments.

💡 Pro Tip: Start with soft enforcement that warns on policy violations rather than blocking them. This builds team familiarity before policies become mandatory gates.

These workflows transform infrastructure management from a high-trust, high-risk activity into a collaborative, auditable process. The combination of remote state backends, proper isolation patterns, drift detection, and team workflows creates a foundation that scales with your organization’s growth and complexity.

Key Takeaways

  • Configure remote state with locking before your second team member runs terraform apply—retrofitting is painful
  • Treat state files as production data: encrypt at rest, version with S3 versioning, and never commit to git
  • Use terraform plan output in CI to make infrastructure PRs reviewable—your future self debugging at 2 AM will thank you
  • Master terraform state mv and import before you need them; these commands prevent the ‘destroy and recreate’ panic reflex