Building Hard Multi-Tenant Boundaries in Kubernetes with Istio Service Mesh


Your namespaces are not as isolated as you think. Last month, a misconfigured NetworkPolicy let tenant A’s debug pod curl tenant B’s internal API—and nobody noticed until the security audit. The pod had been running for three weeks. In that time, a junior developer troubleshooting connection issues had inadvertently mapped out half of another customer’s service topology through trial-and-error requests that should have been blocked.

This is the uncomfortable truth about Kubernetes multi-tenancy: namespaces provide logical separation, not security boundaries. They’re administrative conveniences—a way to organize resources and apply RBAC rules—but the network doesn’t care about your namespace labels. By default, any pod can talk to any other pod across the entire cluster. Your carefully named tenant-acme and tenant-globex namespaces are sharing the same flat network, the same DNS resolution, and often the same node pools.

Most teams discover this gap reactively. A penetration test reveals cross-namespace access. A compliance audit flags missing network segmentation. Or worse, a production incident exposes customer data because someone assumed namespace boundaries were firewall boundaries.

The fix isn’t abandoning multi-tenancy on shared clusters—the economics rarely justify dedicated clusters per tenant. Instead, you need defense in depth: network policies that default-deny cross-namespace traffic, cryptographic identity verification between services, and resource quotas that prevent noisy-neighbor attacks. Istio’s service mesh provides all three layers, but only if you configure them correctly.

The difference between “namespace-per-tenant” and “hard multi-tenant isolation” comes down to whether you’re relying on convention or enforcement. Let’s start by examining exactly where namespace isolation fails—and why NetworkPolicies alone aren’t enough to save you.

The Namespace Isolation Illusion

When platform teams first design multi-tenant Kubernetes clusters, namespaces appear to solve the isolation problem elegantly. Each tenant gets their own namespace, resource quotas keep consumption in check, and RBAC policies restrict who can access what. The architecture diagrams look clean, the kubectl commands work as expected, and everyone moves on.

Visual: namespace isolation boundaries

This is the namespace isolation illusion—the assumption that administrative boundaries automatically create security boundaries. They don’t.

What Namespaces Actually Provide

Namespaces offer organizational separation: distinct scopes for resource naming, targets for RBAC policies, and units for quota enforcement. A developer in tenant-a’s namespace cannot kubectl exec into tenant-b’s pods, assuming RBAC is configured correctly.
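
What "configured correctly" means is worth pinning down. Here is a minimal sketch of tenant-scoped RBAC—the Role, binding, and group names are illustrative:

tenant-a-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-developer
  namespace: tenant-a
rules:
# Read-only access to pods and logs; pods/exec is deliberately omitted
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-developer-binding
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-developer
  apiGroup: rbac.authorization.k8s.io

Because both objects are namespace-scoped, members of tenant-a-devs have no API access to tenant-b at all—and since the Role omits pods/exec, they cannot exec into tenant-a's pods either.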

But namespaces do nothing to prevent tenant-a’s application code from making HTTP requests to tenant-b’s services. By default, every pod in a Kubernetes cluster can communicate with every other pod across all namespaces. The flat network model that makes Kubernetes networking simple also makes it fundamentally insecure for multi-tenancy.

Common Misconfigurations That Break Isolation

Even teams aware of network boundaries frequently introduce gaps:

Missing NetworkPolicies on shared services. A logging aggregator or metrics collector deployed in a platform namespace accepts connections from all tenants. One tenant’s compromised workload can now poison logs, exfiltrate data through metric labels, or exploit vulnerabilities in shared components.

Service account token exposure. Default service account tokens mounted into pods can be used to query the Kubernetes API. Without tight RBAC, a tenant can discover that other namespaces exist, enumerate services, or read secrets it should never see (a mitigation sketch follows this list).

DNS as a reconnaissance tool. Kubernetes DNS allows any pod to resolve any service name cluster-wide. Tenants can map your entire service topology without sending a single packet to those services.

Sidecar-free workloads in mesh environments. When some pods bypass the service mesh, they operate outside your identity and authorization model entirely.
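
The second gap above—service account token exposure—has a cheap mitigation: opt the namespace's default service account out of token automounting, so pods that never call the Kubernetes API never hold credentials for it. A minimal sketch (workloads that genuinely need API access get a dedicated service account instead):

default-sa-no-token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: tenant-acme
# Pods using this service account receive no API token unless they
# explicitly set automountServiceAccountToken: true themselves
automountServiceAccountToken: false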

The Three Layers of Multi-Tenant Isolation

A defensible multi-tenant architecture requires isolation at three distinct layers:

Network isolation controls which workloads can establish connections to which endpoints. This is your first line of defense, implemented through NetworkPolicies or service mesh traffic rules.

Identity isolation ensures that when connections are allowed, both parties cryptographically prove who they are. Mutual TLS between services makes identity spoofing infeasible short of compromising the mesh's certificate authority.

Resource isolation prevents tenants from impacting each other through resource exhaustion—CPU, memory, storage, and API server request rates.

Namespaces support all three layers, but they implement none of them automatically. You need additional tooling to transform namespace boundaries into actual security boundaries.
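
The rest of this article focuses on the first two layers, so the third deserves a sketch here. A ResourceQuota per tenant namespace caps what any one tenant can consume—the limits below are illustrative:

tenant-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-acme
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
    services: "20"
    persistentvolumeclaims: "10"

Note that once a quota covers CPU or memory, the API server rejects pods that omit requests and limits, so pair the quota with a LimitRange that supplies defaults. API server request rates need separate controls, such as API Priority and Fairness.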

Network policies provide the foundation, but Istio’s service mesh capabilities take isolation further by adding cryptographic identity verification to every connection. Let’s start with the network layer.

Network-Level Isolation with Calico Policies

Kubernetes namespaces provide a logical boundary for organizing workloads, but they do nothing to prevent network traffic from flowing freely between them. By default, any pod can communicate with any other pod in the cluster, regardless of namespace. For multi-tenant environments, this creates an unacceptable security posture where a compromised workload in one tenant’s namespace can probe, attack, or exfiltrate data from another tenant’s services.

Calico network policies establish the foundational layer of tenant isolation by controlling traffic at the network level. Before implementing service mesh policies, you need this baseline to prevent lateral movement between tenant boundaries.

Implementing Default-Deny Policies

The first step in securing tenant namespaces is establishing a default-deny posture for both ingress and egress traffic. This ensures that no communication occurs unless explicitly permitted.

default-deny-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-acme
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Apply this policy to every tenant namespace. The empty podSelector matches all pods within the namespace, effectively blocking all traffic in both directions. With this in place, workloads cannot communicate with anything—including DNS, the Kubernetes API, or other pods within the same namespace.

💡 Pro Tip: Apply default-deny policies immediately after creating a tenant namespace, before deploying any workloads. This prevents accidental exposure during the deployment window.

Allowing Essential Cluster Services

After locking down the namespace, selectively permit traffic to essential cluster services. Most applications require DNS resolution and communication with the Kubernetes API server.

allow-essential-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-api
  namespace: tenant-acme
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    ports:
    - protocol: TCP
      port: 443

This policy allows DNS queries to CoreDNS and HTTPS traffic to the Kubernetes API server. Adjust the API server IP (10.96.0.1) to match your cluster's configuration—it is the ClusterIP of the kubernetes service in the default namespace. Be aware that some CNIs evaluate egress policy after service NAT, in which case you must allow the API server's endpoint IPs (kubectl get endpoints kubernetes) rather than the ClusterIP.

Permitting Intra-Tenant Communication

Pods within the same tenant namespace typically need to communicate with each other. Create a policy that allows traffic only when both source and destination reside in the same namespace.

allow-same-namespace.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: tenant-acme
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}

This configuration permits any pod in tenant-acme to communicate with any other pod in tenant-acme, while cross-namespace traffic remains blocked by the default-deny policy.

Blocking Cross-Tenant Traffic Explicitly

For defense in depth, add an explicit policy that denies traffic from other tenant namespaces. While the default-deny handles this, explicit deny policies survive accidental default-deny removal and provide clear documentation of your security intent.

deny-cross-tenant.yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: deny-cross-tenant-traffic
spec:
  selector: tenant in {'acme', 'globex', 'initech'}
  types:
  - Ingress
  - Egress
  ingress:
  - action: Deny
    source:
      selector: tenant in {'acme', 'globex', 'initech'}
      notSelector: tenant == "${tenant}"
  egress:
  - action: Deny
    destination:
      selector: tenant in {'acme', 'globex', 'initech'}
      notSelector: tenant == "${tenant}"

Two caveats make this policy work. First, ${tenant} is not Calico syntax—render one copy of the policy per tenant with your templating tool (Helm, Kustomize), substituting the placeholder and the policy name. Second, the selectors match pod labels, not namespace labels, so label every tenant pod with tenant: <tenant-name>; alternatively, Calico exposes namespace labels to selectors under the pcns. prefix if you would rather label namespaces.
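
For reference, a tenant workload carrying the label those selectors match might look like this—the deployment name and image are illustrative:

tenant-acme-workload.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
  namespace: tenant-acme
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
        tenant: acme  # matched by the Calico policy selectors above
    spec:
      containers:
      - name: api
        image: registry.example.com/acme/api:1.0  # illustrative
        ports:
        - containerPort: 8080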

Network policies provide strong isolation at layers 3 and 4, but they cannot inspect application-layer traffic or verify workload identity. A compromised pod with valid network access can still impersonate legitimate services. This is where Istio’s mutual TLS enforcement adds the next layer of defense.

Identity-Based Isolation with Istio mTLS

Namespace boundaries and network policies operate at Layer 3 and 4—they know about IP addresses and ports, but nothing about who is actually making a request. A compromised pod with a valid IP can still communicate with any service its network policy permits. Istio’s mutual TLS (mTLS) adds cryptographic identity to every workload, creating an isolation layer that attackers cannot spoof without compromising the mesh’s certificate authority.

How Istio Assigns Workload Identity

When you deploy a pod in an Istio-enabled namespace, the sidecar proxy (Envoy) requests a certificate from the Istio control plane (istiod). This certificate encodes the workload’s identity using the SPIFFE (Secure Production Identity Framework for Everyone) standard:

spiffe://cluster.local/ns/tenant-alpha/sa/order-service

This identity string contains three critical pieces of information:

  • Trust domain: cluster.local (your cluster’s root of trust)
  • Namespace: tenant-alpha (the Kubernetes namespace)
  • Service account: order-service (the workload’s Kubernetes service account)

Every service-to-service call now carries cryptographic proof of the caller’s identity. The receiving service validates this certificate against the mesh’s trust anchor before accepting any traffic. No valid certificate, no connection—regardless of network access.

Enforcing Strict mTLS Cluster-Wide

By default, Istio operates in permissive mode, accepting both plaintext and mTLS traffic to ease migration. For hard multi-tenant isolation, you need strict mode:

strict-mtls.yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT

Applying this policy in the istio-system namespace enforces mTLS mesh-wide. Every connection between sidecars now requires valid certificates. Services outside the mesh—or attackers attempting to inject traffic—receive connection resets.

For tenant-specific enforcement during migration, apply the policy at the namespace level:

tenant-alpha-mtls.yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: tenant-alpha
spec:
  mtls:
    mode: STRICT

💡 Pro Tip: Use istioctl analyze to detect misconfigurations before they cause outages. It catches common issues like services not included in the mesh attempting to call mTLS-only endpoints.

Why SPIFFE Identity Matters for Multi-Tenancy

The namespace component in SPIFFE identities gives you tenant attribution at the cryptographic layer. When Tenant Alpha’s order-service calls Tenant Beta’s inventory-service, the receiving proxy sees exactly which namespace originated the request—not just an IP address that could belong to anyone.

This identity becomes the foundation for authorization policies. Instead of maintaining IP allowlists that change with every pod restart, you write policies against stable identities:

deny-cross-tenant.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
name: deny-cross-tenant
namespace: tenant-beta
spec:
action: DENY
rules:
- from:
- source:
notNamespaces:
- tenant-beta
- shared-services

This policy denies any request to tenant-beta that doesn't originate from tenant-beta itself or the shared-services namespace. The enforcement happens at the sidecar level, using cryptographically verified identity—not easily spoofed network metadata.

Certificate rotation happens automatically every 24 hours by default, limiting the blast radius of any key compromise. The short-lived credentials mean that even if an attacker extracts a certificate from a compromised pod, it becomes worthless within hours.

With cryptographic identity in place, you have the building blocks for fine-grained access control. Network policies answer “can these pods communicate?” while mTLS answers “who is this service, provably?” The next section explores how AuthorizationPolicies combine these identities with request-level attributes to implement precise tenant isolation rules.

Fine-Grained Access Control with AuthorizationPolicies

Network policies and mTLS establish strong identity foundations, but they operate at the transport layer. A compromised service with valid credentials can still probe endpoints, enumerate APIs, and attempt lateral movement within its network segment. Istio’s AuthorizationPolicies add application-layer enforcement that validates not just who is connecting, but what they’re allowed to do.

Deny-by-Default at the Mesh Level

The first rule of multi-tenant authorization: explicit denies beat implicit allows. Start by rejecting all traffic that lacks explicit permission.

mesh-deny-all.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system
spec: {}

This empty policy in istio-system applies mesh-wide. An empty spec is an ALLOW policy that matches nothing—and once any ALLOW policy applies to a workload, Istio denies every request that matches no policy. Every service becomes unreachable until you explicitly grant access—a posture that forces intentional security decisions rather than accidental exposure.

The deny-all approach fundamentally changes your security model. Traditional network security operates on “allow unless explicitly denied,” which means new services are exposed by default until someone remembers to lock them down. Deny-by-default inverts this: new deployments remain isolated until operators consciously define their communication requirements. This shift catches misconfigurations before they become vulnerabilities.

💡 Pro Tip: Deploy the deny-all policy during a maintenance window. Existing connections survive, but new requests fail immediately. Have your tenant-specific policies ready before applying the mesh-wide deny.

Writing Tenant-Scoped AuthorizationPolicies

With the baseline deny in place, build allowlists for each tenant’s communication patterns. These policies combine identity assertions from mTLS with request inspection.

tenant-alpha-api-policy.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: api-gateway-policy
  namespace: tenant-alpha
spec:
  selector:
    matchLabels:
      app: api-gateway
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["tenant-alpha"]
        principals: ["cluster.local/ns/tenant-alpha/sa/frontend-service"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/v1/orders/*", "/api/v1/products/*"]
  - from:
    - source:
        namespaces: ["tenant-alpha"]
        principals: ["cluster.local/ns/tenant-alpha/sa/admin-service"]
    to:
    - operation:
        methods: ["GET", "POST", "DELETE"]
        paths: ["/api/v1/*", "/admin/*"]

This policy demonstrates defense in depth. Even if a service in tenant-beta somehow obtains network access to tenant-alpha’s API gateway (through a misconfigured network policy or a node-level compromise), the request fails. The source namespace doesn’t match, the service account principal is wrong, and the authorization layer rejects it before the application processes a single byte.

The principals field deserves special attention. Istio derives these identities from the SPIFFE certificates issued during mTLS handshakes. Unlike IP addresses or DNS names, these identities are cryptographically bound to Kubernetes service accounts. An attacker cannot spoof them without compromising the certificate authority itself—a significantly higher bar than network-level attacks.

Combining Namespace, Service Account, and Path-Based Rules

Real isolation requires composing multiple conditions. Consider a scenario where tenants share a logging infrastructure but need strict boundaries on what they can write.

shared-logging-policy.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: logging-ingestion-policy
  namespace: platform-logging
spec:
  selector:
    matchLabels:
      app: log-collector
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["tenant-alpha"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/ingest/tenant-alpha/*"]
    when:
    - key: request.headers[x-tenant-id]
      values: ["alpha"]
  - from:
    - source:
        namespaces: ["tenant-beta"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/ingest/tenant-beta/*"]
    when:
    - key: request.headers[x-tenant-id]
      values: ["beta"]

The when conditions add header validation. Services must present matching tenant identifiers that correspond to their source namespace. An attacker who compromises tenant-alpha cannot write logs to tenant-beta’s path, even by spoofing headers—the namespace check fails first.

This layered validation creates multiple independent checkpoints. The request must pass namespace verification, path matching, and header inspection. Each layer operates independently, so bypassing one still leaves others intact. This redundancy matters because security failures rarely happen in isolation—they cascade through single points of weakness.

For audit-sensitive environments, add request logging through a second policy:

audit-logging-policy.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: audit-all-requests
  namespace: tenant-alpha
spec:
  selector:
    matchLabels:
      app: payment-service
  action: AUDIT
  rules:
  - to:
    - operation:
        paths: ["/transactions/*"]

The AUDIT action doesn't block requests but generates detailed access records for compliance review—note that it takes effect only when a supported audit provider (such as Stackdriver) is configured for the mesh. Combined with ALLOW policies, you maintain both security enforcement and forensic capability. These audit trails prove invaluable during incident response—they show exactly which principals accessed sensitive endpoints, when, and from where.

AuthorizationPolicies survive network misconfigurations because they validate at the application protocol level. A bypassed Calico rule still hits the Envoy proxy. A spoofed IP address still lacks the correct mTLS identity. The layers compound, making successful attacks require compromising multiple independent systems—network controls, certificate infrastructure, and application-layer policies simultaneously.

With authorization boundaries established, the next challenge emerges: how do you safely expose shared platform services—monitoring, logging, secret management—without creating cross-tenant attack surfaces? The answer lies in careful policy composition for shared infrastructure.

Shared Services Without Shared Risk

Multi-tenant architectures inevitably require shared services. Centralized logging aggregates data from all tenants. Monitoring systems collect metrics across namespaces. A shared ingress gateway routes traffic to the correct tenant workloads. These services must communicate with every tenant—yet this broad access creates the exact attack surface you’ve worked to eliminate.

Visual: shared services architecture

The challenge: grant shared services access to tenant workloads without allowing tenants to abuse that trust relationship. A malicious tenant shouldn’t impersonate the logging service to access another tenant’s data. Similarly, a compromised shared service shouldn’t become a pivot point for lateral movement across tenant boundaries.

Request Principals as Tenant Context

Istio’s mTLS provides cryptographically verified identity through SPIFFE IDs. When a shared service receives a request, it knows exactly which workload sent it. You can leverage this identity to enforce tenant context throughout the request chain, creating an audit trail that follows every operation from origin to completion.

First, ensure your shared services validate the source identity on every request:

shared-logging-authz.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: logging-collector-policy
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: log-collector
  rules:
  - from:
    - source:
        namespaces: ["tenant-acme", "tenant-globex", "tenant-initech"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/logs/ingest"]

This policy restricts the log collector to accept requests only from known tenant namespaces. But namespace validation alone isn’t sufficient—you need to track which tenant made each request. Without this tracking, shared services become blind aggregators with no way to enforce tenant-specific access controls on the data they collect.

Propagating Tenant Identity

Shared services must carry tenant context through their operations. Configure Istio to forward the authenticated principal as a header that your services can use for tenant-scoped operations:

envoyfilter-tenant-header.yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: inject-tenant-header
  namespace: shared-services
spec:
  workloadSelector:
    labels:
      app: log-collector
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(handle)
              -- uriSanPeerCertificate() returns a table of URI SANs; for an
              -- Istio workload the first entry is its SPIFFE ID
              local sans = handle:streamInfo():downstreamSslConnection():uriSanPeerCertificate()
              if sans ~= nil and #sans > 0 then
                -- replace() overwrites any client-supplied value, so the
                -- caller cannot smuggle in its own header
                handle:headers():replace("x-tenant-identity", sans[1])
              end
            end

Your logging service now receives a cryptographically verified x-tenant-identity header. The service uses this header to route logs to tenant-specific indices—Tenant A’s logs never mix with Tenant B’s storage. This same pattern applies to metrics collection, where tenant identity determines which Prometheus instance or label set receives the data.

The Lua filter extracts the identity directly from the TLS certificate and overwrites any client-supplied value, so application code cannot tamper with the tenant context. This separation of concerns means your application developers can focus on business logic while the service mesh handles identity propagation automatically.

Preventing Tenant Impersonation

The critical protection: tenants must never forge shared service identities. This AuthorizationPolicy denies any request where a tenant workload attempts to claim a shared service identity:

deny-impersonation.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-shared-service-impersonation
  namespace: istio-system
spec:
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["shared-services"]
        principals: ["cluster.local/ns/shared-services/sa/*"]
  - from:
    - source:
        notNamespaces: ["shared-services"]
    when:
    - key: request.headers[x-tenant-identity]
      values: ["*"]

This mesh-wide policy blocks two attack vectors: workloads outside shared-services claiming shared service identities, and any non-shared-service workload injecting the x-tenant-identity header. The first rule prevents identity spoofing at the mTLS layer, while the second rule stops header injection attacks that could confuse downstream services.

Consider what happens without these protections. An attacker who compromises a tenant workload could craft requests with forged x-tenant-identity headers, potentially accessing logs or metrics from other tenants. With the denial policy in place, such requests fail before reaching the shared service.

💡 Pro Tip: Apply impersonation-denial policies in the mesh root namespace (istio-system) so they cover every workload; Istio evaluates DENY policies before any ALLOW policy can admit a request. Defense in depth means attackers must bypass multiple controls.

The pattern extends to any shared infrastructure: API gateways, secret management services, or tenant provisioning systems. Each shared service accepts connections from multiple tenants while maintaining strict accountability for every operation. When you add new shared services, replicate this pattern—restrict inbound sources, propagate tenant identity, and block impersonation attempts at the mesh layer.
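
As a concrete instance, a shared metrics collector would get the same inbound restriction as the log collector. A sketch, with the workload label and ingest path as assumptions:

metrics-collector-authz.yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: metrics-collector-policy
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: metrics-collector
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["tenant-acme", "tenant-globex", "tenant-initech"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/metrics/ingest"]

Reuse the header-injection EnvoyFilter by adding this workload to its selector; the mesh-wide impersonation-denial policy already covers it.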

With shared services properly isolated, you have a complete multi-tenant architecture. The final step is validating that your isolation model actually works—before an attacker tests it for you.

Validating Your Isolation Model

Trust but verify. After implementing network policies, mTLS, and authorization policies, you need concrete evidence that your tenant boundaries hold under adversarial conditions. This section covers systematic approaches to validate your isolation model and maintain confidence over time.

Testing with Deliberate Policy Violations

The most reliable way to validate isolation is to actively attempt to break it. Deploy a test workload in one tenant namespace and try to access another tenant’s services.

test-isolation.sh
#!/bin/bash
## Deploy a curl pod in tenant-alpha namespace
kubectl run curl-test --image=curlimages/curl:8.5.0 \
  -n tenant-alpha --restart=Never \
  --command -- sleep 3600
## Wait for pod readiness
kubectl wait --for=condition=Ready pod/curl-test -n tenant-alpha --timeout=60s
## Attempt cross-tenant access (should be denied)
kubectl exec curl-test -n tenant-alpha -- \
  curl -s -o /dev/null -w "%{http_code}" --max-time 10 \
  http://api-service.tenant-beta.svc.cluster.local:8080/health
## Expected: 403 if Istio's AuthorizationPolicy rejects the request, or a
## curl timeout (exit code 28) if Calico drops the connection first
## Verify intra-tenant access still works
kubectl exec curl-test -n tenant-alpha -- \
  curl -s -o /dev/null -w "%{http_code}" --max-time 10 \
  http://api-service.tenant-alpha.svc.cluster.local:8080/health
## Expected output: 200
## Cleanup
kubectl delete pod curl-test -n tenant-alpha

Run these tests as part of your CI/CD pipeline after any policy changes. A passing deployment pipeline means nothing if isolation has regressed.
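
One way to wire the script into CI—a hypothetical GitHub Actions job, where the trigger paths, secret name, and runner tooling are all assumptions about your setup:

isolation-tests.yaml
name: isolation-tests
on:
  push:
    paths:
    - "policies/**"
jobs:
  verify-isolation:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    # Assumes a base64-encoded kubeconfig stored as a repository secret
    - name: Configure cluster access
      run: |
        mkdir -p $HOME/.kube
        echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > $HOME/.kube/config
    - name: Run isolation tests
      run: ./test-isolation.sh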

Detecting Policy Drift with istioctl

Configuration drift is the silent killer of security postures. Use istioctl analyze to catch misconfigurations before they reach production.

analyze-policies.sh
## Analyze entire mesh for policy issues
istioctl analyze --all-namespaces
## Check specific tenant namespace
istioctl analyze -n tenant-alpha
## Validate against a staged configuration before applying
istioctl analyze -f new-authorization-policy.yaml
## Export analysis results for compliance reporting
istioctl analyze --all-namespaces -o json > policy-analysis-$(date +%Y%m%d).json

💡 Pro Tip: Integrate istioctl analyze into your GitOps workflow. Block merges to your infrastructure repository when analysis returns warnings or errors.

Continuous Compliance Monitoring

Enable Istio’s access logging to create an audit trail of all service-to-service communication.

enable-audit-logging.sh
## Patch the Istio configmap to enable JSON access logging
## Caution: this merge replaces the entire "mesh" entry—fold these keys into
## your existing mesh config, or set them via istioctl install instead
kubectl patch configmap istio -n istio-system --type merge -p '{
  "data": {
    "mesh": "accessLogFile: /dev/stdout\naccessLogEncoding: JSON"
  }
}'
## Access logs are written by each workload's Envoy sidecar, not by istiod.
## Query a tenant's sidecars for denied requests:
kubectl logs -n tenant-alpha -l app=api-service -c istio-proxy --since=1h | \
  jq -R 'fromjson? | select(.response_code == 403) |
    {destination: .authority, code: .response_code, flags: .response_flags}'
## To log caller identity, extend accessLogFormat with %DOWNSTREAM_PEER_URI_SAN%

Feed these logs into your SIEM or observability platform. Alert on any 403 responses between tenant namespaces—they indicate either misconfiguration or attempted unauthorized access. Both warrant investigation.

Schedule weekly reviews of denied requests grouped by source and destination. Patterns reveal either legitimate integration needs you haven’t accounted for or potential security incidents requiring escalation.
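
If you run the Prometheus Operator, Istio's standard request metrics can drive that review automatically. A sketch of an alert rule—the namespace pattern, threshold, and rule placement are assumptions:

cross-tenant-denial-alert.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cross-tenant-denials
  namespace: monitoring
spec:
  groups:
  - name: tenant-isolation
    rules:
    - alert: CrossTenantAccessDenied
      # istio_requests_total labels each request with source and
      # destination namespaces plus the HTTP response code
      expr: |
        sum by (source_workload_namespace, destination_service_namespace) (
          rate(istio_requests_total{response_code="403",
            destination_service_namespace=~"tenant-.*"}[5m])
        ) > 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Denied requests from {{ $labels.source_workload_namespace }} to {{ $labels.destination_service_namespace }}"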

With validation in place, you’ve built a defense-in-depth strategy for multi-tenant isolation that you can demonstrate to auditors and customers alike. The combination of Calico network policies, Istio mTLS, and authorization policies creates overlapping security boundaries that fail safely when any single layer is misconfigured.

Key Takeaways

  • Start with default-deny NetworkPolicies in every tenant namespace before adding any allow rules
  • Enable Istio strict mTLS mode cluster-wide and use AuthorizationPolicies to enforce tenant boundaries at L7
  • Test your isolation by attempting cross-tenant access from within the cluster—if your curl succeeds, your isolation failed
  • Treat shared services as trust boundaries and require explicit tenant identity in every request