Base64 is encoding. Not encryption. But most Kubernetes tutorials treat it like security.
If you've ever stored a database password in a Kubernetes Secret manifest, you've probably felt a little uneasy about it. You should. Because that "secret" is one base64 --decode command away from being readable by anyone with cluster access.
I learned this the hard way while building a production-grade retail store application on EKS. The solution involved three AWS services working together - and the result is a secrets management setup that your security team will actually approve.
Here's the complete journey from "please don't" to "production-ready."
The Problem: Base64 Is Not Security
Let's start with what most tutorials teach you. Here's a standard Kubernetes Secret manifest:
apiVersion: v1
kind: Secret
metadata:
  name: catalog-db
data:
  MYSQL_USER: "Y2F0YWxvZ191c2Vy"
  MYSQL_PASSWORD: "TXlEQl9wQHNzMTIz"

Looks encrypted, right? It's not. Run this command:
echo "TXlEQl9wQHNzMTIz" | base64 --decode

Output: MyDB_p@ss123
That's a real database password. Decoded in one second. By anyone.
Here's what's actually happening with native Kubernetes Secrets:
Secrets are stored in etcd (the cluster's key-value store)
The "encoding" is Base64 - a reversible transformation, not encryption
Anyone with kubectl get secret -o yaml access can see every secret
There's no built-in audit trail (who accessed which secret, when?)
Rotation means editing YAML and redeploying - manually, every time
Secrets are scattered across manifests in your Git repository
For learning and development? Native Kubernetes Secrets are fine. For production workloads handling real customer data? We need something significantly better.
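To make this concrete, here's a minimal sketch that decodes every value in a Secret with nothing but coreutils. The here-doc simulates the data section of the manifest above; in a live cluster you'd pipe in the output of kubectl get secret catalog-db -o yaml instead.

```shell
# Simulated `kubectl get secret catalog-db -o yaml` data section -
# against a real cluster, pipe the actual kubectl output instead.
secret_data='MYSQL_USER: Y2F0YWxvZ191c2Vy
MYSQL_PASSWORD: TXlEQl9wQHNzMTIz'

# Decode every key-value pair. No special tooling, no privileges beyond read.
echo "$secret_data" | while read -r key value; do
  printf '%s=%s\n' "${key%:}" "$(printf '%s' "$value" | base64 --decode)"
done
# MYSQL_USER=catalog_user
# MYSQL_PASSWORD=MyDB_p@ss123
```

Two lines of shell, and every credential in the manifest is plaintext.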

The Production Solution: Three Components Working Together
The fix isn't a single tool - it's three AWS services that integrate with Kubernetes to create a proper secrets pipeline:
1. AWS Secrets Manager - The vault. Secrets live here, encrypted, access-controlled, and auditable.
2. EKS Pod Identity Agent - The authenticator. Pods prove who they are via IAM - no static credentials anywhere.
3. Secrets Store CSI Driver + ASCP - The delivery mechanism. Secrets from AWS get mounted as files inside your pods.
Here's the architecture:
AWS Secrets Manager (encrypted, audited)
↓ (IAM authentication via Pod Identity)
EKS Cluster
├── Pod Identity Agent (DaemonSet on every node)
├── Secrets Store CSI Driver (DaemonSet)
└── AWS Secrets Provider / ASCP (DaemonSet)
↓
Pod Volume Mount
└── /mnt/secrets-store/
    ├── MYSQL_USER (file)
    └── MYSQL_PASSWORD (file)

No secrets in Kubernetes manifests. No secrets in etcd. No Base64 encoding anywhere. Your application reads credentials from a file path - just like reading any other configuration file.
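From the application's point of view, the mount is just a directory of plain files. A quick sketch of that reading pattern - simulating the mount path with a temp directory, since /mnt/secrets-store only exists inside a pod:

```shell
# Simulate the CSI mount locally; inside a pod this directory
# would be /mnt/secrets-store, populated by the driver at startup.
SECRETS_DIR="$(mktemp -d)"
printf 'mydbadmin'    > "$SECRETS_DIR/MYSQL_USER"
printf 'MyDB_p@ss123' > "$SECRETS_DIR/MYSQL_PASSWORD"

# The app reads credentials exactly like any other config file.
DB_USER="$(cat "$SECRETS_DIR/MYSQL_USER")"
DB_PASS="$(cat "$SECRETS_DIR/MYSQL_PASSWORD")"
echo "Connecting as $DB_USER"
```

No SDK calls, no Kubernetes API access - the driver has already done the fetching before the container starts.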

Step 1: Store Secrets in AWS Secrets Manager
First, we move the source of truth outside the cluster entirely. AWS Secrets Manager stores your credentials encrypted with KMS, with full CloudTrail logging of every access.
aws secretsmanager create-secret \
  --name catalog-db-secret-1 \
  --description "MySQL credentials for Catalog microservice" \
  --secret-string '{
    "MYSQL_USER": "mydbadmin",
    "MYSQL_PASSWORD": "MyDB_p@ss123"
  }'

What you get for free: Encryption at rest, automatic rotation support, fine-grained IAM access control, and a complete audit trail via CloudTrail. Every time any pod reads this secret, it's logged.
Step 2: EKS Pod Identity - IAM Authentication Without Static Keys
This is the part that replaced the older IRSA (IAM Roles for Service Accounts) approach. EKS Pod Identity is simpler, more secure, and requires no OIDC provider configuration.
How it works:
You create an IAM Role that trusts pods.eks.amazonaws.com
You create a Pod Identity Association linking a Kubernetes ServiceAccount to that IAM Role
When a pod using that ServiceAccount starts, the EKS Pod Identity Webhook automatically injects credential environment variables
The Pod Identity Agent (a DaemonSet running on every node) exchanges the pod's projected token for temporary AWS credentials
Your application's AWS SDK uses these credentials transparently
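For reference, here's a sketch of the trust policy on that IAM Role - this is what lets the Pod Identity Agent assume the role on a pod's behalf (your role may add conditions on cluster or account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```

Note the principal: pods.eks.amazonaws.com, not an OIDC provider URL. That's the whole difference from IRSA on the IAM side.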
The IAM policy scoped to only your specific secret:
{
  "Effect": "Allow",
  "Action": [
    "secretsmanager:GetSecretValue",
    "secretsmanager:DescribeSecret"
  ],
  "Resource": "arn:aws:secretsmanager:us-east-1:ACCOUNT_ID:secret:catalog-db-secret*"
}

Why this matters: The IAM policy grants access only to catalog-db-secret* - not all secrets in your account. This is least-privilege access at the secret level. If a pod is compromised, the attacker can only read the secrets that specific ServiceAccount is authorized for. Nothing else.
Creating the Pod Identity Association:
aws eks create-pod-identity-association \
  --cluster-name retail-dev-eksdemo1 \
  --namespace default \
  --service-account catalog-mysql-sa \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/catalog-db-secrets-role

That's it. No OIDC provider to configure. No annotation gymnastics. One CLI command binds a Kubernetes ServiceAccount to an IAM Role.
Step 3: CSI Driver + ASCP - Mounting Secrets as Files
The SecretProviderClass - the bridge between AWS and Kubernetes:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: catalog-db-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "catalog-db-secret-1"
        objectType: "secretsmanager"
        jmesPath:
          - path: "MYSQL_USER"
            objectAlias: "MYSQL_USER"
          - path: "MYSQL_PASSWORD"
            objectAlias: "MYSQL_PASSWORD"
    usePodIdentity: "true"

This tells the CSI Driver: "Fetch catalog-db-secret-1 from AWS Secrets Manager, extract the MYSQL_USER and MYSQL_PASSWORD JSON fields, and mount them as separate files."
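The pod wires this up through a CSI volume. A sketch of the relevant deployment fields, assuming the ServiceAccount and SecretProviderClass names from the examples above:

```yaml
spec:
  serviceAccountName: catalog-mysql-sa   # bound to the IAM role via Pod Identity
  containers:
    - name: mysql
      volumeMounts:
        - name: catalog-db-creds
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: catalog-db-creds
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: catalog-db-secrets
```

When the pod schedules, the driver authenticates via Pod Identity, fetches the secret, and writes MYSQL_USER and MYSQL_PASSWORD as files under the mount path before the container starts.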
The Before and After: Side by Side
BEFORE - Native Kubernetes Secrets:
Secret manifest with Base64 values in YAML
MySQL reads via secretKeyRef environment variables
Credentials stored in etcd
No audit trail
Manual rotation requires redeployment
AFTER - AWS Secrets Manager + CSI Driver:
No Secret manifest at all
MySQL reads from /mnt/secrets-store/MYSQL_USER
Credentials sourced from AWS at pod startup
Never stored in etcd
CloudTrail logs every access
Rotate in AWS - pods pick up new values
The application barely changes: instead of environment variables injected from a Kubernetes Secret, it reads the same values from files under /mnt/secrets-store. The only difference is where those credentials come from.
The Cost Question
$0.40 per secret per month (storage)
$0.05 per 10,000 API calls (access)
For a typical app with 5-10 secrets: ~$5/month total.
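A back-of-envelope check of that estimate, assuming 10 secrets and roughly 200,000 API calls a month (both assumed figures for illustration):

```shell
# Back-of-envelope: 10 secrets stored + ~200k GetSecretValue calls/month.
awk 'BEGIN {
  storage = 10 * 0.40                 # $0.40 per secret per month
  calls   = (200000 / 10000) * 0.05   # $0.05 per 10,000 API calls
  printf "monthly: $%.2f\n", storage + calls
}'
# monthly: $5.00
```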
The average cost of a data breach? $4.45 million (IBM 2023 report).
Five dollars a month for production-grade secrets management is the best infrastructure insurance you'll ever buy.
The Progression in the Course
Section 9 teaches this progressively across four demos:
Demo 1 (09-01): Native Kubernetes Secrets - understand the baseline
Demo 2 (09-02): EKS Pod Identity Agent - IAM auth without static credentials
Demo 3 (09-03): CSI Driver + ASCP Setup - install the infrastructure
Demo 4 (09-04): Production Integration - put it all together
By the end, you'll have a working retail store application where no credentials exist inside the Kubernetes cluster.
Explore the Course GitHub Repo
GitHub Repo: devops-real-world-project-implementation-on-aws
Star the repo if you find it useful!
Join 3,500+ Students Building Production-Ready Skills
Where are your secrets stored right now? Hit reply and tell me - I read every response. And if you're still using Base64-encoded YAML files in production, this is your sign to make the switch.
Kalyan Reddy Daida | StackSimplify