Claude Code Plugins

Community-maintained marketplace

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file
Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: k8s-security
description: Kubernetes and OpenShift security assessment, hardening, and compliance. Use this skill when: (1) Auditing cluster or workload security posture (2) Implementing Pod Security Standards/Admission (3) Configuring RBAC roles and permissions (4) Setting up NetworkPolicies for zero-trust (5) Managing Secrets securely (encryption, external secrets) (6) Scanning images for vulnerabilities (7) Implementing OCP SecurityContextConstraints (8) Compliance checking (CIS benchmarks, SOC2, PCI-DSS) (9) Security incident investigation (10) Hardening cluster components

Kubernetes / OpenShift Security Guide

Command Usage Convention

IMPORTANT: This skill uses kubectl as the primary command in all examples. When working with:

  • OpenShift/ARO clusters: Replace all kubectl commands with oc
  • Standard Kubernetes clusters (AKS, EKS, GKE, etc.): Use kubectl as shown

The agent will automatically detect the cluster type and use the appropriate command.
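One simple way to detect the cluster type is to probe for an OpenShift-only API group. A minimal sketch (assumes kubectl is on PATH even against OpenShift; the project.openshift.io group is served by OpenShift/ARO but not by vanilla Kubernetes):

# Pick the CLI based on whether OpenShift-specific API groups are served
if kubectl api-versions 2>/dev/null | grep -q '^project.openshift.io/'; then
  CLI=oc        # OpenShift / ARO
else
  CLI=kubectl   # AKS, EKS, GKE, vanilla Kubernetes
fi
"${CLI}" get nodes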

Comprehensive security assessment, hardening, and compliance for cluster-code managed clusters.

Security Assessment Workflow

  1. Inventory: Identify workloads, namespaces, service accounts
  2. Audit: Run security scans, check configurations
  3. Classify: Risk level based on exposure and sensitivity
  4. Remediate: Apply hardening based on priority
  5. Monitor: Continuous compliance verification
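For step 1, a quick read-only inventory pass might look like the following sketch (adjust output formats as needed):

# Namespaces and their Pod Security labels
kubectl get ns --show-labels

# Workloads across all namespaces
kubectl get deployments,statefulsets,daemonsets,cronjobs -A

# Service accounts and the bindings that grant them access
kubectl get serviceaccounts -A
kubectl get rolebindings,clusterrolebindings -A

# Externally exposed surfaces
kubectl get services,ingress -A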

Pod Security Standards (PSS)

Levels

| Level | Description | Use Case |
|---|---|---|
| privileged | Unrestricted | System workloads only |
| baseline | Minimally restrictive | Standard workloads |
| restricted | Heavily restricted | Security-sensitive workloads |

Enforcement Modes

| Mode | Behavior |
|---|---|
| enforce | Reject violating pods |
| audit | Log violations, allow pods |
| warn | Warn user, allow pods |

Namespace Configuration

apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}
  labels:
    # Enforce restricted standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Audit for baseline (catch less severe issues)
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: latest
    # Warn for restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
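Before switching a namespace to enforce, you can preview which running pods would violate the target level. A server-side dry-run of the label change reports would-be violations as warnings (assumes Kubernetes 1.25+ with the built-in PodSecurity admission plugin):

# Preview violations without changing anything
kubectl label --dry-run=server --overwrite ns ${NAMESPACE} \
  pod-security.kubernetes.io/enforce=restricted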

Restricted Profile Requirements

# Pod spec must include ALL of these for restricted compliance:
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        # Optional but recommended:
        readOnlyRootFilesystem: true
        runAsNonRoot: true

RBAC Best Practices

Principle of Least Privilege

# BAD: Overly permissive
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]

# GOOD: Specific permissions
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
    resourceNames: ["app-config"]  # Even more specific

Role vs ClusterRole

| Type | Scope | Use When |
|---|---|---|
| Role | Namespace | App-specific permissions |
| ClusterRole | Cluster-wide | Cross-namespace or cluster resources |

Common RBAC Patterns

Read-Only Application Access

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: ${NAMESPACE}
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]

CI/CD Deployment Access

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: ${NAMESPACE}
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

Monitoring Access

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]

RBAC Audit Commands

# List all roles/clusterroles
kubectl get roles,clusterroles -A

# Check what a service account can do
kubectl auth can-i --list --as=system:serviceaccount:${NS}:${SA}

# Check specific permission
kubectl auth can-i create deployments --as=system:serviceaccount:${NS}:${SA} -n ${NS}

# Find all bindings for a subject
kubectl get rolebindings,clusterrolebindings -A -o json | \
  jq -r --arg sa "${SA}" '.items[] | select(.subjects[]?.name==$sa) | .metadata.name'

# Identify overly permissive roles
kubectl get clusterroles -o json | jq -r \
  '.items[] | select(.rules[]?.verbs | contains(["*"])) | .metadata.name' | sort -u

NetworkPolicy Zero-Trust

Default Deny All

# Apply to every namespace for zero-trust baseline
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: ${NAMESPACE}
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Allow DNS Egress (Required)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: ${NAMESPACE}
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # Single peer entry: kube-dns pods *in* the kube-system namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
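After applying default-deny plus allow-dns, it is worth confirming that name resolution still works from inside the namespace. A quick check with a throwaway pod (assumes the busybox image is pullable and that CoreDNS carries the k8s-app=kube-dns label, as it does on most distributions):

# DNS should resolve; any other egress should now be blocked
kubectl run dns-check --rm -it --restart=Never -n ${NAMESPACE} \
  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local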

Allow Ingress from Ingress Controller

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: ${NAMESPACE}
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ${APP_NAME}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080

Allow Inter-Service Communication

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: ${NAMESPACE}
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: frontend
      ports:
        - protocol: TCP
          port: 8080
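Because the default-deny baseline blocks egress as well as ingress, the ingress rule above is only half of the pair: the frontend pods also need a matching egress allowance, for example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress-to-backend
  namespace: ${NAMESPACE}
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: backend
      ports:
        - protocol: TCP
          port: 8080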

Secrets Management

Secret Encryption at Rest

# /etc/kubernetes/encryption-config.yaml (on control plane)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${BASE64_ENCODED_32_BYTE_KEY}
      - identity: {}
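Existing Secrets are only rewritten with the new provider the next time they are updated. After enabling the encryption configuration (and restarting the API server so it picks up the file), the standard step is to force a rewrite of every Secret so it is stored encrypted:

# Rewrite all secrets so they are persisted with the new encryption key
kubectl get secrets --all-namespaces -o json | kubectl replace -f -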

External Secrets Operator

# ClusterSecretStore for HashiCorp Vault
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: ${NAMESPACE}
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: apps/${APP_NAME}
        property: database_url
    - secretKey: API_KEY
      remoteRef:
        key: apps/${APP_NAME}
        property: api_key

Sealed Secrets (GitOps-friendly)

# Create the plain Secret manifest locally (never commit this file)
kubectl create secret generic app-secrets -n ${NAMESPACE} \
  --from-literal=DATABASE_URL=${DATABASE_URL} --from-literal=API_KEY=${API_KEY} \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it with the kubeseal CLI (requires the Sealed Secrets controller)
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml

# Apply the sealed secret (the controller decrypts it in-cluster)
kubectl apply -f sealed-secret.yaml

# The resulting SealedSecret manifest is safe to commit to Git:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: app-secrets
  namespace: ${NAMESPACE}
spec:
  encryptedData:
    DATABASE_URL: AgBy8hCi...  # Encrypted value
    API_KEY: AgA5mKpQ...       # Encrypted value

OpenShift Security Context Constraints

SCC Hierarchy (Most to Least Restrictive)

  1. restricted-v2 - Default, most restrictive
  2. restricted - Legacy restricted
  3. nonroot-v2 - Must run as non-root
  4. nonroot - Legacy non-root
  5. hostnetwork-v2 - Allow host network
  6. hostnetwork - Legacy host network
  7. hostmount-anyuid - Host mounts, any UID
  8. hostaccess - Host access
  9. anyuid - Run as any UID
  10. privileged - Full privileges (avoid!)

Check SCC Assignment

# See which SCC a pod is using
oc get pod ${POD} -n ${NS} -o yaml | grep scc

# List available SCCs
oc get scc

# Describe SCC requirements
oc describe scc restricted-v2

# Check SA SCC permissions
oc adm policy who-can use scc restricted-v2

Grant SCC to Service Account

# Add SCC to SA (requires admin)
oc adm policy add-scc-to-user ${SCC} -z ${SERVICE_ACCOUNT} -n ${NAMESPACE}

# Via RoleBinding (preferred for GitOps)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${SA}-scc-${SCC}
  namespace: ${NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:${SCC}
subjects:
  - kind: ServiceAccount
    name: ${SERVICE_ACCOUNT}
    namespace: ${NAMESPACE}

Custom SCC Template

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: custom-restricted
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
  ranges:
    - min: 1000
      max: 65534
priority: null
readOnlyRootFilesystem: true
requiredDropCapabilities:
  - ALL
runAsUser:
  type: MustRunAsRange
  uidRangeMin: 1000
  uidRangeMax: 65534
seLinuxContext:
  type: MustRunAs
seccompProfiles:
  - runtime/default
supplementalGroups:
  type: MustRunAs
  ranges:
    - min: 1000
      max: 65534
users: []
groups: []
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret

Image Security

Image Scanning Integration

# Trivy Operator: the operator automatically generates a VulnerabilityReport
# (aquasecurity.github.io/v1alpha1) for each workload container; list them with:
kubectl get vulnerabilityreports -A

# Manual scan with trivy
trivy image ${IMAGE}:${TAG}

# Scan with severity filter
trivy image --severity HIGH,CRITICAL ${IMAGE}:${TAG}

# Output as JSON for processing
trivy image -f json -o results.json ${IMAGE}:${TAG}

Image Policy (OCP)

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    # Block all registries except allowed
    allowedRegistries:
      - quay.io
      - registry.redhat.io
      - image-registry.openshift-image-registry.svc:5000
    # Alternatively, block specific registries (mutually exclusive with allowedRegistries):
    # blockedRegistries:
    #   - docker.io
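To confirm what is currently allowed or blocked on an OpenShift cluster, inspect the cluster-wide image configuration (the same cluster resource shown above):

# Review the current registry allow/block lists
oc get image.config.openshift.io/cluster -o yaml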

Admission Controller for Image Verification

# Kyverno policy example
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ${PUBLIC_KEY}
                      -----END PUBLIC KEY-----
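A quick way to confirm the policy is active is to try creating a pod from the matched registry using an unsigned image and check that admission rejects it (registry.example.com/unsigned-app is a hypothetical image reference for illustration):

# Expect the request to be denied by the Kyverno admission webhook
kubectl run sig-test -n ${NAMESPACE} --image=registry.example.com/unsigned-app:latest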

Security Audit Scripts

Cluster Security Scan

#!/bin/bash
# security-audit.sh - Comprehensive security audit

echo "=== Privileged Pods ==="
kubectl get pods -A -o json | jq -r '
  .items[] | select(.spec.containers[].securityContext.privileged==true) |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Pods Running as Root ==="
kubectl get pods -A -o json | jq -r '
  .items[] | select(.spec.securityContext.runAsNonRoot!=true) |
  select(.spec.containers[].securityContext.runAsNonRoot!=true) |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Pods Without Security Context ==="
kubectl get pods -A -o json | jq -r '
  .items[] | select(.spec.securityContext==null) |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Pods Without Resource Limits ==="
kubectl get pods -A -o json | jq -r '
  .items[] | select(.spec.containers[].resources.limits==null) |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Namespaces Without NetworkPolicy ==="
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  count=$(kubectl get networkpolicy -n $ns --no-headers 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "$ns"
  fi
done

echo -e "\n=== ServiceAccounts with Secrets Auto-mounted ==="
kubectl get sa -A -o json | jq -r '
  .items[] | select(.automountServiceAccountToken!=false) |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Secrets in Environment Variables ==="
kubectl get pods -A -o json | jq -r '
  .items[] | select(.spec.containers[].env[]?.valueFrom.secretKeyRef!=null) |
  "\(.metadata.namespace)/\(.metadata.name)"'

RBAC Audit

#!/bin/bash
# rbac-audit.sh - RBAC security audit

echo "=== ClusterRoles with Wildcard Permissions ==="
kubectl get clusterroles -o json | jq -r '
  .items[] | 
  select(.rules[]? | (.apiGroups[]? == "*") or (.resources[]? == "*") or (.verbs[]? == "*")) |
  .metadata.name'

echo -e "\n=== ClusterRoleBindings to default ServiceAccount ==="
kubectl get clusterrolebindings -o json | jq -r '
  .items[] | select(.subjects[]? | .name=="default" and .kind=="ServiceAccount") |
  "\(.metadata.name) -> \(.roleRef.name)"'

echo -e "\n=== Roles Granting Secret Access ==="
kubectl get roles -A -o json | jq -r '
  .items[] | select(.rules[]? | .resources[]? == "secrets") |
  "\(.metadata.namespace)/\(.metadata.name)"'

echo -e "\n=== Users/Groups with cluster-admin ==="
kubectl get clusterrolebindings -o json | jq -r '
  .items[] | select(.roleRef.name=="cluster-admin") |
  .subjects[]? | "\(.kind): \(.name)"'

CIS Benchmark Checks

Key Control Plane Checks

| ID | Check | Command |
|---|---|---|
| 1.1.1 | API server pod spec permissions | stat -c %a /etc/kubernetes/manifests/kube-apiserver.yaml |
| 1.2.1 | Anonymous auth disabled | Check --anonymous-auth=false |
| 1.2.6 | RBAC enabled | Check --authorization-mode includes RBAC |
| 1.2.16 | Audit logging enabled | Check --audit-log-path |

Key Worker Node Checks

| ID | Check | Command |
|---|---|---|
| 4.1.1 | Kubelet service file permissions | stat -c %a /etc/systemd/system/kubelet.service.d/ |
| 4.2.1 | Anonymous auth disabled | Check kubelet config authentication.anonymous.enabled=false |
| 4.2.6 | Protect kernel defaults | Check --protect-kernel-defaults=true |

Automated CIS Scanning

# Using kube-bench
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench

# OpenShift Compliance Operator
oc get compliancescan -n openshift-compliance
oc get compliancecheckresult -n openshift-compliance
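On OpenShift, scans are typically started by binding a compliance profile to scan settings. A minimal sketch, assuming the Compliance Operator is installed in openshift-compliance and the bundled ocp4-cis profile is present:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan
  namespace: openshift-compliance
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1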

Incident Response

Compromised Pod Investigation

# 1. Isolate the pod (NetworkPolicy)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-${POD}
  namespace: ${NS}
spec:
  podSelector:
    matchLabels:
      $(kubectl get pod ${POD} -n ${NS} -o jsonpath='{.metadata.labels}' | jq -r 'to_entries | map("\(.key): \(.value)") | .[0]')
  policyTypes:
    - Ingress
    - Egress
EOF

# 2. Capture pod state
kubectl get pod ${POD} -n ${NS} -o yaml > pod-evidence.yaml
kubectl describe pod ${POD} -n ${NS} > pod-describe.txt
kubectl logs ${POD} -n ${NS} --all-containers > pod-logs.txt

# 3. Check for suspicious processes
kubectl exec ${POD} -n ${NS} -- ps aux
kubectl exec ${POD} -n ${NS} -- netstat -tulpn

# 4. Check file system changes (if possible)
kubectl exec ${POD} -n ${NS} -- find / -mtime -1 -type f 2>/dev/null

# 5. Review RBAC for the pod's service account
SA=$(kubectl get pod ${POD} -n ${NS} -o jsonpath='{.spec.serviceAccountName}')
kubectl auth can-i --list --as=system:serviceaccount:${NS}:${SA}

Security Event Timeline

# Get events sorted by time
kubectl get events -A --sort-by='.lastTimestamp' -o custom-columns=\
TIME:.lastTimestamp,\
TYPE:.type,\
NAMESPACE:.metadata.namespace,\
REASON:.reason,\
MESSAGE:.message | grep -i "fail\|error\|deny\|forbidden"