Claude Code Plugins

Community-maintained marketplace

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reviewing its instructions before using it.

SKILL.md

---
name: k8s-gitops
description: GitOps workflows and CI/CD pipeline integration for Kubernetes and OpenShift. Use this skill when: (1) Setting up ArgoCD or Flux for GitOps deployment (2) Creating CI/CD pipelines for K8s workloads (GitHub Actions, GitLab CI, Tekton) (3) Implementing progressive delivery (Canary, Blue-Green, A/B testing) (4) Configuring Kustomize overlays for multi-environment deployments (5) Creating Helm charts or managing Helm releases (6) Setting up image automation and promotion workflows (7) Implementing policy-as-code (Kyverno, OPA Gatekeeper) (8) Secret management in GitOps (Sealed Secrets, External Secrets, SOPS) (9) Multi-cluster GitOps configurations (10) OpenShift Pipelines (Tekton) and GitOps Operator setup
---

Kubernetes / OpenShift GitOps & CI/CD

Command Usage Convention

IMPORTANT: This skill uses kubectl as the primary command in all examples. When working with:

  • OpenShift/ARO clusters: Replace all kubectl commands with oc
  • Standard Kubernetes clusters (AKS, EKS, GKE, etc.): Use kubectl as shown

The agent will automatically detect the cluster type and use the appropriate command.

GitOps workflows, CI/CD integration, and progressive delivery patterns for cluster-code managed clusters.

GitOps Principles

  1. Declarative: Entire system described declaratively in Git
  2. Versioned: Git as single source of truth with history
  3. Automated: Changes automatically applied to cluster
  4. Auditable: All changes tracked via Git commits

ArgoCD Setup

Installation

# Create namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for pods
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s

# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Access UI (port-forward for testing)
kubectl port-forward svc/argocd-server -n argocd 8080:443
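
With the port-forward running, you can log in with the argocd CLI. A minimal sketch; the --insecure flag is only appropriate for this self-signed test setup:

# Log in using the initial admin password retrieved above
argocd login localhost:8080 --username admin --password <initial-admin-password> --insecure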

ArgoCD Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ${APP_NAME}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: ${GIT_REPO_URL}
    targetRevision: ${BRANCH:-main}
    path: ${MANIFEST_PATH}
    # For Kustomize
    kustomize:
      namePrefix: ${PREFIX:-}
      nameSuffix: ${SUFFIX:-}
      images:
        - ${IMAGE_NAME}=${NEW_IMAGE}:${TAG}
    # For Helm
    # helm:
    #   releaseName: ${RELEASE_NAME}
    #   valueFiles:
    #     - values-${ENV}.yaml
    #   parameters:
    #     - name: image.tag
    #       value: ${TAG}
  destination:
    server: https://kubernetes.default.svc
    namespace: ${NAMESPACE}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  revisionHistoryLimit: 10
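
To register and sync the Application, a minimal sketch (application.yaml is an assumed filename for the manifest above):

# Apply the Application and inspect its sync state
kubectl apply -f application.yaml
argocd app get ${APP_NAME}

# Manual sync is only needed when automated sync is disabled
argocd app sync ${APP_NAME}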

ArgoCD ApplicationSet (Multi-Environment)

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: ${APP_NAME}-appset
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
            namespace: ${APP_NAME}-dev
            cluster: https://kubernetes.default.svc
          - env: staging
            namespace: ${APP_NAME}-staging
            cluster: https://kubernetes.default.svc
          - env: prod
            namespace: ${APP_NAME}-prod
            cluster: https://prod-cluster.example.com
  template:
    metadata:
      name: '${APP_NAME}-{{env}}'
    spec:
      project: default
      source:
        repoURL: ${GIT_REPO_URL}
        targetRevision: main
        path: 'overlays/{{env}}'
      destination:
        server: '{{cluster}}'
        namespace: '{{namespace}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
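
Once applied, the generated per-environment Applications can be inspected (assuming an argocd CLI version with ApplicationSet support, v2.3+):

argocd appset get ${APP_NAME}-appset
argocd app list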

ArgoCD Project (RBAC)

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ${PROJECT_NAME}
  namespace: argocd
spec:
  description: ${DESCRIPTION}
  sourceRepos:
    - ${GIT_REPO_URL}
    - 'https://github.com/org/*'
  destinations:
    - namespace: '${NAMESPACE_PREFIX}-*'
      server: https://kubernetes.default.svc
    - namespace: '*'
      server: https://prod-cluster.example.com
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
  namespaceResourceBlacklist:
    - group: ''
      kind: ResourceQuota
    - group: ''
      kind: LimitRange
  roles:
    - name: developer
      description: Developer access
      policies:
        - p, proj:${PROJECT_NAME}:developer, applications, get, ${PROJECT_NAME}/*, allow
        - p, proj:${PROJECT_NAME}:developer, applications, sync, ${PROJECT_NAME}/*, allow
      groups:
        - ${DEV_GROUP}

Flux CD Setup

Installation

# Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash

# Bootstrap with GitHub
flux bootstrap github \
  --owner=${GITHUB_ORG} \
  --repository=${REPO_NAME} \
  --branch=main \
  --path=clusters/${CLUSTER_NAME} \
  --personal
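
After bootstrap, verify that the controllers are healthy and the sync objects exist:

flux check
flux get sources git
flux get kustomizations -A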

Flux GitRepository

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: ${APP_NAME}
  namespace: flux-system
spec:
  interval: 1m
  url: ${GIT_REPO_URL}
  ref:
    branch: main
  secretRef:
    name: ${GIT_SECRET}
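
If the repository is private, the referenced secret can be created with the Flux CLI. A sketch, assuming GIT_TOKEN holds a personal access token:

flux create secret git ${GIT_SECRET} \
  --url=${GIT_REPO_URL} \
  --username=git \
  --password=${GIT_TOKEN} \
  --namespace=flux-system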

Flux Kustomization

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ${APP_NAME}
  namespace: flux-system
spec:
  interval: 10m
  targetNamespace: ${NAMESPACE}
  sourceRef:
    kind: GitRepository
    name: ${APP_NAME}
  path: ./overlays/${ENV}
  prune: true
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: ${APP_NAME}
      namespace: ${NAMESPACE}
  timeout: 2m
  postBuild:
    substitute:
      ENV: ${ENV}
      IMAGE_TAG: ${TAG}
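
To force a reconciliation outside the configured interval and check health:

flux reconcile kustomization ${APP_NAME} --with-source
flux get kustomizations ${APP_NAME}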

Flux Image Automation

# Image Repository (watch for new tags)
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: ${APP_NAME}
  namespace: flux-system
spec:
  image: ${REGISTRY}/${IMAGE_NAME}
  interval: 1m
  secretRef:
    name: ${REGISTRY_SECRET}
---
# Image Policy (select which tags to use)
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: ${APP_NAME}
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: ${APP_NAME}
  policy:
    semver:
      range: '>=1.0.0'
    # Or use timestamp-based
    # alphabetical:
    #   order: asc
    # numerical:
    #   order: asc
---
# Image Update Automation
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageUpdateAutomation
metadata:
  name: ${APP_NAME}
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: ${APP_NAME}
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        email: flux@example.com
        name: Flux
      messageTemplate: 'chore: automated image update for {{.AutomationObject.Name}}'
    push:
      branch: main
  update:
    path: ./overlays
    strategy: Setters
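
The Setters strategy only rewrites fields carrying image-policy markers, so the overlay being updated needs comments like these in its kustomization.yaml (a sketch; the marker must reference the ImagePolicy's namespace and name from above):

images:
  - name: ${IMAGE_NAME}
    newName: ${REGISTRY}/${IMAGE_NAME} # {"$imagepolicy": "flux-system:${APP_NAME}:name"}
    newTag: v1.0.0 # {"$imagepolicy": "flux-system:${APP_NAME}:tag"}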

Kustomize

Base Structure

app/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patches/
    ├── staging/
    │   ├── kustomization.yaml
    │   └── patches/
    └── prod/
        ├── kustomization.yaml
        └── patches/

Base kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: ${APP_NAME}

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

commonLabels:
  app.kubernetes.io/name: ${APP_NAME}
  app.kubernetes.io/managed-by: kustomize

images:
  - name: ${APP_NAME}
    newName: ${REGISTRY}/${IMAGE_NAME}
    newTag: latest

Overlay kustomization.yaml (Production)

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ${APP_NAME}-prod

resources:
  - ../../base
  - hpa.yaml
  - pdb.yaml
  - networkpolicy.yaml

namePrefix: prod-

commonLabels:
  environment: production

images:
  - name: ${APP_NAME}
    newName: ${REGISTRY}/${IMAGE_NAME}
    newTag: v1.2.3  # Pin to specific version

replicas:
  - name: ${APP_NAME}
    count: 3

patches:
  - path: patches/resources.yaml
  - path: patches/probes.yaml
  - target:
      kind: Deployment
      name: ${APP_NAME}
    patch: |-
      - op: add
        path: /spec/template/spec/affinity
        value:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: ${APP_NAME}
                topologyKey: kubernetes.io/hostname

configMapGenerator:
  - name: ${APP_NAME}-config
    behavior: merge
    literals:
      - LOG_LEVEL=warn
      - ENABLE_DEBUG=false

secretGenerator:
  - name: ${APP_NAME}-secrets
    type: Opaque
    files:
      - secrets/api-key.txt
    options:
      disableNameSuffixHash: true

Resource Patch Example

# patches/resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
spec:
  template:
    spec:
      containers:
        - name: ${APP_NAME}
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 2000m
              memory: 2Gi
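
Before committing an overlay change, render and diff it locally (kubectl diff requires cluster access):

# Render the production overlay
kustomize build overlays/prod

# Compare the rendered manifests against live cluster state
kustomize build overlays/prod | kubectl diff -f -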

Helm

Chart Structure

${CHART_NAME}/
├── Chart.yaml
├── values.yaml
├── values-dev.yaml
├── values-prod.yaml
├── templates/
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── hpa.yaml
│   ├── pdb.yaml
│   └── NOTES.txt
└── charts/           # Dependencies

Chart.yaml

apiVersion: v2
name: ${CHART_NAME}
description: ${DESCRIPTION}
type: application
version: 0.1.0
appVersion: "1.0.0"
maintainers:
  - name: ${MAINTAINER}
    email: ${EMAIL}
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: "https://charts.bitnami.com/bitnami"
    condition: postgresql.enabled
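
After declaring dependencies, fetch them into charts/ (this reads Chart.yaml, so run it against the chart directory):

helm dependency update ${CHART_NAME}/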

values.yaml

# Default values
replicaCount: 1

image:
  repository: ${REGISTRY}/${IMAGE_NAME}
  tag: "latest"
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  name: ""
  annotations: {}

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: false
  className: nginx
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: Prefix
  tls: []

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

nodeSelector: {}
tolerations: []
affinity: {}

# Application-specific
config:
  logLevel: info
  environment: development

# External services
postgresql:
  enabled: false
  auth:
    database: app
    username: app

Template Example (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "${CHART_NAME}.fullname" . }}
  labels:
    {{- include "${CHART_NAME}.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "${CHART_NAME}.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      labels:
        {{- include "${CHART_NAME}.selectorLabels" . | nindent 8 }}
    spec:
      serviceAccountName: {{ include "${CHART_NAME}.serviceAccountName" . }}
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          env:
            - name: LOG_LEVEL
              value: {{ .Values.config.logLevel | quote }}
          envFrom:
            - configMapRef:
                name: {{ include "${CHART_NAME}.fullname" . }}-config
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            httpGet:
              path: /ready
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
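
A typical local loop for the chart above, sketched with the values-prod.yaml file from the chart structure:

# Validate chart structure and templates
helm lint ${CHART_NAME}/

# Render locally to inspect the output
helm template ${CHART_NAME} ${CHART_NAME}/ -f ${CHART_NAME}/values-prod.yaml

# Install or upgrade in one step
helm upgrade --install ${CHART_NAME} ${CHART_NAME}/ \
  -n ${NAMESPACE} --create-namespace \
  -f ${CHART_NAME}/values-prod.yaml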

CI/CD Pipelines

GitHub Actions

# .github/workflows/ci-cd.yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          # Add test commands

  build:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      
      - name: Login to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=
            type=ref,event=branch
            type=semver,pattern={{version}}
      
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-dev:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    steps:
      - uses: actions/checkout@v4
      
      - name: Update Kustomize image tag
        run: |
          cd overlays/dev
          kustomize edit set image ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:${{ github.sha }}
      
      - name: Commit and push
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          git add .
          git commit -m "chore: update dev image to ${{ github.sha }}"
          git push

  deploy-prod:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - uses: actions/checkout@v4
      
      - name: Update Kustomize image tag
        run: |
          cd overlays/prod
          kustomize edit set image ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:${{ github.sha }}
      
      - name: Create PR for production
        uses: peter-evans/create-pull-request@v5
        with:
          title: "Deploy ${{ github.sha }} to production"
          body: "Auto-generated PR to deploy commit ${{ github.sha }}"
          branch: deploy/prod-${{ github.sha }}

Tekton Pipeline (OpenShift)

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
  namespace: ${NAMESPACE}
spec:
  params:
    - name: git-url
      type: string
    - name: git-revision
      type: string
      default: main
    - name: image-name
      type: string
    - name: deployment-namespace
      type: string
  
  workspaces:
    - name: shared-workspace
    - name: docker-credentials
  
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output
          workspace: shared-workspace
    
    - name: build-image
      taskRef:
        name: buildah
        kind: ClusterTask
      runAfter:
        - git-clone
      params:
        - name: IMAGE
          value: $(params.image-name):$(tasks.git-clone.results.commit)
        - name: DOCKERFILE
          value: ./Dockerfile
      workspaces:
        - name: source
          workspace: shared-workspace
        - name: dockerconfig
          workspace: docker-credentials
    
    - name: update-manifest
      taskRef:
        name: kustomize-update
      runAfter:
        - build-image
      params:
        - name: image
          value: $(params.image-name):$(tasks.git-clone.results.commit)
        - name: overlay-path
          value: overlays/$(params.deployment-namespace)
      workspaces:
        - name: source
          workspace: shared-workspace
    
    - name: deploy
      taskRef:
        name: kubernetes-actions
        kind: ClusterTask
      runAfter:
        - update-manifest
      params:
        - name: script
          value: |
            kubectl apply -k overlays/$(params.deployment-namespace)
            kubectl rollout status deployment/${APP_NAME} -n $(params.deployment-namespace)
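
The pipeline can be kicked off with the tkn CLI. A sketch; pipeline-pvc and registry-credentials are assumed names for the workspace bindings:

tkn pipeline start build-and-deploy \
  -n ${NAMESPACE} \
  --param git-url=${GIT_REPO_URL} \
  --param git-revision=main \
  --param image-name=${REGISTRY}/${IMAGE_NAME} \
  --param deployment-namespace=${NAMESPACE} \
  --workspace name=shared-workspace,claimName=pipeline-pvc \
  --workspace name=docker-credentials,secret=registry-credentials \
  --showlog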

Progressive Delivery

Argo Rollouts - Canary

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: ${APP_NAME}
  namespace: ${NAMESPACE}
spec:
  replicas: 5
  selector:
    matchLabels:
      app: ${APP_NAME}
  template:
    metadata:
      labels:
        app: ${APP_NAME}
    spec:
      containers:
        - name: ${APP_NAME}
          image: ${IMAGE}:${TAG}
          ports:
            - containerPort: 8080
  strategy:
    canary:
      canaryService: ${APP_NAME}-canary
      stableService: ${APP_NAME}-stable
      trafficRouting:
        nginx:
          stableIngress: ${APP_NAME}-ingress
      steps:
        - setWeight: 5
        - pause: {duration: 2m}
        - setWeight: 20
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 80
        - pause: {duration: 5m}
      analysis:
        templates:
          - templateName: success-rate
        startingStep: 2
        args:
          - name: service-name
            value: ${APP_NAME}-canary
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m
      successCondition: result[0] >= 0.95
      failureLimit: 3
      provider:
        prometheus:
          address: http://prometheus:9090
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",status=~"2.."}[5m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
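
The rollout can be observed and steered with the kubectl argo rollouts plugin:

# Watch canary weights and analysis runs
kubectl argo rollouts get rollout ${APP_NAME} -n ${NAMESPACE} --watch

# Skip the current pause step, or abort back to stable
kubectl argo rollouts promote ${APP_NAME} -n ${NAMESPACE}
kubectl argo rollouts abort ${APP_NAME} -n ${NAMESPACE}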

Argo Rollouts - Blue-Green

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: ${APP_NAME}
  namespace: ${NAMESPACE}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ${APP_NAME}
  template:
    metadata:
      labels:
        app: ${APP_NAME}
    spec:
      containers:
        - name: ${APP_NAME}
          image: ${IMAGE}:${TAG}
  strategy:
    blueGreen:
      activeService: ${APP_NAME}-active
      previewService: ${APP_NAME}-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 30
      prePromotionAnalysis:
        templates:
          - templateName: smoke-test
      postPromotionAnalysis:
        templates:
          - templateName: success-rate
        args:
          - name: service-name
            value: ${APP_NAME}-active
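
Because autoPromotionEnabled is false, the cutover from preview to active is a manual step:

# Promote the preview ReplicaSet to the active service
kubectl argo rollouts promote ${APP_NAME} -n ${NAMESPACE}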

Policy as Code

Kyverno Policies

# Require resource limits
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Resource limits are required"
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
---
# Add default labels
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-labels
spec:
  rules:
    - name: add-labels
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              managed-by: cluster-code
---
# Require non-root
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce
  rules:
    - name: run-as-non-root
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Containers must run as non-root"
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              - securityContext:
                  allowPrivilegeEscalation: false
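
A quick way to confirm the policies are enforcing is a server-side dry-run, which passes through admission. This pod declares no limits and no securityContext, so it should be rejected:

kubectl run limits-test --image=nginx --dry-run=server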

OPA Gatekeeper

# Constraint Template
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }
---
# Constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-app-labels
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels:
      - "app.kubernetes.io/name"
      - "app.kubernetes.io/managed-by"