Install CloudDefense.AI suite on a Kubernetes cluster

Modified on Thu, 12 Oct 2023 at 02:50 PM



Prerequisites

There are three main prerequisites for a production-grade cdefense installation on-premises:


  1. A managed Postgres instance (for AWS, an RDS db.r5.large)
     1. Enable automated backups
  2. A Kubernetes cluster (/examples/eks) with at least two node groups
     1. a node group for jobs
        - each node has { label: job } (see the labeling example after this list)
     2. a node group for everything else
        - (optional) each node has { label: cdefense }
  3. A cluster auto-scaler
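
If you label nodes by hand rather than at the node-group level, here is a minimal sketch with kubectl (the node names are placeholders):

kubectl label nodes <jobs-node-name> label=job
kubectl label nodes <other-node-name> label=cdefense

With managed node groups (for example on EKS), the same labels are usually set on the node group itself so that new nodes inherit them.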

Install kafka

  • add the bitnami helm repo
    helm repo add bitnami https://charts.bitnami.com/bitnami
  • (optional) create/edit values.yaml with a nodeSelector
    nodeSelector:
      label: external
  • install the kafka helm chart
    helm install kafka bitnami/kafka -f values.yaml
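
To confirm the brokers came up, one quick check (the label selector assumes the release name kafka used above and the chart's standard labels):

kubectl get pods -l app.kubernetes.io/instance=kafka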




Install cdefense


  • add the cdefense helm repo
    helm repo add cdefense https://clouddefenseai.github.io/charts/
  • update repos
    helm repo update
  • clone the repo
    git clone https://github.com/CloudDefenseAI/charts
  • create roles, role bindings and service accounts
    kubectl apply -f charts/cdefense/rbac
  • create secrets
    kubectl apply -f charts/cdefense/secrets
  • install the cdefense helm chart
    helm install cdefense cdefense/cdefense --debug
    or, if the release already exists,
    helm upgrade cdefense cdefense/cdefense --debug
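
Once the install or upgrade finishes, the release and its pods can be checked (pod names vary per deployment):

helm status cdefense
kubectl get pods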


Configure Social Authentication

To sign in with different identity providers (e.g. GitHub), create a client ID and secret for each provider you want to support.

GitHub

  • create a new OAuth App (GitHub → Settings → Developer settings → OAuth Apps)
  • set the Homepage URL to the base_url of your installation
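
As an illustration only, with a hypothetical installation served at https://cdefense.example.com the OAuth App fields would look like:

Homepage URL:               https://cdefense.example.com
Authorization callback URL: https://cdefense.example.com/<authservice-callback-path>

The callback path depends on how authservice is exposed, so it is left as a placeholder here.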


create secrets for authservice

  • create a secret for authservice (save it as authservice-secrets.yaml)
    apiVersion: v1
    kind: Secret
    metadata:
      name: authservice-secrets
    type: Opaque
    stringData:
      SENDGRID_KEY:
      GOOGLE_CLIENT_ID:
      GOOGLE_CLIENT_SECRET:
      GITHUB_CLIENT_ID:
      GITHUB_CLIENT_SECRET:
      GITLAB_APPLICATION_ID:
      GITLAB_APPLICATION_SECRET:
      BITBUCKET_KEY:
      BITBUCKET_SECRET:
      MICROSOFT_CLIENT_ID:
      MICROSOFT_CLIENT_SECRET:
  • apply the secret
    kubectl apply -f authservice-secrets.yaml
  • restart authservice pod
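    A minimal sketch, assuming the pod name follows the same <service>-<some-string> pattern used for the api pod later in this article:
    kubectl delete pod authservice-<some-string>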

How to change the location of logs

    • update values.yaml (then roll the change out as shown below)
      api:
        logs: 
          region: <REGION>
          bucket: <BUCKET>
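
One way to roll the change out, assuming the release was installed from the cdefense repo as described above:

helm upgrade cdefense cdefense/cdefense -f values.yaml --debug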

In case of a private bucket

    • Edit the scan-server-secrets.yaml file
        AWS_SCAN_S3_ACCESS_KEY: <AWS_SCAN_S3_ACCESS_KEY>
        AWS_SCAN_S3_SECRET_KEY: <AWS_SCAN_S3_SECRET_KEY>
      kubectl apply -f scan-server-secrets.yaml
    • or update secrets on cluster
        • encode values as base64 strings
      AWS_SCAN_S3_ACCESS_KEY=<AWS_ACCESS_KEY>
      BASE64_AWS_SCAN_S3_ACCESS_KEY=$(echo -n $AWS_SCAN_S3_ACCESS_KEY | base64)
      AWS_SCAN_S3_SECRET_KEY=<AWS_SECRET_KEY>
      BASE64_AWS_SCAN_S3_SECRET_KEY=$(echo -n $AWS_SCAN_S3_SECRET_KEY | base64)
        • edit scan-server-secrets
      kubectl edit secret scan-server-secrets
        AWS_SCAN_S3_ACCESS_KEY: <BASE64_AWS_SCAN_S3_ACCESS_KEY>
        AWS_SCAN_S3_SECRET_KEY: <BASE64_AWS_SCAN_S3_SECRET_KEY>
    • save and restart api pod
      kubectl delete pod api-<some-string>
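
To sanity-check that the new values landed, the secret can be decoded in place (key names as in the file above):

kubectl get secret scan-server-secrets -o jsonpath='{.data.AWS_SCAN_S3_ACCESS_KEY}' | base64 --decode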
 
