Letting Claude Deploy Animal Wiki to a Multi-CSP K8s Architecture

Repo: AnimalWiki - GitOps
An animal taxonomy explorer has been on the back burner of my projects list for a while; my son was the inspiration for it. I have also recently wanted to get experience with Kubernetes in a multi-CSP architecture, so let's kill two birds with one stone.

A Note

With projects nowadays (2026 A.D.), deciding whether to use LLMs comes down to whether I'm more interested in the realized idea itself or in the skill the project exercises. An NFA-to-DFA converter in C is something I should write manually (hopefully I will add an entry on the website for that project soon). For something like AnimalWiki, I'm more interested in the realization and refinement of the idea generally, which makes it a perfect case for using an LLM. However, I don't just want an LLM to help me write the application. I want to go a step further and let it steer application development, automating the deployment so it can push live. Claude, take the wheel.

Overview: The Application

AnimalWiki is an application that lets a user navigate animal taxonomy as a visual graph. Informative descriptions and Q&A are the first priorities, to encourage an educational environment. The stack is an Express.js frontend talking to a Python backend, which queries a Postgres database for taxonomy information.
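The repo itself isn't shown here, but the core of a taxonomy graph backend is turning flat parent/child rows into a tree. A minimal sketch, assuming a self-referential `taxa` table with `(id, parent_id, name)` columns (the schema is my guess, not the project's actual one):

```python
from collections import defaultdict

def build_taxonomy_tree(rows, root_id):
    """Build a nested dict from flat (id, parent_id, name) rows,
    the shape a SELECT over a self-referential taxa table returns."""
    children = defaultdict(list)
    names = {}
    for node_id, parent_id, name in rows:
        names[node_id] = name
        children[parent_id].append(node_id)

    def subtree(node_id):
        # Recursively attach each node's children; leaves get an empty list.
        return {
            "name": names[node_id],
            "children": [subtree(c) for c in children[node_id]],
        }

    return subtree(root_id)

rows = [
    (1, None, "Animalia"),
    (2, 1, "Chordata"),
    (3, 2, "Mammalia"),
    (4, 1, "Arthropoda"),
]
tree = build_taxonomy_tree(rows, 1)
```

The frontend can then render the nested structure directly as a collapsible graph.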

Overview: The Infrastructure

(STAGE 1: AZURE, STAGE 2: AWS, STAGE 3: GCP)
The setup is two GitHub repos: an application monorepo (frontend and backend) and a GitOps repo. GitHub Actions coordinate application test, build, and push to GitHub's Container Registry (GHCR). From there, Flux's image update automation detects new image tags and commits updated manifests to the GitOps repo, triggering Flux to reconcile. Yes, I know this is overkill.
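For reference, the image-update side of that flow is driven by three Flux objects. A sketch with guessed names and paths (and note that bare `sha-<sha>` tags aren't sortable on their own, so the policy below assumes a tag scheme with a sortable component, e.g. `sha-<run-number>-<sha>`):

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: animalwiki-frontend
  namespace: flux-system
spec:
  image: ghcr.io/<owner>/animalwiki-frontend
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: animalwiki-frontend
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: animalwiki-frontend
  filterTags:
    pattern: '^sha-(?P<run>[0-9]+)-.+$'
    extract: '$run'
  policy:
    numerical:
      order: asc
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageUpdateAutomation
metadata:
  name: animalwiki
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: animalwiki-gitops
  git:
    commit:
      author:
        name: fluxcdbot
        email: fluxcd@users.noreply.github.com
    push:
      branch: main
  update:
    path: ./apps
    strategy: Setters
```

The `ImageUpdateAutomation` is what actually writes the new tag back into the GitOps repo, closing the loop between CI and the cluster.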

STEP #1: Set up CI

First, I set up the GitHub repos and created a GitHub Actions workflow that runs whenever there is a push to main. It first runs a job, detect-changes, that detects which folders in the monorepo changed (frontend and/or backend). Based on that, it runs the build-frontend and build-backend jobs accordingly.

          jobs:
            detect-changes:
                ...
            build-frontend:
                needs: detect-changes
                if: needs.detect-changes.outputs.frontend == 'true'
                runs-on: ubuntu-latest
                steps:
                    - uses: actions/checkout@v4

                    - uses: docker/setup-buildx-action@v3

                    - name: Login to GHCR
                      uses: docker/login-action@v3
                      with:
                        registry: ghcr.io
                        username: ${{ github.actor }}
                        password: ${{ secrets.GITHUB_TOKEN }}

                    - name: Build Frontend image
                      uses: docker/build-push-action@v6
                      with:
                        context: ./frontend
                        file: ./frontend/Dockerfile
                        push: true
                        tags: |
                          ${{ env.FRONTEND_IMAGE }}:sha-${{ github.sha }}
                        cache-from: type=gha
                        cache-to: type=gha,mode=max
            build-backend:
                ...
        
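The detect-changes job is elided above. One common way to implement it is with the dorny/paths-filter action (an assumption on my part; the actual job may differ), which exposes per-path booleans as job outputs:

```yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    frontend: ${{ steps.filter.outputs.frontend }}
    backend: ${{ steps.filter.outputs.backend }}
  steps:
    - uses: actions/checkout@v4

    # Emits 'true'/'false' per filter based on which paths changed.
    - uses: dorny/paths-filter@v3
      id: filter
      with:
        filters: |
          frontend:
            - 'frontend/**'
          backend:
            - 'backend/**'
```

Downstream jobs then gate on these outputs with `if: needs.detect-changes.outputs.frontend == 'true'`, exactly as build-frontend does above.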

STEP #2: Create an AKS Cluster

My configuration for the AKS cluster was pretty minimal. I set the node pool to manual scaling with a node count of 2, using VMs in West US without availability zones. Next, I needed to install the microsoft.flux cluster extension.
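In Azure CLI terms, that setup looks roughly like this (resource group and cluster names are placeholders of mine, not the project's):

```shell
# Minimal AKS cluster: 2 manually scaled nodes, West US, no availability zones
az aks create \
  --resource-group animalwiki-rg \
  --name animalwiki-aks \
  --location westus \
  --node-count 2

# Install the Flux v2 cluster extension
az k8s-extension create \
  --resource-group animalwiki-rg \
  --cluster-name animalwiki-aks \
  --cluster-type managedClusters \
  --name flux \
  --extension-type microsoft.flux
```

The extension installs the Flux controllers into the cluster, after which Flux configurations can be attached either through Azure or with the flux CLI.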

What is Flux v2?

Flux is a continuous deployment tool that automatically reconciles cluster state against manifests stored in Git repos. This follows the GitOps approach to Infrastructure as Code (IaC): the idea is that we should manage our infrastructure from a version-controlled repository.
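At minimum, that reconciliation is defined by two objects: a GitRepository source and a Kustomization that applies a path from it. A sketch with assumed names and paths:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: animalwiki-gitops
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/<owner>/animalwiki-gitops
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: animalwiki
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: animalwiki-gitops
  path: ./apps
  prune: true  # delete cluster objects removed from the repo
```

Flux polls the repo on the source's interval and applies whatever is under `path`, so a commit to the GitOps repo is all it takes to change the cluster.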

NOTE: Azure had a lot of services to automate some of these things ("Automated AKS", "Automated Deployment", etc.), but I was going for max pain (learning).