feat(k8s): add kubernetes deployment manifests and documentation #531
b-mackenzie-alexander wants to merge 3 commits into plastic-labs:main
Conversation
Adds /k8s with Kustomize-based manifests for all Honcho services, addressing the k8s deployment gap documented in plastic-labs#514:

- postgres: StatefulSet (pgvector/pgvector:pg15, 10 Gi PVC) with headless Service, differentiated readiness (exec pg_isready) and liveness (tcpSocket) probes
- redis: StatefulSet (redis:8.2, 2 Gi PVC) with headless Service, differentiated probes
- honcho-api: Deployment (HPA-managed replicas), ClusterIP Service, HPA (1–5 replicas at 70% CPU), PDB (maxUnavailable: 1); httpGet readiness + tcpSocket liveness
- honcho-deriver: Deployment with no HTTP probe — the deriver is a queue worker; health managed by the Deployment restart policy

Security posture:

- NetworkPolicies: default-deny ingress + explicit allow rules; postgres and redis reachable only from API and deriver pods
- Pod security: runAsNonRoot, readOnlyRootFilesystem, allowPrivilegeEscalation: false, capabilities.drop: ALL, seccompProfile: RuntimeDefault, automountServiceAccountToken: false
- Init containers (busybox wait-for-deps) run as UID/GID 65534
- Secrets (DB URI, JWT secret, LLM keys) in a gitignored Secret; non-sensitive config in a ConfigMap

README covers prerequisites, per-distro image loading (kind, k3s, Docker Desktop), secrets setup, deploy, verify, useful commands, autoscaling, networking, and production notes.

References plastic-labs#514

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
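As a rough sketch, the pod-security settings listed above combine on a workload roughly like this. This is not the exact manifest from the PR; all field names are standard Kubernetes API fields, but the container name and image tag are assumptions:

```yaml
# Sketch of the hardened pod template described above (not the PR's
# exact manifest). Field names are standard Kubernetes API fields.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: honcho-api
spec:
  template:
    spec:
      automountServiceAccountToken: false   # pods never call the k8s API
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: api                         # name is an assumption
          image: honcho:latest
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
```

The pod-level fields (runAsNonRoot, seccompProfile, automountServiceAccountToken) apply to every container in the pod, while the per-container securityContext tightens each container individually.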
Adds tests/test_k8s_manifests.py — a pytest suite that renders the
Kustomize output with kubectl kustomize and asserts structural
correctness, security posture, and architectural invariants without
requiring a running cluster.
Run with:
pytest tests/test_k8s_manifests.py --noconftest -p no:xdist \
--override-ini="addopts=" -v
31 tests covering: namespace isolation, all 4 NetworkPolicies with
exact source/port assertions, headless Services, HPA exact contract
(minReplicas=1, maxReplicas=5, CPU=70%), PDB maxUnavailable semantics,
deriver has no HTTP probes, API uses differentiated probe types,
pod/container security contexts, init containers run as nobody,
no service account tokens, ConfigMap contains no secret keys, DB URI
sourced from Secret, and all containers have resource limits.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
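The HPA contract the tests pin down (minReplicas=1, maxReplicas=5, 70% CPU) corresponds to an autoscaling/v2 resource along these lines. This is a sketch, not the repository's manifest; the resource and target names are assumptions:

```yaml
# Sketch of the HPA contract asserted by the tests; names are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: honcho-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: honcho-api
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```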
Walkthrough

This PR introduces complete Kubernetes infrastructure for Honcho deployment: namespace, Kustomize entrypoint, ConfigMap/Secret templates, NetworkPolicies, API and deriver Deployments, Postgres/Redis StatefulSets, autoscaling/PDB, manifest tests, and supporting docs and examples.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Kubectl as Kubectl/kustomize
    participant Kustomize as Kustomize (render)
    participant API as honcho-api Pod
    participant Deriver as honcho-deriver Pod
    participant Postgres as Postgres StatefulSet
    participant Redis as Redis StatefulSet
    Kubectl->>Kustomize: render k8s/
    Kustomize-->>Kubectl: YAML manifests
    Kubectl->>API: apply Deployment
    API->>Postgres: initContainer polls TCP 5432
    Postgres-->>API: TCP ACK
    API->>Redis: initContainer polls TCP 6379
    Redis-->>API: TCP ACK
    API->>API: start main container (probes configured)
    API-->>Kubectl: Pod Ready
```

```mermaid
sequenceDiagram
    participant Client as External Client
    participant NP as NetworkPolicy
    participant API as honcho-api
    participant Deriver as honcho-deriver
    participant PG as postgres
    participant RS as redis
    Client->>NP: Request -> API:8000
    NP->>API: allow-api-ingress (TCP 8000)
    API-->>Client: Respond
    API->>NP: Request -> PG:5432
    NP->>PG: allow-postgres-from-honcho (from API/Deriver)
    PG-->>API: Respond
    Deriver->>NP: Request -> RS:6379
    NP->>RS: allow-redis-from-honcho (from API/Deriver)
    RS-->>Deriver: Respond
    Client->>NP: Direct Request -> PG:5432
    NP-->>Client: Deny (default-deny-ingress)
```
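A minimal sketch of the deny/allow pair the second diagram implies. The policy names come from the diagram; the pod label selectors are assumptions and may differ from the actual manifests:

```yaml
# Hypothetical sketch of the default-deny plus one allow rule shown above.
# The app labels (app: honcho-api, etc.) are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes: [Ingress]   # no ingress rules listed, so all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-postgres-from-honcho
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: honcho-api
        - podSelector:
            matchLabels:
              app: honcho-deriver
      ports:
        - protocol: TCP
          port: 5432
```

NetworkPolicies are additive: the allow rule punches a hole through the default deny only for traffic matching both the source selectors and the port.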
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 9
🧹 Nitpick comments (3)
k8s/deriver/deployment.yaml (1)
82-82: Prefer an immutable image reference over `latest`.

Line 82 uses a mutable tag, which can produce non-deterministic rollouts.
Suggested fix

```diff
- image: honcho:latest
+ image: honcho:<immutable-tag-or-digest>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s/deriver/deployment.yaml` at line 82, Replace the mutable image tag "honcho:latest" in the Deployment spec's container image reference with an immutable reference (a specific version tag or an image digest) so deployments are deterministic; update the container image field (where image: honcho:latest is set) to use either honcho:<semver> or honcho@sha256:<digest> and ensure any CI/image-build output is synchronized to provide that exact tag/digest when applied.

tests/test_k8s_manifests.py (1)
511-519: Extend the resource assertions to cover `ephemeral-storage`.

The manifests in this PR set `ephemeral-storage` requests/limits on workloads, but this helper only guards CPU and memory. A regression there would currently pass the suite.

♻️ Proposed fix

```diff
 def _assert_has_resource_limits(container: dict[str, Any], label: str):
     resources = container.get("resources", {})
     assert "limits" in resources, f"{label}: missing resource limits"
     assert "requests" in resources, f"{label}: missing resource requests"
     for kind in ("limits", "requests"):
-        for field in ("memory", "cpu"):
+        for field in ("memory", "cpu", "ephemeral-storage"):
             assert field in resources[kind], (
                 f"{label}: missing resources.{kind}.{field}"
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_k8s_manifests.py` around lines 511 - 519, The helper _assert_has_resource_limits currently checks only "memory" and "cpu"; update it to also require "ephemeral-storage" in both resources["limits"] and resources["requests"]. In the function _assert_has_resource_limits(container: dict[str, Any], label: str) add "ephemeral-storage" to the tuple of fields iterated (or otherwise assert its presence for both "limits" and "requests") so the test fails if ephemeral-storage is missing from either limits or requests.

k8s/api/deployment.yaml (1)
95-101: Pin the API image to an immutable tag or digest.

`honcho:latest` with `IfNotPresent` makes rollouts non-reproducible and can leave nodes on stale bits after the same tag is rebuilt. Prefer a versioned tag or digest here, then override it via Kustomize for local workflows if needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s/api/deployment.yaml` around lines 95 - 101, The deployment uses a mutable image tag "honcho:latest" with imagePullPolicy IfNotPresent; update the container image field in the Deployment to a fixed, immutable identifier (e.g. a versioned tag like honcho:v2.1.1 or an image digest like honcho@sha256:...) and retain or set imagePullPolicy appropriately (IfNotPresent is fine for immutable tags); if you need local overrides, handle environment-specific image substitution via Kustomize overlays or tooling rather than leaving the manifest pinned to :latest.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@k8s/api/deployment.yaml`:
- Around line 208-214: The securityContext currently sets runAsNonRoot: true but
the image switches to a named user ("app") created with adduser --system, which
can cause pod rejection; either set an explicit numeric UID via runAsUser in the
deployment's securityContext (e.g., runAsUser: 1001) to match the image, or
change the Dockerfile to create and switch to a numeric user (use adduser --uid
1001 ... and USER 1001) so runAsNonRoot is reliable; update references to
securityContext, runAsNonRoot, runAsUser, and the Dockerfile USER/adduser lines
accordingly.
In `@k8s/api/hpa.yaml`:
- Around line 4-6: The two lines about k3s in the metrics-server comment
conflict; edit the comment block in k8s/api/hpa.yaml to keep one authoritative
statement (preferably "k3s: included by default") and remove the conflicting
"apply metrics-server" instruction so operators only see a single clear
prerequisite for k3s; update the surrounding comment text accordingly to
preserve Docker Desktop and kind guidance.
In `@k8s/deriver/deployment.yaml`:
- Around line 29-33: The init container wait loops using "until nc -z postgres
5432; do" (and the second similar loop at lines 58-62) must be converted to
bounded waits that fail fast: add a timeout mechanism (e.g., a MAX_RETRIES
counter or use the timeout command) that increments each loop iteration and, if
exceeded, prints an error and exits non-zero so the pod fails instead of hanging
in Init; update both occurrences (the loop starting with "until nc -z postgres
5432; do" and the later identical wait loop) to implement this behavior and
return a non-zero exit when the timeout is reached.
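One way to implement the bounded wait this comment asks for, as a sketch. The 60 × 2 s budget matches the fix described later in this thread; the busybox image tag and retry values are assumptions:

```yaml
# Sketch of a bounded wait-for-postgres init container.
# 60 retries at 2 s each gives a 120 s ceiling; values are assumptions.
initContainers:
  - name: wait-for-postgres
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        i=0
        until nc -z postgres 5432; do
          i=$((i + 1))
          if [ "$i" -ge 60 ]; then
            echo "postgres unreachable after 120s, giving up" >&2
            exit 1   # fail the pod instead of hanging in Init
          fi
          sleep 2
        done
```

Exiting non-zero makes the init container (and so the pod) fail visibly, which surfaces the dependency problem instead of leaving the pod stuck in Init indefinitely.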
In `@k8s/postgres/statefulset.yaml`:
- Around line 19-21: The Postgres StatefulSet currently allows automatic
mounting of the service-account token; add automountServiceAccountToken: false
to the pod spec to disable it. Locate the StatefulSet pod template (under
spec.template.spec) that contains the containers list with name: postgres and
add the field automountServiceAccountToken: false at that same spec level so the
Postgres pods do not receive Kubernetes API credentials.
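The placement of that field, sketched. The image and container name come from the review comment above; the rest of the StatefulSet spec is elided:

```yaml
# Sketch: disabling the service-account token mount on the postgres pods.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  template:
    spec:
      automountServiceAccountToken: false   # postgres never calls the k8s API
      containers:
        - name: postgres
          image: pgvector/pgvector:pg15
```

The field sits at the same level as `containers` (under `spec.template.spec`), so it applies to every container in the pod.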
In `@k8s/README.md`:
- Line 140: The sentence "Kustomize applies all resources in dependency order."
overstates kubectl apply -k guarantees; change that line to state that Kustomize
assembles and applies resources but does not guarantee dependency ordering, and
suggest concrete alternatives such as using kubectl apply -k with --wait or
kubectl wait (or readiness probes/InitContainers/helm hooks) to ensure pods come
up in the desired order; update the text that follows "Watch pods come up:" to
reflect these recommendations.
In `@k8s/redis/statefulset.yaml`:
- Around line 19-23: The Pod spec for the Redis StatefulSet currently leaves
automounting the service account token enabled; update the Pod spec (under
spec.template.spec) to explicitly disable it by setting
automountServiceAccountToken to false so the Redis pod (container name: redis,
image: redis:8.2) does not mount API credentials. Locate the StatefulSet pod
template (spec.template.spec) that contains the containers array and add the
automountServiceAccountToken: false field at that same spec level to ensure
tokens are not mounted.
In `@k8s/secrets.yaml.example`:
- Around line 25-27: The comment incorrectly says the hostname 'postgres'
resolves to a ClusterIP Service while the manifest actually creates a headless
Service (clusterIP: None); update the comment above DB_CONNECTION_URI to reflect
that 'postgres' resolves via the headless Service DNS (direct pod endpoints) or
simply state "resolves to the headless Service 'postgres' (clusterIP: None)" so
it correctly matches the Service configuration and expected DNS behavior.
- Around line 35-39: The documentation is inconsistent: optional secret keys
LLM_ANTHROPIC_API_KEY, LLM_GEMINI_API_KEY, and LLM_OPENAI_API_KEY are shown as
empty strings (which still inject empty env vars) while the text says "unset
keys will simply not be passed." Update k8s/secrets.yaml.example to either
remove/comment out those key lines (or use YAML null) so unset keys are truly
omitted, and change the explanatory sentence to instruct users to remove or
comment the LLM_ANTHROPIC_API_KEY / LLM_GEMINI_API_KEY / LLM_OPENAI_API_KEY
entries to avoid injecting empty environment variables.
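The two secrets.yaml.example fixes above amount to correcting the DNS comment and commenting out the optional keys rather than leaving them empty. A sketch, with the key names taken from the review and the surrounding Secret structure assumed:

```yaml
# Sketch of the corrected secrets.yaml.example fragment; structure assumed.
apiVersion: v1
kind: Secret
metadata:
  name: honcho-secrets
stringData:
  # 'postgres' resolves via the headless Service (clusterIP: None),
  # i.e. DNS returns the pod endpoints directly.
  DB_CONNECTION_URI: "postgresql://..."   # placeholder value
  # Optional LLM keys: leave these commented out (rather than set to "")
  # so unset keys are truly omitted instead of injected as empty env vars.
  # LLM_ANTHROPIC_API_KEY: ""
  # LLM_GEMINI_API_KEY: ""
  # LLM_OPENAI_API_KEY: ""
```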
In `@tests/test_k8s_manifests.py`:
- Around line 27-37: The subprocess.run call that invokes kubectl kustomize
should include a timeout and explicitly handle subprocess.TimeoutExpired; update
the call in tests/test_k8s_manifests.py (the code around subprocess.run(...)
that assigns result) to pass a sensible timeout (e.g., timeout=30) and add an
except subprocess.TimeoutExpired block that fails the test with
pytest.fail("kubectl kustomize timed out") (or similar), while still keeping the
existing FileNotFoundError handling and the final assert on result.returncode.
---
Nitpick comments:
In `@k8s/api/deployment.yaml`:
- Around line 95-101: The deployment uses an mutable image tag "honcho:latest"
with imagePullPolicy IfNotPresent; update the container image field in the
Deployment to a fixed, immutable identifier (e.g. a versioned tag like
honcho:v2.1.1 or an image digest like honcho@sha256:...) and retain or set
imagePullPolicy appropriately (IfNotPresent is fine for immutable tags); if you
need local overrides, handle environment-specific image substitution via
Kustomize overlays or tooling rather than leaving the manifest pinned to
:latest.
In `@k8s/deriver/deployment.yaml`:
- Line 82: Replace the mutable image tag "honcho:latest" in the Deployment
spec's container image reference with an immutable reference (a specific version
tag or an image digest) so deployments are deterministic; update the container
image field (where image: honcho:latest is set) to use either honcho:<semver> or
honcho@sha256:<digest> and ensure any CI/image-build output is synchronized to
provide that exact tag/digest when applied.
In `@tests/test_k8s_manifests.py`:
- Around line 511-519: The helper _assert_has_resource_limits currently checks
only "memory" and "cpu"; update it to also require "ephemeral-storage" in both
resources["limits"] and resources["requests"]. In the function
_assert_has_resource_limits(container: dict[str, Any], label: str) add
"ephemeral-storage" to the tuple of fields iterated (or otherwise assert its
presence for both "limits" and "requests") so the test fails if
ephemeral-storage is missing from either limits or requests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4cf8d485-f70d-43f1-b5ac-d954c5a51838
📒 Files selected for processing (18)
- .gitignore
- k8s/README.md
- k8s/api/deployment.yaml
- k8s/api/hpa.yaml
- k8s/api/pdb.yaml
- k8s/api/service.yaml
- k8s/configmap.yaml
- k8s/deriver/deployment.yaml
- k8s/kustomization.yaml
- k8s/namespace.yaml
- k8s/network-policies.yaml
- k8s/postgres/configmap.yaml
- k8s/postgres/service.yaml
- k8s/postgres/statefulset.yaml
- k8s/redis/service.yaml
- k8s/redis/statefulset.yaml
- k8s/secrets.yaml.example
- tests/test_k8s_manifests.py
```yaml
securityContext:
  # The Dockerfile creates a non-root 'app' user via adduser --system
  # (UID in the 100–999 range). runAsNonRoot enforces this without
  # hardcoding the UID, which varies by image build.
  # Note: kube-score recommends UID > 10000; achieving that requires
  # a Dockerfile change (adduser --uid 10001 app).
  runAsNonRoot: true
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
fd -i 'dockerfile' | while read -r f; do
  echo "== $f =="
  nl -ba "$f" | rg -n '\b(FROM|USER|adduser|useradd)\b'
  echo
done
```

Repository: plastic-labs/honcho
Length of output: 137

🏁 Script executed:

```bash
fd -i 'dockerfile' --type f
```

Repository: plastic-labs/honcho
Length of output: 73

🏁 Script executed:

```bash
cat -n k8s/api/deployment.yaml | sed -n '208,214p'
```

Repository: plastic-labs/honcho
Length of output: 529

🏁 Script executed:

```bash
# Find Dockerfile and extract USER/adduser lines more carefully
find . -iname 'dockerfile' -o -iname 'dockerfile.*' | head -20
```

Repository: plastic-labs/honcho
Length of output: 75

🏁 Script executed:

```bash
cat -n Dockerfile
```

Repository: plastic-labs/honcho
Length of output: 2368
Either set runAsUser in the security context or change the Dockerfile to use a numeric USER.
The Dockerfile creates the app user via adduser --system (line 39) and switches to it with USER app (line 50). However, runAsNonRoot: true is only guaranteed safe if the image's final USER is numeric. Non-numeric user names can cause Kubernetes to reject the pod in certain configurations. Set an explicit runAsUser here, or update the Dockerfile to use USER 1001 (or another numeric UID > 1000) instead of USER app.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@k8s/api/deployment.yaml` around lines 208 - 214, The securityContext
currently sets runAsNonRoot: true but the image switches to a named user ("app")
created with adduser --system, which can cause pod rejection; either set an
explicit numeric UID via runAsUser in the deployment's securityContext (e.g.,
runAsUser: 1001) to match the image, or change the Dockerfile to create and
switch to a numeric user (use adduser --uid 1001 ... and USER 1001) so
runAsNonRoot is reliable; update references to securityContext, runAsNonRoot,
runAsUser, and the Dockerfile USER/adduser lines accordingly.
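The first alternative the review describes, sketched. The UID 10001 is an example value, not one taken from the repository:

```yaml
# Option 1: pin the UID in the manifest to match the image's user.
# 10001 is an example; it must match the UID the image actually runs as.
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
```

Option 2 changes the Dockerfile instead: create the user with an explicit UID (`adduser --uid 10001 app`) and switch with a numeric `USER 10001`, so the kubelet can verify `runAsNonRoot` without a `runAsUser` override.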
Manifests:

- hpa.yaml: remove conflicting k3s comment; k3s bundles metrics-server by default, only kind needs a manual install
- api/deployment.yaml, deriver/deployment.yaml: replace unbounded init-container wait loops with bounded retries (60 × 2 s = 120 s max); exit non-zero if the dependency is still unreachable so the pod fails fast instead of hanging in Init indefinitely
- postgres/statefulset.yaml, redis/statefulset.yaml: add automountServiceAccountToken: false — these pods don't call the k8s API
- secrets.yaml.example: correct "ClusterIP Service" → "headless Service (clusterIP: None)"; comment out empty LLM key lines so omitted keys are truly absent rather than injected as empty strings

README:

- Correct the claim that "Kustomize applies resources in dependency order" — Kustomize does not guarantee ordering; startup ordering is handled by the init containers in the API and Deriver Deployments

Tests:

- manifests fixture: add timeout=30 to subprocess.run; catch subprocess.TimeoutExpired with pytest.fail
- _assert_has_resource_limits: add ephemeral-storage to the checked fields (all containers already set it; the test now enforces it)
- test_deployments_no_service_account_token: renamed to test_all_workloads_no_service_account_token and extended to cover postgres and redis StatefulSets

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (2)
tests/test_k8s_manifests.py (1)
45-56: Consider returning `None` instead of asserting in helper function.

`_by_kind_name` uses `assert` to enforce exactly one match, which raises `AssertionError` if zero or multiple matches are found. While this works for the current tests, returning `None` for zero matches would allow tests to use more descriptive assertions with custom messages.

However, this is a minor style preference; the current implementation is functionally correct and the assertion messages are clear.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_k8s_manifests.py` around lines 45 - 56, The helper _by_kind_name currently asserts exactly one match; change it to return None when no matches are found (so callers/tests can make explicit assertions/messages) and keep a clear error for multiple matches (raise a ValueError or AssertionError with the existing message like f"Expected exactly 1 {kind}/{name}, got {len(matches)}"). Update the function _by_kind_name to check len(matches) and return matches[0] for one match, return None for zero, and raise for >1 so callers can handle the None case and write descriptive assertions.

k8s/api/deployment.yaml (1)
220-233: Documented trade-off: consider setting an explicit `runAsUser` for stricter enforcement.

The inline comments (lines 221-225) document that `runAsNonRoot: true` is used without hardcoding a UID because the Dockerfile creates the `app` user with `adduser --system` (UID in the 100-999 range). This works but has caveats:

- Some admission controllers may reject pods where the image's final `USER` is non-numeric
- kube-score recommends UID > 10000 to avoid conflicts with host system users

For stricter compliance, consider updating the Dockerfile to use a fixed UID (e.g., `adduser --uid 10001 app`) and then setting `runAsUser: 10001` here.

The current approach is functional and the trade-off is documented, so this is informational rather than blocking.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k8s/api/deployment.yaml` around lines 220 - 233, Update the image and manifest to enforce a fixed non-root UID: modify the Dockerfile to create the app user with a deterministic high UID (e.g., use adduser --uid 10001 app) and then set securityContext.runAsUser: 10001 in the Deployment (keep runAsNonRoot: true as an extra guard); this ensures admission controllers that require numeric USERs and kube-score recommendations (UID > 10000) are satisfied while retaining the existing allowPrivilegeEscalation/readOnlyRootFilesystem/capabilities settings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@k8s/api/deployment.yaml`:
- Around line 220-233: Update the image and manifest to enforce a fixed non-root
UID: modify the Dockerfile to create the app user with a deterministic high UID
(e.g., use adduser --uid 10001 app) and then set securityContext.runAsUser:
10001 in the Deployment (keep runAsNonRoot: true as an extra guard); this
ensures admission controllers that require numeric USERs and kube-score
recommendations (UID > 10000) are satisfied while retaining the existing
allowPrivilegeEscalation/readOnlyRootFilesystem/capabilities settings.
In `@tests/test_k8s_manifests.py`:
- Around line 45-56: The helper _by_kind_name currently asserts exactly one
match; change it to return None when no matches are found (so callers/tests can
make explicit assertions/messages) and keep a clear error for multiple matches
(raise a ValueError or AssertionError with the existing message like f"Expected
exactly 1 {kind}/{name}, got {len(matches)}"). Update the function _by_kind_name
to check len(matches) and return matches[0] for one match, return None for zero,
and raise for >1 so callers can handle the None case and write descriptive
assertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b0a5abcb-4d4b-4e89-b727-4ea4284b9a73
📒 Files selected for processing (8)
- k8s/README.md
- k8s/api/deployment.yaml
- k8s/api/hpa.yaml
- k8s/deriver/deployment.yaml
- k8s/postgres/statefulset.yaml
- k8s/redis/statefulset.yaml
- k8s/secrets.yaml.example
- tests/test_k8s_manifests.py
✅ Files skipped from review due to trivial changes (2)
- k8s/api/hpa.yaml
- k8s/secrets.yaml.example
🚧 Files skipped from review as they are similar to previous changes (1)
- k8s/deriver/deployment.yaml
Summary

- `/k8s` directory with Kustomize-based manifests for deploying all Honcho services on Kubernetes
- `tests/test_k8s_manifests.py` — a self-contained pytest suite that validates the manifests without a running cluster

Motivation

Honcho ships a `docker-compose.yml.example` but has no supported path for teams already running Kubernetes infrastructure. This is the gap tracked in #514.

What's included

Services

- `postgres` — StatefulSet (pgvector/pgvector:pg15, 10 Gi PVC) + headless Service; differentiated readiness (exec `pg_isready`) and liveness (tcpSocket) probes
- `redis` — StatefulSet (redis:8.2, 2 Gi PVC) + headless Service; differentiated probes
- `honcho-api` — Deployment (no static replicas — HPA-managed), ClusterIP Service, HPA (1–5 replicas at 70% CPU), PDB (maxUnavailable: 1); `httpGet` readiness + `tcpSocket` liveness to avoid the identical-probe anti-pattern
- `honcho-deriver` — Deployment with no HTTP probe — the deriver is a queue worker, not an HTTP server; health is managed by the Deployment restart policy

Security

- `runAsNonRoot`, `readOnlyRootFilesystem`, `allowPrivilegeEscalation: false`, `capabilities.drop: ALL`, `seccompProfile: RuntimeDefault`, `automountServiceAccountToken: false`
- Secrets (`DB_CONNECTION_URI`, `AUTH_JWT_SECRET`, LLM keys) in a gitignored `Secret`; non-sensitive config in a `ConfigMap`

Documentation

- `k8s/README.md` — prerequisites, per-distro image loading (kind, k3s, Docker Desktop), secrets setup, deploy, verify, useful commands, autoscaling, networking, and production notes (managed DB/cache, external secrets, Ingress, TLS, observability)

Test plan

Manifest unit tests (no cluster required — requires `kubectl` on PATH):

```
pytest tests/test_k8s_manifests.py --noconftest -p no:xdist --override-ini="addopts=" -v
```

31 tests: namespace isolation, NetworkPolicy exact source/port assertions, headless Services, HPA contract, PDB semantics, deriver probe absence, API probe differentiation, security contexts, init container hardening, secret/ConfigMap split, and resource limits.

Schema validation (requires `kubeconform`):

```
kubectl kustomize k8s/ | kubeconform -strict -summary
```

Security scoring (requires `kube-score`):

```
kubectl kustomize k8s/ | kube-score score -
```

End-to-end (requires a local k8s cluster):

1. `docker build -t honcho:latest .`
2. `cp k8s/secrets.yaml.example k8s/secrets.yaml` and fill in values
3. `kubectl apply -f k8s/namespace.yaml && kubectl apply -f k8s/secrets.yaml && kubectl apply -k k8s/`
4. `kubectl get pods -n honcho --watch` — all pods reach `Running`
5. `kubectl port-forward svc/honcho-api 8000:80 -n honcho` + `curl http://localhost:8000/openapi.json`
6. `kubectl logs deploy/honcho-deriver -n honcho` — queue worker starts, no health probe errors

References #514
🤖 Generated with Claude Code