Add configurable labels to ServiceMonitor for Prometheus discovery (#97) #160
Open · hocmemini wants to merge 1 commit
Commit message:

…ivekit#97)

The chart's ServiceMonitor resource hardcoded its `metadata.labels` to the five chart-managed labels emitted by the `livekit-server.labels` helper. Prometheus operators that use a non-empty `serviceMonitorSelector`, notably kube-prometheus-stack, which defaults to `spec.serviceMonitorSelector.matchLabels.release=<release-name>`, could not discover the rendered ServiceMonitor because the operator-required selector label was missing and there was no way to inject one through the chart's values.

Add a `serviceMonitor.labels` field (an empty map by default) that is merged into `metadata.labels` alongside the chart-managed labels. The field is opt-in: the rendered ServiceMonitor is byte-identical to the prior chart for every operator who does not set it. With a value of `{ release: prometheus }` the ServiceMonitor matches the kube-prometheus-stack default selector and is discovered.

Implementation mirrors the existing `serviceMonitor.annotations` pattern: `{{- with .Values.serviceMonitor.labels }}{{- toYaml . | nindent 4 }}{{- end }}` appended after the chart-managed labels in the metadata block. Custom labels render after chart-managed labels, so a colliding key is shadowed by the user value (last-wins YAML map semantics, matching kube-prometheus-stack and Bitnami chart conventions). The egress and ingress charts ship no ServiceMonitor template and no `serviceMonitor` block in their values, so no parallel change is needed there.

Add helm-unittest coverage in `livekit-server/tests/` across two suites: `servicemonitor_labels_test.yaml` exercises the chart-default no-render case, default-labels enabled-render, single-label override, three-label override, the kube-prometheus-stack realistic config, and explicit empty override; `upgrade_compatibility_test.yaml` asserts byte-identical `metadata.labels` and unchanged spec for the default-labels case. 10 tests total, all passing under `helm unittest livekit-server/`.

Closes livekit#97
Make ServiceMonitor labels configurable for Prometheus discovery
Closes #97.
Summary
The chart's `ServiceMonitor` resource hardcoded its `metadata.labels` to the five chart-managed labels emitted by the `livekit-server.labels` helper. Prometheus operators that use a non-empty `serviceMonitorSelector` (notably kube-prometheus-stack, which defaults to `spec.serviceMonitorSelector.matchLabels.release: <release-name>`) could not discover the ServiceMonitor because the operator-required selector label was missing.
This PR adds a single `serviceMonitor.labels` field (an empty map by default) that is merged into `metadata.labels` alongside the chart-managed labels. The field is opt-in: operators who do not set it see byte-identical output to the prior chart.
Changes
- `livekit-server/values.yaml`: new `serviceMonitor.labels: {}` field with a comment documenting the kube-prometheus-stack `release: prometheus` example and pointing operators at their Prometheus `serviceMonitorSelector` configuration.
- `livekit-server/templates/servicemonitor.yaml`: added a `{{- with .Values.serviceMonitor.labels }}{{- toYaml . | nindent 4 }}{{- end }}` block immediately after the chart-managed labels, mirroring the existing `annotations` pattern lower in the same file.
- `livekit-server/tests/servicemonitor_labels_test.yaml` (new): helm-unittest suite covering five scenarios. Details in the Testing section below.
- `livekit-server/tests/upgrade_compatibility_test.yaml` (new): helm-unittest suite that asserts byte-identical default rendering.
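For reference, a sketch of how the modified metadata block in `servicemonitor.yaml` fits together. The `with`/`toYaml` block is quoted from this PR; the surrounding helper names (`livekit-server.fullname`, `livekit-server.labels`) and spacing are illustrative of the chart's conventions, not an exact copy of the file:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "livekit-server.fullname" . }}
  labels:
    # five chart-managed labels
    {{- include "livekit-server.labels" . | nindent 4 }}
    # new: user-supplied labels, rendered only when the map is non-empty
    {{- with .Values.serviceMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```

Because `with` skips an empty map entirely, the default `serviceMonitor.labels: {}` emits nothing, which is what keeps the default output byte-identical.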
Multi-chart consistency
Only `livekit-server` ships a ServiceMonitor template (`livekit-server/templates/servicemonitor.yaml`). The `egress` and `ingress` charts have no ServiceMonitor, no `serviceMonitor` block in their `values.yaml`, and no Prometheus-specific configuration at all (verified by `grep -rl 'ServiceMonitor\|serviceMonitor' egress/ ingress/` returning no matches). The fix is therefore applied only where it has a target.
Default behavior vs prior chart
| Configuration | Result |
| --- | --- |
| `serviceMonitor.create: false` (chart default) | No ServiceMonitor rendered (unchanged) |
| `serviceMonitor.create: true` and `serviceMonitor.labels: {}` (default) | Byte-identical to the prior chart |
| `serviceMonitor.create: true` with `labels.release: prometheus` | `release: prometheus` added (Prometheus selector matches) |

The diff between the prior chart and this branch with default values is empty, verified by `helm template ... | diff` and asserted in `tests/upgrade_compatibility_test.yaml`.

Backwards compatibility

Default rendering is unchanged, verified by the `helm template` comparison and locked in by a unit test.
Operators who already had `serviceMonitor.labels: {...}` set in their values (a no-op before this change because the template ignored it) will see those labels start applying after upgrade. This is configuration that was previously silently ignored now doing what the user clearly intended, but operators in this position should review their values.yaml to confirm the labels they wrote are still what they want applied.
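An operator running kube-prometheus-stack would opt in with values along these lines (the value `prometheus` here stands in for the operator's actual Prometheus Helm release name):

```yaml
serviceMonitor:
  create: true
  labels:
    release: prometheus
```

This renders the five chart-managed labels plus `release: prometheus`, which matches the kube-prometheus-stack default `serviceMonitorSelector`.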
Testing
Tests are written for the helm-unittest plugin (`helm plugin install https://github.com/helm-unittest/helm-unittest`) and run with `helm unittest livekit-server/`.

Scenarios covered
`tests/servicemonitor_labels_test.yaml` (7 tests):

- A1 asserts no ServiceMonitor renders at chart defaults (`create=false`); A2 confirms chart-managed labels are intact when the ServiceMonitor is enabled; A3 deep-equals the full `metadata.labels` map against the five chart-managed entries to guarantee no accidental leakage from the new code path.
- B1: `serviceMonitor.labels.release: prometheus` adds the label and preserves all five chart-managed labels.
- C1: custom labels render alongside the five chart-managed labels with no overwrites or losses; uses a fixture file (`tests/fixtures/labels_three.yaml`).
- D1: asserts the rendered ServiceMonitor would be discovered by a Prometheus instance whose `serviceMonitorSelector` matches `release: prometheus`, and that the spec block (apiVersion, endpoints, selector) is valid for scraping.
- E1: an explicit `serviceMonitor.labels: {}` produces output identical to the default case (no malformed/empty labels block); uses a fixture file (`tests/fixtures/labels_empty.yaml`).

`tests/upgrade_compatibility_test.yaml` (3 tests): asserts no ServiceMonitor renders at chart defaults, asserts deep-equality on `metadata.labels` for the enabled-with-default-labels case (catches any leakage), and asserts the spec block is unchanged. The suite header documents the manual reproduction recipe and the captured diff (empty).
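A sketch of what one of these helm-unittest cases looks like; the test name follows the scenario-prefix convention described below, but the exact assertions and paths here are illustrative rather than copied from the suite:

```yaml
suite: servicemonitor labels
templates:
  - templates/servicemonitor.yaml
tests:
  - it: "B1. adds release label and preserves chart-managed labels"
    set:
      serviceMonitor:
        create: true
        labels:
          release: prometheus
    asserts:
      # user-supplied label is merged in
      - equal:
          path: metadata.labels.release
          value: prometheus
      # a chart-managed label survives the merge
      - isNotEmpty:
          path: metadata.labels["app.kubernetes.io/name"]
```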
Running the tests
Test names are scenario-prefixed (`A1.`, `A2.`, `A3.`, `B1.`, `C1.`, `D1.`, `E1.`) so a failure line points at the exact scenario.

Coverage gaps (out of scope for this PR)
- No live-cluster install test of label discovery. The repo's existing `ct install` matrix runs `helm install` on microk8s but does not stand up a Prometheus instance and verify scrape target discovery. The unit test asserts the rendered ServiceMonitor would be selected by a `release: prometheus` selector, but does not deploy Prometheus to confirm.
- No CI step yet runs `helm unittest` in this repo. The current CI runs `ct lint` and `ct install` only. To wire these tests into CI, a maintainer would add a step after the existing "Set up Helm" step (CI uses Helm v3.11.2; Helm v4 users locally may need `--verify=false` on the plugin install). I have intentionally not modified the workflow file in this PR to keep the chart-change diff focused; happy to add the CI step in this PR or a follow-up if maintainers prefer.
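Such a CI addition could look roughly like the following GitHub Actions fragment; the step names are illustrative and a maintainer would likely pin the plugin version:

```yaml
# after the existing "Set up Helm" step
- name: Install helm-unittest plugin
  run: helm plugin install https://github.com/helm-unittest/helm-unittest
- name: Run chart unit tests
  run: helm unittest livekit-server/
```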
- No test for label-key collisions between user-supplied labels and chart-managed labels. With user labels rendered after chart-managed labels, a user setting (for example) `serviceMonitor.labels."app.kubernetes.io/name"` would shadow the chart-managed value. This matches the convention used by widely-deployed charts but is documented here rather than tested.
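To illustrate the shadowing described above, here is the shape of the rendered labels block for a hypothetical override of a chart-managed key; the duplicated key resolves to the later (user-supplied) value under the last-wins map semantics most YAML parsers apply:

```yaml
metadata:
  labels:
    app.kubernetes.io/name: livekit-server   # chart-managed value
    # ... other chart-managed labels ...
    app.kubernetes.io/name: my-override      # user value, wins on parse
```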
- No annotations changes. Issue #97 ("Adding label to discover service monitor for prometheus") is scoped to labels; `serviceMonitor.annotations` already exists and is unchanged.

Local verification

All 10 tests pass locally under `helm unittest livekit-server/`.
Path Considered But Not Taken
This PR adds a single `labels` field to the existing `serviceMonitor` block. An alternative approach considered was a more comprehensive metadata block (annotations, labels, name overrides) which would address potential future configuration needs preemptively. That approach was not taken because:

- Issue #97 is scoped to labels, not broader metadata configuration.
- A comprehensive block would expand the chart's API surface unnecessarily.
- Further fields can land in follow-up PRs with the same pattern. The existing `serviceMonitor.annotations` field is already in place; further metadata fields would slot in alongside without a structural change.

The current scope is the minimum change that resolves #97 cleanly.