A quickstart template for building Platform Mesh providers. This repo demonstrates how to create a provider that exposes APIs through kcp and integrates with the Platform Mesh UI.
This is an example "Wild West" provider that exposes a Cowboys API (wildwest.platform-mesh.io). It shows how to:
- Define and export APIs via kcp, using `APIExport` and `APIResourceSchema` resources
- Register as a Platform Mesh provider, using `ProviderMetadata` to describe your provider
- Configure UI integration, using `ContentConfiguration` to add navigation and views
Registers your provider with Platform Mesh, including display name, description, contacts, and icons:
```yaml
apiVersion: ui.platform-mesh.io/v1alpha1
kind: ProviderMetadata
metadata:
  name: wildwest.platform-mesh.io # Must match your APIExport name
spec:
  displayName: Wild West Provider
  description: ...
  contacts: [...]
  icon: {...}
```

Configures how your resources appear in the Platform Mesh UI. The key label links it to your APIExport:
```yaml
apiVersion: ui.platform-mesh.io/v1alpha1
kind: ContentConfiguration
metadata:
  labels:
    ui.platform-mesh.io/content-for: wildwest.platform-mesh.io # Links to your APIExport
  name: cowboys-ui
spec:
  inlineConfiguration:
    content: |
      { "luigiConfigFragment": { ... } }
```

The `ui.platform-mesh.io/content-for` label is critical: it associates your UI configuration with your APIExport.
```
├── cmd/
│   ├── init/            # Bootstrap CLI tool
│   ├── wild-west/       # Provider operator (consumer-workspace controller via APIExport)
│   └── armament-sync/   # Catalog syncer (provider-workspace controller, ticker-driven)
├── config/
│   ├── crds/            # CRDs (armaments CRD also installed in the provider workspace)
│   ├── kcp/             # kcp resources (APIExport, APIResourceSchema, CachedResource)
│   └── provider/        # Provider resources (ProviderMetadata, ContentConfiguration, RBAC)
├── operator/
│   ├── wild-west/       # Cowboy reconciler
│   └── armament-sync/   # Armament catalog reconciler
├── pkg/
│   ├── bootstrap/       # Bootstrap logic for applying resources
│   └── external/        # External-source client interface (+ static dev client)
└── portal/              # Custom UI microfrontend example (Angular + Luigi)
```
Important: Providers must live in a dedicated workspace type within a separate tree. This means platform administrators must configure providers using the admin kubeconfig. Regular user kubeconfigs will not have the necessary permissions to create provider workspaces. This is bound to change and improve in the future, but for now you must use the admin kubeconfig to set up your provider.
You need the admin kubeconfig to create and manage provider workspaces:
```sh
cp ../helm-charts/.secret/kcp/admin.kubeconfig kcp-admin.kubeconfig
export PM_KUBECONFIG="$(realpath kcp-admin.kubeconfig)"
kind export kubeconfig --name platform-mesh --kubeconfig compute.kubeconfig
export COMPUTE_KUBECONFIG="$(realpath compute.kubeconfig)"
```

Navigate to the root workspace and create the provider workspace structure:
```sh
# Navigate to root workspace and create provider workspace hierarchy
KUBECONFIG=$PM_KUBECONFIG kubectl ws use :
KUBECONFIG=$PM_KUBECONFIG kubectl ws create providers --type=root:providers --enter --ignore-existing
KUBECONFIG=$PM_KUBECONFIG kubectl ws create quickstart --type=root:provider --enter --ignore-existing
```

Build and run the bootstrap to register your provider:

```sh
KUBECONFIG=$PM_KUBECONFIG make init HOST_OVERRIDE=https://frontproxy-front-proxy.platform-mesh-system:8443
```

This applies all kcp and provider resources to register your provider, and creates a dedicated ServiceAccount and RBAC for the provider workspace.
Once this is done, you should be able to access your provider's APIs through the kcp API and see it registered in the Platform Mesh UI.
Extract the kubeconfig for your provider workspace:
```sh
KUBECONFIG=$PM_KUBECONFIG kubectl get secret wildwest-controller-kubeconfig -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > operator.kubeconfig
```

For local development, run the operator directly:

```sh
KUBECONFIG=./operator.kubeconfig go run ./cmd/wild-west --endpointslice=wildwest.platform-mesh.io
```

Build container images and load them into the kind cluster:
```sh
export IMAGE_TAG=platform-mesh
make images kind-load-all IMAGE_TAG=$IMAGE_TAG
```

Create the namespace and the kubeconfig secret for the operator:
```sh
KUBECONFIG=$COMPUTE_KUBECONFIG kubectl create namespace provider-cowboys
KUBECONFIG=$COMPUTE_KUBECONFIG kubectl delete secret wildwest-controller-kubeconfig -n provider-cowboys --ignore-not-found
KUBECONFIG=$COMPUTE_KUBECONFIG kubectl create secret generic wildwest-controller-kubeconfig \
  --from-file=kubeconfig=./operator.kubeconfig -n provider-cowboys
```

Deploy the controller:
```sh
KUBECONFIG=$COMPUTE_KUBECONFIG helm upgrade --install wildwest-controller ./deploy/helm/wildwest-controller \
  --namespace provider-cowboys \
  --set image.tag=$IMAGE_TAG \
  --set image.pullPolicy=IfNotPresent \
  --set common.defaults.hostAliases.enabled=true
```

Deploy the armament-sync controller. It runs in the provider workspace and syncs the catalog from an external source (currently a static hardcoded list) into Armament CRs, which are then exposed read-only to consumer workspaces via a CachedResource. It ships as its own image (`provider-quickstart-armament-sync`), built and loaded by `make images kind-load-all`:
```sh
KUBECONFIG=$COMPUTE_KUBECONFIG helm upgrade --install wildwest-armament-sync ./deploy/helm/wildwest-armament-sync \
  --namespace provider-cowboys \
  --set image.tag=$IMAGE_TAG \
  --set image.pullPolicy=IfNotPresent \
  --set common.defaults.hostAliases.enabled=true
```

Deploy the portal microfrontend:
```sh
KUBECONFIG=$COMPUTE_KUBECONFIG helm upgrade --install wildwest-portal ./deploy/helm/wildwest-portal \
  --namespace provider-cowboys \
  --set image.tag=$IMAGE_TAG \
  --set image.pullPolicy=IfNotPresent \
  --set httpRoute.enabled=true \
  --set middleware.enabled=true \
  --set common.defaults.hostAliases.enabled=true
```

To upgrade after rebuilding images:

```sh
make images kind-load-all IMAGE_TAG=$IMAGE_TAG
KUBECONFIG=$COMPUTE_KUBECONFIG kubectl rollout restart deployment -n provider-cowboys
```

The Cowboy CRD has an optional `spec.secretRefs[]` field that lists Secrets the cowboy depends on. The portal microfrontend renders one chip per reference and calls the GraphQL gateway to check whether each Secret actually exists:
- Green chip — Secret resolved (`v1.Secret(name, namespace)` returned metadata).
- Red chip — Secret missing (NotFound) or inaccessible (RBAC forbidden, network error).
- Neutral chip — existence check is in flight (transient on first paint).
Cowboys without secretRefs show no chips row at all, so the existing tiles are unchanged.
Note: the snippet below targets a consumer workspace that has the `wildwest.platform-mesh.io` APIExport bound — it is not the provider workspace from the bootstrap steps above. Today the only supported way to provision and switch into such a workspace is via the Platform Mesh CLI (`pm`); plain `kubectl`/`kubectl ws` against the provider workspace will not work because the `Cowboy` API is not served there. Use `pm` to create/select your consumer workspace first, then export its kubeconfig as `KUBECONFIG` and run:
```sh
NAMESPACE=default # change to whatever namespace you're testing in
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: colt-45-permit
  namespace: ${NAMESPACE}
type: Opaque
stringData:
  serial_number: C45-123456
  permit_date: "1881-04-15"
  issued_by: Tombstone Marshal
---
apiVersion: wildwest.platform-mesh.io/v1alpha1
kind: Cowboy
metadata:
  name: billy-the-kid
  namespace: ${NAMESPACE}
spec:
  intent: Ride the range and protect the cattle
  secretRefs:
    - name: colt-45-permit      # exists -> green chip
    - name: missing-saddlebag   # does NOT exist -> red chip
---
apiVersion: wildwest.platform-mesh.io/v1alpha1
kind: Cowboy
metadata:
  name: lonely-ranger
  namespace: ${NAMESPACE}
spec:
  intent: Ride alone
  # no secretRefs -> Secret Refs row is hidden in the UI
EOF
```

Open the Cowboys page in the Portal and refresh. The billy-the-kid tile shows one green chip (colt-45-permit) and one red chip (missing-saddlebag); lonely-ranger shows no Secret Refs row.
Clean up:
```sh
kubectl delete -n "$NAMESPACE" cowboy billy-the-kid lonely-ranger
kubectl delete -n "$NAMESPACE" secret colt-45-permit
```

Armament is a cluster-scoped catalog type populated by the armament-sync controller from an external source (currently a static hardcoded list in `pkg/external/static`). The catalog lives in the provider workspace and is replicated to consumers read-only via a kcp CachedResource bound to the `wildwest.platform-mesh.io` APIExport.
Architecture:
```
External source ──poll(ticker)──> armament-sync ──writes──> Armament CRs (provider workspace)
                                                                      │
                                                                CachedResource
                                                                      │
                                                                      ▼
                                                      consumer workspaces (read-only)
                                                                      │
                                                                      ▼
                                                      Cowboy.spec.armamentRef → name lookup
```
Two binaries, deployed independently:
| Binary | Workspace | Role |
|---|---|---|
| `wild-west` | consumer (via APIExport endpoint slice) | Reconciles Cowboy objects users create |
| `armament-sync` | provider (direct kubeconfig) | Pulls the external catalog on a timer, upserts/deletes Armament CRs |
Verify in the provider workspace that armaments appear after the syncer's first tick:
```sh
KUBECONFIG=./operator.kubeconfig kubectl get armaments
# NAME              KIND      DAMAGE   RANGE
# bowie-knife       blade     30       2
# colt-saa          revolver  50       50
# lasso             rope      5        10
# winchester-1873   rifle     80       400
```

In a consumer workspace (one that has bound the wildwest.platform-mesh.io APIExport), the same list is visible read-only and can be referenced from a Cowboy:
```sh
kubectl apply -f - <<EOF
apiVersion: wildwest.platform-mesh.io/v1alpha1
kind: Cowboy
metadata:
  name: armed-pete
  namespace: ${NAMESPACE:-default}
spec:
  intent: Patrol the canyon
  armamentRef:
    name: winchester-1873
EOF
```

Attempting to `kubectl edit armament` from the consumer workspace will fail — the cached resource is read-only. To change the catalog, modify the external source (today: edit `pkg/external/static/client.go` and rebuild) or swap the static client for a real backend implementing `external.Client`.
Assuming your provider workspace is quickstart under the providers tree:
View your provider's marketplace entry (combines APIExport + ProviderMetadata):
```sh
kubectl --server="https://localhost:8443/services/marketplace/clusters/root:providers:quickstart" get marketplaceentries -A
kubectl --server="https://localhost:8443/services/marketplace/clusters/root:providers:quickstart" get marketplaceentries -A -o yaml
```

View available API resources and content configurations:
```sh
kubectl --server="https://localhost:8443/services/contentconfigurations/clusters/root:providers:quickstart" api-resources
kubectl --server="https://localhost:8443/services/contentconfigurations/clusters/root:providers:quickstart" get contentconfigurations -A
kubectl --server="https://localhost:8443/services/contentconfigurations/clusters/root:providers:quickstart" get contentconfigurations -A -o yaml
```

The server URL follows this pattern:
```
https://<host>/services/<virtual-workspace>/clusters/root:providers:<provider-workspace>
```
Where:
- `marketplace` - Virtual workspace for marketplace entries
- `contentconfigurations` - Virtual workspace for UI content configurations
- `<provider-workspace>` - Your provider workspace name (e.g., `quickstart`)
This project uses two key code generation tools:
controller-gen is a Kubernetes code generator that:
- Generates CRD manifests from Go type definitions with `+kubebuilder` markers
- Generates DeepCopy methods (`zz_generated.deepcopy.go`) required for all Kubernetes API types
- Part of the controller-tools project from Kubernetes SIGs
apigen is a kcp-specific tool that:
- Converts standard Kubernetes CRDs into APIResourceSchemas for kcp
- APIResourceSchemas are kcp's way of defining API types that can be exported via `APIExport`
- Takes CRDs from `config/crds/` and outputs APIResourceSchemas to `config/kcp/`
Generation flow:
```
Go types (apis/) → controller-gen → CRDs (config/crds/) → apigen → APIResourceSchemas (config/kcp/)
```
| Target | Description |
|---|---|
| `make build` | Build all binaries (operator + init + armament-sync) |
| `make build-operator` | Build the wild-west operator binary |
| `make build-init` | Build the init/bootstrap binary |
| `make build-armament-sync` | Build the armament-sync controller binary |
| `make run` | Run the wild-west operator locally |
| `make run-armament-sync` | Run the armament-sync controller locally |
| `make init` | Bootstrap provider resources into workspace (requires KUBECONFIG, optional HOST_OVERRIDE) |
| `make generate` | Generate code (deepcopy) and kcp resources |
| `make manifests` | Generate CRD manifests from Go types |
| `make apiresourceschemas` | Generate APIResourceSchemas from CRDs |
| `make image-build` | Build controller container image |
| `make portal-image-build` | Build portal container image |
| `make armament-sync-image-build` | Build armament-sync container image |
| `make images` | Build all container images (controller + portal + armament-sync) |
| `make kind-load` | Load controller image into kind cluster |
| `make kind-load-portal` | Load portal image into kind cluster |
| `make kind-load-armament-sync` | Load armament-sync image into kind cluster |
| `make kind-load-all` | Load all images into kind cluster |
| `make tools` | Install all required tools (controller-gen, apigen) |
| `make fmt` | Run go fmt |
| `make vet` | Run go vet |
| `make tidy` | Run go mod tidy |
| `make help` | Display help for all targets |
- Fork this repo
- Update the API group name (replace `wildwest.platform-mesh.io`)
- Define your CRD schema and regenerate `config/crds/` and `config/kcp/`
- Update `ProviderMetadata` with your provider details
- Configure `ContentConfiguration` for your resource UI
- Update RBAC to allow binding to your APIExport
Platform Mesh supports custom UIs for providers via microfrontends. This is useful when:
- Table views aren't sufficient for your resource representation
- You need custom wizards or multi-step flows (e.g., VM creation with SSH keys)
- You want to orchestrate multiple resources in a single view
- You need custom visualizations beyond standard lists
```
┌─────────────────────────────────────────────────────────────┐
│                    Platform Mesh Portal                     │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                     Luigi Shell                     │    │
│  │                                                     │    │
│  │  Your MFE receives from Luigi context:              │    │
│  │    - token (Bearer auth for API calls)              │    │
│  │    - portalContext.crdGatewayApiUrl (GraphQL API)   │    │
│  │    - accountId (current account context)            │    │
│  │                                                     │    │
│  │  Your MFE can then:                                 │    │
│  │    - Query/mutate K8s resources via GraphQL         │    │
│  │    - Use Luigi UX manager for alerts/dialogs        │    │
│  │    - Navigate within the portal                     │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
```
- Generate a new microfrontend:

  ```sh
  git clone https://github.com/openmfp/create-micro-frontend
  cd create-micro-frontend
  npm install && npm run build
  npx create-micro-frontend portal -y
  ```

- Run locally:

  ```sh
  cd portal
  npm install
  npm start
  ```

  This serves your MFE at `http://localhost:4200`.

- Enable local development mode in the Portal to load your local MFE.

- Deploy by hosting the built files and creating a `ContentConfiguration` resource.
This repo includes a complete working example in the portal/ directory showing:
- Luigi context integration for auth and API access
- GraphQL queries/mutations for Kubernetes resources
- SAP UI5 web components for consistent Portal styling
- Local development proxy configuration
See portal/README.md for detailed documentation.
1. Luigi Context (auth & API URLs):

```ts
import { LuigiContextService } from '@luigi-project/client-support-angular';
import LuigiClient from '@luigi-project/client';

// Wait for Luigi handshake before making API calls
LuigiClient.addInitListener(() => {
  const context = luigiContextService.getContext();
  const token = context.token; // Bearer token
  const apiUrl = context.portalContext.crdGatewayApiUrl; // GraphQL endpoint
});
```

2. GraphQL API for K8s resources:
```graphql
query ListMyResources {
  my_api_group_io {
    v1alpha1 {
      MyResources {
        items { metadata { name } spec { ... } }
      }
    }
  }
}
```

3. ContentConfiguration (register your MFE):
```yaml
apiVersion: ui.platform-mesh.io/v1alpha1
kind: ContentConfiguration
metadata:
  labels:
    ui.platform-mesh.io/content-for: my-api.platform-mesh.io # Links to your APIExport
  name: my-ui
spec:
  inlineConfiguration:
    contentType: json
    content: |
      {
        "name": "my-ui",
        "luigiConfigFragment": {
          "data": {
            "nodes": [{
              "pathSegment": "my-resources",
              "label": "My Resources",
              "entityType": "main.core_platform-mesh_io_account:1",
              "url": "https://your-mfe-host/index.html"
            }]
          }
        }
      }
```

Group your MFE under a category in the sidebar:
```json
{
  "category": { "label": "Providers", "icon": "customize", "collapsible": true },
  "pathSegment": "my-resources",
  "label": "My Resources",
  ...
}
```

See Luigi navigation docs for more options.
As a provider, you are responsible for hosting your microfrontend (similar to running your operator). The MFE needs to be accessible to Portal users. Options include:
- Static hosting (S3, GCS, GitHub Pages, etc.)
- Container deployment alongside your operator
- Any web server that can serve static files
