This project automatically generates an accurate and up-to-date communication
flows matrix that can be delivered to customers as part of the product documentation,
covering all ingress flows of OpenShift (multi-node and single-node deployments).
This library leverages the EndpointSlice resource to identify the ports the
cluster uses for ingress traffic. Relevant EndpointSlices include those
referencing host-networked pods, NodePort services, and LoadBalancer services.
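For illustration, an EndpointSlice for a NodePort service might look like the following sketch (names, ports, and addresses are hypothetical):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12
  namespace: my-namespace
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
  - name: https
    protocol: TCP
    port: 8443
endpoints:
  - addresses:
      - 10.0.0.5
    nodeName: worker-0
    targetRef:
      kind: Pod
      name: my-service-pod
```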
The `ss` command, a Linux utility, lists the open ports on
the host: `ss -anplt` for TCP, `ss -anplu` for UDP.
For example, consider the following ss entry:
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=6187,fd=20))
The ss package provides the CreateSSOutputFromNode function, which runs
the `ss` command on each node and converts the output into a corresponding ComDetails list.
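A minimal sketch of that conversion step, assuming a regex-based parser: the real CreateSSOutputFromNode and the ComDetails type live in the library; the function name and extracted fields here are illustrative only.

```go
package main

import (
	"fmt"
	"regexp"
)

// ssRe captures the listening port and the owning process name from a
// single `ss -anplt` line, e.g.
// LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=6187,fd=20))
var ssRe = regexp.MustCompile(`:(\d+)\s+\S+\s+users:\(\("([^"]+)"`)

// parseSSLine extracts (port, process) from one ss output line.
// Illustrative helper, not the library's API.
func parseSSLine(line string) (port, process string, ok bool) {
	m := ssRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	line := `LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=6187,fd=20))`
	port, proc, _ := parseSSLine(line)
	fmt.Println(port, proc) // 10248 kubelet
}
```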
Use the generate Makefile target to create the matrix.
Add additional entries to the matrix from a custom file, using
the variables CUSTOM_ENTRIES_PATH and CUSTOM_ENTRIES_FORMAT.
Examples are available in the example-custom-entries files.
The following environment variables configure the generation:
FORMAT (csv/json/yaml/nft/butane/mc)
DEST_DIR (path to the directory containing the artifacts)
CUSTOM_ENTRIES_PATH (path to the file containing custom entries to add to the matrix)
CUSTOM_ENTRIES_FORMAT (format of the custom entries file: json, yaml, or csv)
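For example, generating a CSV matrix with custom entries might look like this (the paths are illustrative):

```shell
FORMAT=csv \
DEST_DIR=./artifacts \
CUSTOM_ENTRIES_PATH=./example-custom-entries.yaml \
CUSTOM_ENTRIES_FORMAT=yaml \
make generate
```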
In some clusters, a subset of worker nodes may run additional services (e.g., ingress controllers, storage agents) that require separate firewall configurations. By default, all nodes in the same MachineConfigPool share a single set of firewall rules. Custom node groups let you split selected nodes into a separate group so they get their own Butane/MachineConfig CR with the correct ports.
Nodes are selected using standard Kubernetes label selectors, consistent with how MachineConfigPools select nodes. The flag format is groupName=labelSelector, and it is repeatable for multiple groups.
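For example, two custom groups could be defined as follows (the group names, node labels, and CLI entrypoint are illustrative; see the oc commatrix plugin documentation for exact usage):

```shell
oc commatrix generate \
  --custom-node-group storage=node-role.kubernetes.io/storage= \
  --custom-node-group infra=node-role.kubernetes.io/infra=
```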
Custom node groups affect all output formats: in CSV/JSON/YAML the nodeGroup field reflects the custom group name; in NFT/Butane/MC a separate file is generated per group. Static entries (SSH, kubelet, node-exporter, etc.) are automatically included in the custom group based on the original node role.
Each node can only match the selector of one custom group. If a node's labels match selectors from multiple groups, an error is returned. A selector that matches no nodes also returns an error, to catch typos early. When the flag is omitted, behavior is unchanged.
Important (Butane/MC formats): The generated Butane/MachineConfig CRs for custom groups can only be applied if the nodes are already placed in a matching MachineConfigPool. You must create the custom MCP first, then apply the generated CR. For NFT/CSV/JSON/YAML formats, the output can be used directly without this prerequisite.
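A matching custom MachineConfigPool might be sketched as follows before applying the generated CR (pool name and node label are hypothetical):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: storage
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, storage]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/storage: ""
```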
For CLI usage examples, see the oc commatrix plugin documentation.
The generated artifacts are:
communication-matrix - The generated communication matrix.
ss-generated-matrix - The communication matrix generated from the `ss` command output.
matrix-diff-ss - Shows the variance between two matrices. Entries present in the communication matrix but absent in the ss matrix are marked with '+', while entries present in the ss matrix but not in the communication matrix are marked with '-'.
raw-ss-tcp - The raw `ss` output for TCP.
raw-ss-udp - The raw `ss` output for UDP.
Note: The ss-generated-matrix, matrix-diff-ss, raw-ss-tcp, and raw-ss-udp artifacts are only generated for CSV, JSON, and YAML formats. For NFT, Butane, and MachineConfig formats, the ss results are merged into the communication matrix.
Each record describes a flow with the following information:
direction Data flow direction (currently ingress only)
protocol IP protocol (TCP/UDP/SCTP/etc)
port Flow port number
namespace EndpointSlice Namespace
service EndpointSlice owner Service name
pod EndpointSlice target Pod name
container Port owner Container name
nodeGroup Resolved node group. Resolution logic:
- If --custom-node-group label selector matches the node: nodeGroup = that group name
- Else if MCP API available: nodeGroup = pool name parsed from node annotation
machineconfiguration.openshift.io/currentConfig (e.g., master, worker, custom-ws)
- Else if label hypershift.openshift.io/nodePool present: nodeGroup = that label value
- Else: nodeGroup = node role (e.g., master, worker)
optional Optional or mandatory flow for OpenShift
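As an illustration, a single CSV record with the fields above might look like this (the values and column order are hypothetical):

```
direction,protocol,port,namespace,service,pod,container,nodeGroup,optional
ingress,TCP,9100,openshift-monitoring,node-exporter,node-exporter,kube-rbac-proxy,worker,false
```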
When associating a node to a MachineConfigPool (MCP), the pool is derived directly from the node annotation machineconfiguration.openshift.io/currentConfig, expected in the form rendered-<pool>-<hash>. The pool name is obtained by removing the rendered- prefix and trimming the trailing -<hash>.
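That derivation can be sketched as follows (the helper name is illustrative, not the library's API; it assumes the annotation is well-formed as rendered-&lt;pool&gt;-&lt;hash&gt;):

```go
package main

import (
	"fmt"
	"strings"
)

// poolFromCurrentConfig derives the MCP name from the value of the
// machineconfiguration.openshift.io/currentConfig node annotation,
// expected in the form rendered-<pool>-<hash>: strip the "rendered-"
// prefix, then trim the trailing "-<hash>" segment.
func poolFromCurrentConfig(cfg string) string {
	cfg = strings.TrimPrefix(cfg, "rendered-")
	if i := strings.LastIndex(cfg, "-"); i >= 0 {
		return cfg[:i]
	}
	return cfg
}

func main() {
	fmt.Println(poolFromCurrentConfig("rendered-worker-0123abcd"))    // worker
	fmt.Println(poolFromCurrentConfig("rendered-custom-ws-0123abcd")) // custom-ws
}
```

Note that pool names may themselves contain dashes (e.g. custom-ws), which is why only the last "-&lt;hash&gt;" segment is trimmed.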
If MCPs are not present in the cluster (e.g., HyperShift), nodeGroup is computed as:
- use the hypershift.openshift.io/nodePool label when available
- otherwise fall back to the node role
The resolved group is recorded in the nodeGroup field for CSV/JSON/YAML outputs. NFT, Butane, and MachineConfig outputs are generated per node pool (MCP) or node role accordingly. The Butane and MachineConfig formats also produce a node-disruption-policy.yaml patch file to avoid full node reboots when nftables rules are updated.