This Agents.md file provides comprehensive guidance for AI assistants and coding agents (like Claude, Gemini, Cursor, and others) to work with this codebase.
This repository contains the kubernetes-mcp-server project, a Go-based Model Context Protocol (MCP) server that provides native Kubernetes and OpenShift cluster management capabilities without external dependencies. It enables AI assistants (like Claude, Gemini, Cursor, and others) to interact with Kubernetes clusters over MCP.
- Go package layout follows the standard Go conventions:
  - `cmd/kubernetes-mcp-server/` – main application entry point using the Cobra CLI framework.
  - `pkg/` – libraries grouped by domain:
    - `api/` – API-related functionality, tool definitions, and toolset interfaces.
    - `config/` – configuration management.
    - `helm/` – Helm chart operations integration.
    - `http/` – HTTP server and authorization middleware.
    - `kubernetes/` – Kubernetes client management, authentication, and access control.
    - `mcp/` – Model Context Protocol (MCP) server implementation with tool registration and STDIO/HTTP support.
    - `output/` – output formatting and rendering.
    - `toolsets/` – toolset registration and management for MCP tools.
    - `version/` – version information management.
- `.github/` – GitHub-related configuration (Actions workflows, issue templates...).
- `docs/` – documentation files (see the Documentation section below).
- `npm/` – Node packages that wrap the compiled binaries for distribution through npmjs.com.
- `python/` – Python package providing a script that downloads the correct platform binary from the GitHub releases page and runs it, for distribution through pypi.org.
- `Dockerfile` – container image description file to distribute the server as a container image.
- `Makefile` – tasks for building, formatting, linting, and testing.
Implement new functionality in the Go sources under cmd/ and pkg/.
The JavaScript (npm/) and Python (python/) directories only wrap the compiled binary for distribution (npm and PyPI).
Most changes will not require touching them unless the version or packaging needs to be updated.
The project uses a toolset-based architecture for organizing MCP tools:
- Tool definitions are created in `pkg/api/` using the `ServerTool` struct.
- Toolsets group related tools together (e.g., config tools, core Kubernetes tools, Helm tools).
- Registration happens in `pkg/toolsets/`, where toolsets are registered at initialization.
- Each toolset lives in its own subdirectory under `pkg/toolsets/` (e.g., `pkg/toolsets/config/`, `pkg/toolsets/core/`, `pkg/toolsets/helm/`).
When adding a new tool:
- Define the tool handler function that implements the tool's logic.
- Create a `ServerTool` struct with the tool definition and handler.
- Add the tool to an appropriate toolset (or create a new toolset if needed).
- Register the toolset in `pkg/toolsets/` if it's a new toolset.
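Sketched in Go, the steps above look roughly like the following. Note that `ServerTool`, `Toolset`, and `register` here are simplified stand-ins with assumed field names for illustration, not the actual types from `pkg/api` and `pkg/toolsets`:

```go
package main

import "fmt"

// Illustrative shapes only: the real ServerTool struct lives in pkg/api and
// registration happens in pkg/toolsets; the field names here are assumptions
// made for this sketch, not the project's actual API.
type ServerTool struct {
	Name        string
	Description string
	Handler     func(args map[string]any) (string, error)
}

type Toolset struct {
	Name  string
	Tools []ServerTool
}

var registry []Toolset

// register stands in for the init-time registration done in pkg/toolsets/.
func register(ts Toolset) { registry = append(registry, ts) }

// buildDemoToolset walks the four steps: handler, ServerTool, toolset, registration.
func buildDemoToolset() string {
	// 1. Define the tool handler that implements the tool's logic.
	echo := func(args map[string]any) (string, error) {
		return fmt.Sprintf("%v", args["msg"]), nil
	}
	// 2. Wrap it in a ServerTool with the tool definition.
	tool := ServerTool{Name: "echo", Description: "echoes its input", Handler: echo}
	// 3. Add it to a toolset, and 4. register the toolset.
	register(Toolset{Name: "demo", Tools: []ServerTool{tool}})
	out, _ := registry[len(registry)-1].Tools[0].Handler(map[string]any{"msg": "hello"})
	return out
}

func main() {
	fmt.Println(buildDemoToolset())
}
```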
Use the provided Makefile targets:
```shell
# Format source and build the binary
make build

# Build for all supported platforms
make build-all-platforms
```

`make build` will run `go fmt` and `go mod tidy` before compiling.
The resulting executable is `kubernetes-mcp-server`.
The README demonstrates running the server via `mcp-inspector`:

```shell
make build
npx @modelcontextprotocol/inspector@latest $(pwd)/kubernetes-mcp-server
```

To run the server locally, you can use npx, uvx, or execute the binary directly:

```shell
# Using npx (Node.js package runner)
npx -y kubernetes-mcp-server@latest

# Using uvx (Python package runner)
uvx kubernetes-mcp-server@latest

# Binary execution
./kubernetes-mcp-server
```

This MCP server is designed to run both locally and remotely.
When running locally, the server connects to a Kubernetes or OpenShift cluster using the kubeconfig file.
It reads the kubeconfig from the --kubeconfig flag, the KUBECONFIG environment variable, or defaults to ~/.kube/config.
This means that npx -y kubernetes-mcp-server@latest on a workstation will talk to whatever cluster your current kubeconfig points to (e.g. a local Kind cluster).
When running remotely, the server can be deployed as a container image in a Kubernetes or OpenShift cluster. The server can be run as a Deployment, StatefulSet, or any other Kubernetes resource that suits your needs. The server will automatically use the in-cluster configuration to connect to the Kubernetes API server.
Run all Go tests with:
```shell
make test
```

The test suite relies on the `setup-envtest` tooling from `sigs.k8s.io/controller-runtime`.
The first run downloads a Kubernetes envtest environment from the internet, so network access is required.
Without it some tests will fail during setup.
This project follows specific testing patterns to ensure consistency, maintainability, and quality. When writing tests, adhere to the following guidelines:
- Use `testify/suite` for organizing tests into suites.
- Tests should be structured using test suites that embed `suite.Suite`.
- Each test file should have a corresponding suite struct (e.g., `UnstructuredSuite`, `KubevirtSuite`).
- Use the `suite.Run()` function to execute test suites.
Example:
```go
type MyTestSuite struct {
	suite.Suite
}

func (s *MyTestSuite) TestSomething() {
	s.Run("descriptive scenario name", func() {
		// test implementation
	})
}

func TestMyFeature(t *testing.T) {
	suite.Run(t, new(MyTestSuite))
}
```

- Test the public API only – tests should be black-box and not access internal/private functions.
- No mocks - use real implementations and integration testing where possible
- Behavior over implementation - test what the code does, not how it does it
- Focus on observable behavior and outcomes rather than internal state
- Use nested subtests with `s.Run()` to organize related test cases.
- Descriptive names – subtest names should clearly describe the scenario being tested.
- Group related scenarios together under a parent test (e.g., "edge cases", "with valid input")
Example structure:
```go
func (s *MySuite) TestFeature() {
	s.Run("valid input scenarios", func() {
		s.Run("handles simple case correctly", func() {
			// test code
		})
		s.Run("handles complex case with nested data", func() {
			// test code
		})
	})
	s.Run("edge cases", func() {
		s.Run("returns error for nil input", func() {
			// test code
		})
		s.Run("handles empty input gracefully", func() {
			// test code
		})
	})
}
```

- One assertion per test case – each `s.Run()` block should ideally test one specific behavior.
- Use `testify` assertion methods: `s.Equal()`, `s.True()`, `s.False()`, `s.Nil()`, `s.NotNil()`, etc.
- Provide clear assertion messages when the failure reason might not be obvious.
Example:
```go
s.Run("returns expected value", func() {
	result := functionUnderTest()
	s.Equal("expected", result, "function should return the expected string")
})
```

- Aim for high test coverage of the public API.
- Add edge case tests to cover error paths and boundary conditions
- Common edge cases to consider:
  - Nil/null inputs
  - Empty strings, slices, maps
  - Negative numbers or invalid indices
  - Type mismatches
  - Malformed input (e.g., invalid paths, formats)
- Never ignore errors in production code
- Always check and handle errors from functions that return them
- In tests, use `s.Require().NoError(err)` for operations that must succeed for the test to be valid.
- Use `s.Error(err)` or `s.NoError(err)` for testing error conditions.
Example:
```go
s.Run("returns error for invalid input", func() {
	result, err := functionUnderTest(invalidInput)
	s.Error(err, "expected error for invalid input")
	s.Nil(result, "result should be nil when error occurs")
})
```

- Create reusable test helpers in `internal/test/` for common testing utilities.
- Test helpers should be generic and reusable across multiple test files.
- Document test helpers with clear godoc comments explaining their purpose and usage
Example from this project:
```go
// FieldString retrieves a string field from an unstructured object using JSONPath-like notation.
// Examples:
//   - "spec.runStrategy"
//   - "spec.template.spec.volumes[0].containerDisk.image"
func FieldString(obj *unstructured.Unstructured, path string) string {
	// implementation
}
```

Downstream builds may override defaults at two layers, and tests must work regardless of which overrides are active:
- Static config overrides – `pkg/config/config_default_overrides.go` exposes a stub `defaultOverrides()` that downstream forks populate to change defaults such as `ReadOnly`, `Toolsets`, or `ToolOverrides`.
- Per-toolset overrides – toolsets like `kiali` and `kubevirt` carry an `internal/defaults/defaults_override.go` stub that downstream uses to rebrand the toolset name and description (e.g. `kiali` → `ossm`).
The rules below keep the test suite portable across both layers.
`BaseMcpSuite.SetupTest()` initializes `s.Cfg` from `config.BaseDefault()` (pure upstream defaults), then layers test-specific tweaks like `ListOutput = "yaml"` on top. Any custom suite (`ToolsetsSuite`, etc.) must do the same – using `config.Default()` would let downstream overrides leak into the test environment.
The dominant pattern across the test suite is to unmarshal TOML directly into the existing `s.Cfg`. This keeps the configuration visible inline (matching how a real config file looks) and automatically preserves the runtime fields the suite already set (`KubeConfig`, `ListOutput`, `ReadOnly`, etc.):
```go
s.Require().NoError(toml.Unmarshal([]byte(`
toolsets = [ "kubevirt" ]
`), s.Cfg), "Expected to parse toolsets config")
```

See `pkg/mcp/kubevirt_test.go`, `pkg/mcp/pods_exec_test.go`, and `pkg/mcp/helm_test.go` for representative examples.
`toml.Unmarshal` only does a single decode pass. Fields like `toolset_configs` and `cluster_provider_configs` are typed as `map[string]toml.Primitive` and need a second parse phase that uses the TOML metadata returned by `config.ReadToml`. If your TOML uses one of those sections, `toml.Unmarshal` will leave them unparsed and `GetToolsetConfig` / `GetProviderConfig` will return nothing.

In that case, use `ReadToml` and restore the runtime fields explicitly:
```go
kubeConfig := s.Cfg.KubeConfig
listOutput := s.Cfg.ListOutput
readOnly := s.Cfg.ReadOnly

cfg, err := config.ReadToml([]byte(tomlStr))
s.Require().NoError(err)

s.Cfg = cfg
s.Cfg.KubeConfig = kubeConfig
s.Cfg.ListOutput = listOutput
s.Cfg.ReadOnly = readOnly
```

`pkg/mcp/kiali_test.go` and the `TestToolHandlerReceivesToolsetConfig` case in `pkg/mcp/mcp_config_provider_test.go` follow this pattern.
Note that the TOML key under `[toolset_configs.<key>]` is the literal name the toolset registers its parser under, not the toolset's exposed name. The Kiali toolset registers its parser as `"kiali"` (see `pkg/kiali/config.go`), so even when downstream rebrands the toolset to `ossm`, the section is still `[toolset_configs.kiali]` and `params.GetToolsetConfig("kiali")` is the correct lookup. Resolve the toolset name dynamically via `GetName()` (see below), but keep the `toolset_configs` key hardcoded – using the dynamic name there will silently produce `(nil, false)`.
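To make this concrete, a hypothetical downstream config for the rebranded toolset would still use the upstream parser key for its section (the values here are placeholders, not real options):

```toml
# Toolset enablement uses the exposed (possibly rebranded) name...
toolsets = [ "ossm" ]

# ...but the section key is the literal parser name, always "kiali".
[toolset_configs.kiali]
# kiali-specific options go here
```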
Reload tests sometimes need a separate `*StaticConfig` derived from `config.Default()`. This is the deliberate exception to the "start from `BaseDefault()`" rule above: a reload simulates the SIGHUP path the running server takes, which goes through `Default()` and therefore sees any downstream overrides. Carry across the runtime fields from `s.Cfg` so the candidate config still points at the test kubeconfig and respects the suite's `ReadOnly` value (see `pkg/mcp/mcp_reload_test.go`):
```go
candidateStatic := config.Default()
candidateStatic.KubeConfig = s.Cfg.KubeConfig
candidateStatic.ReadOnly = s.Cfg.ReadOnly
```

Hardcoding the toolset list assumes upstream defaults and breaks downstream builds that ship a different default set:
```go
// Good - preserves whatever the suite already has
s.Cfg.Toolsets = append(s.Cfg.Toolsets, "helm")

// Bad - assumes upstream defaults
s.Cfg.Toolsets = []string{"core", "config", "helm"}
```

When a test invokes a tool from a rebrandable toolset, ask the toolset for its current name rather than hardcoding the upstream string. `kiali_test.go` does this so the same test works whether the toolset is exposed as `kiali` upstream or `ossm` downstream:
```go
s.toolsetName = (&kialiToolset.Toolset{}).GetName()
// ...
s.CallTool(fmt.Sprintf("%s_get_trace_details", s.toolsetName), …)
```

`ReadOnly`, `Toolsets`, `ToolOverrides`, `ListOutput`, and toolset names are all subject to downstream override. Read them from `s.Cfg` or resolve them at runtime – never assume the upstream value.
Good examples of these patterns can be found in:
- `internal/test/unstructured_test.go` – demonstrates proper use of testify/suite, nested subtests, and edge case testing
- `pkg/mcp/kubevirt_test.go` – merges TOML into `s.Cfg` for toolset selection; behavior-based MCP-layer testing
- `pkg/kubernetes/manager_test.go` – illustrates testing with proper setup/teardown and subtests
- `pkg/mcp/pods_exec_test.go` – inline TOML for `denied_resources` configuration
- `pkg/mcp/confirmation_test.go` – inline TOML for `confirmation_rules` and `confirmation_fallback`
- `pkg/mcp/helm_test.go` – appends to `s.Cfg.Toolsets` instead of resetting it
- `pkg/mcp/kiali_test.go` – resolves the toolset name dynamically via `GetName()` and uses `ReadToml` for `toolset_configs`
- `pkg/mcp/toolsets_test.go` – uses `BaseDefault()` in `ToolsetsSuite.SetupTest()` to avoid downstream config leaking
- `pkg/mcp/mcp_reload_test.go` – inherits `ReadOnly` from `s.Cfg` when building candidate configs for reload tests
- `pkg/mcp/mcp_config_provider_test.go` – demonstrates inheriting `ReadOnly` from suite config when `toolset_configs` requires `ReadToml`
- `pkg/mcp/require_tls_test.go` – shows both patterns side by side: `CoreRequireTLSSuite` merges plain `require_tls` into `s.Cfg`, while `KialiRequireTLSSuite` keeps `ReadToml` for `[toolset_configs.kiali]`
Static analysis is performed with golangci-lint:
```shell
make lint
```

The lint target downloads the specified golangci-lint version if it is not already present under `_output/tools/bin/`.
Beyond the basic build, test, and lint targets, the Makefile provides additional utilities:
Local Development:
```shell
# Setup a complete local development environment with Kind cluster
make local-env-setup

# Tear down the local Kind cluster
make local-env-teardown

# Show Keycloak status and connection info (for OIDC testing)
make keycloak-status

# Tail Keycloak logs
make keycloak-logs
```

Distribution and Publishing:

```shell
# Copy compiled binaries to each npm package
make npm-copy-binaries

# Publish the npm packages
make npm-publish

# Publish the Python packages
make python-publish

# Update README.md and docs/configuration.md with the latest toolsets
make update-readme-tools
```

Run `make help` to see all available targets with descriptions.
When introducing new modules, run `make tidy` so that `go.mod` and `go.sum` remain tidy.
- Go modules target Go 1.25 (see `go.mod`).
- Tests are written with the standard library `testing` package.
- Build, test and lint steps are defined in the `Makefile` – keep them working.
The docs/ directory contains user-facing documentation:
- `docs/README.md` – documentation index and navigation
- `docs/configuration.md` – complete TOML configuration reference (all `StaticConfig` options, drop-in configuration, dynamic reload)
- `docs/prompts.md` – MCP Prompts configuration guide
- `docs/logging.md` – MCP Logging guide (automatic K8s error logging, secret redaction)
- `docs/OTEL.md` – OpenTelemetry observability setup
- `docs/KIALI.md` – Kiali toolset configuration
- `docs/getting-started-kubernetes.md` – Kubernetes ServiceAccount setup
- `docs/getting-started-claude-code.md` – Claude Code CLI integration
- `docs/KEYCLOAK_OIDC_SETUP.md` – OAuth/OIDC developer setup
The docs/specs/ directory contains feature specifications (living documentation for coding agents):
- `docs/specs/validation.md` – pre-execution validation layer specification (resource existence, schema, RBAC)
- Use lowercase filenames for new documentation files (e.g., `configuration.md`, `prompts.md`).
- The toolsets table, tools, prompts, resources, and resource templates in `README.md` and `docs/configuration.md` are auto-generated – use `make update-readme-tools` to update them after modifying toolsets.
- Both files use marker pairs for the generated content:
  - `<!-- AVAILABLE-TOOLSETS-START -->` / `<!-- AVAILABLE-TOOLSETS-END -->` (toolset summary table)
  - `<!-- AVAILABLE-TOOLSETS-TOOLS-START -->` / `<!-- AVAILABLE-TOOLSETS-TOOLS-END -->` (tool details)
  - `<!-- AVAILABLE-TOOLSETS-PROMPTS-START -->` / `<!-- AVAILABLE-TOOLSETS-PROMPTS-END -->` (prompt details)
  - `<!-- AVAILABLE-TOOLSETS-RESOURCES-START -->` / `<!-- AVAILABLE-TOOLSETS-RESOURCES-END -->` (resource details)
  - `<!-- AVAILABLE-TOOLSETS-RESOURCES-TEMPLATES-START -->` / `<!-- AVAILABLE-TOOLSETS-RESOURCES-TEMPLATES-END -->` (resource template details)
The server is distributed as a binary executable, a Docker image, an npm package, and a Python package.
- Native binaries for Linux, macOS, and Windows are available in the GitHub releases.
- A container image (Docker) is built and pushed to the `quay.io/containers/kubernetes_mcp_server` repository.
- An npm package is available at npmjs.com. It wraps the platform-specific binary and provides a convenient way to run the server using `npx`.
- A Python package is available at pypi.org. It provides a script that downloads the correct platform binary from the GitHub releases page and runs it, offering a convenient way to run the server using `uvx` or `python -m kubernetes_mcp_server`.