Collecting Metrics using Spring Boot Actuator and Visualizing them using Prometheus and Grafana
Introduction
This article is the first part of a series on metrics in microservices. In it, we will discuss how to collect metrics from a Spring Boot application and how to visualize them using Prometheus and Grafana.
Metrics are one of the main components of the observability stack: time series data that is collected, stored, and visualized. Metrics let you monitor the health of a system, debug issues, track performance, plan for capacity, and make informed decisions based on how the system actually behaves.
Spring Boot Application
We need a Spring Boot application for this post. We are going to use the same Spring Boot application that was used in the distributed tracing article.
The application can be found at the link below:
Make sure that the two dependencies below are added to build.gradle.kts:
- implementation("org.springframework.boot:spring-boot-starter-actuator")
- implementation("io.micrometer:micrometer-registry-prometheus")
I configured the properties below so that the application can be monitored by Prometheus.
# omitted for brevity
management:
  endpoint.health.probes.enabled: true
  health:
    livenessstate.enabled: true
    readinessstate.enabled: true
  endpoints.web.exposure.include:
    - info
    - health
    - metrics
    - prometheus
Please make sure that the endpoint /actuator/prometheus is available.
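To verify, you can call the endpoint while the application is running. The port 8080 below is an assumption based on Spring Boot's default server port.
$ curl -s http://localhost:8080/actuator/prometheus | head
The response should be in the Prometheus exposition format, with metrics such as jvm_memory_used_bytes and process_cpu_usage.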
Prometheus Operator
What is Prometheus Operator?
Prometheus Operator is a Kubernetes operator that manages Prometheus and Alertmanager instances and their related monitoring configuration. Prometheus itself is a graduated Cloud Native Computing Foundation (CNCF) project, and the operator is widely used for monitoring Kubernetes clusters.
Install Prometheus Operator
There are several ways to install the Prometheus Operator:
- Install using YAML files
- Install using kube-prometheus
- Install using Helm chart
In this article, we will use the YAML files provided by the Prometheus Operator GitHub repository.
Install using YAML files
It is recommended to install the Prometheus Operator in the default namespace.
Run the following commands to install the CRDs and deploy the operator in the default namespace. I downloaded the latest version of the Prometheus Operator bundle from the GitHub repository and saved it to a file named prometheus-operator-bundle.yaml so that I could inspect its contents, but you can also apply it directly without saving it to a file.
$ LATEST=$(curl -s https://api.github.com/repos/prometheus-operator/prometheus-operator/releases/latest | jq -cr .tag_name)
$ curl -sL https://github.com/prometheus-operator/prometheus-operator/releases/download/${LATEST}/bundle.yaml > \
prometheus-operator-bundle.yaml
$ kubectl apply --server-side=true -f prometheus-operator-bundle.yaml -n default
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator serverside-applied
clusterrole.rbac.authorization.k8s.io/prometheus-operator serverside-applied
deployment.apps/prometheus-operator serverside-applied
serviceaccount/prometheus-operator serverside-applied
service/prometheus-operator serverside-applied
I used the --server-side=true flag to install the Prometheus Operator because the bundle is larger than 262144 bytes, which exceeds the size limit of the last-applied-configuration annotation that client-side apply writes.
Troubleshooting
Namespace issue
The manifests in the bundle hardcode the default namespace, so if you try to install the Prometheus Operator in another namespace (for example, monitoring), you will get the following error message:
the namespace from the provided object "default" does not match the namespace "monitoring". You must pass '--namespace=default' to perform this operation.
File size issue
Because the bundle is larger than 262144 bytes, you need to use the --server-side=true flag to install the Prometheus Operator. If you try to apply it without the flag, you will get the following error message:
$ kubectl apply -f prometheus-operator-bundle.yaml -n default
Error from server (Invalid): error when creating "prometheus-operator-bundle.yaml":
CustomResourceDefinition.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" is invalid:
metadata.annotations: Too long: must have at most 262144 bytes
...
Service Monitor
This section is in reference to the link below:
For more information about how to install ServiceMonitor, RBAC, Prometheus, refer to the link above.
ServiceMonitor is a custom resource that declaratively describes how Prometheus should discover and scrape metrics from services running in the Kubernetes cluster.
We need two properties of the target service to create a ServiceMonitor:
- labels
- port name
View labels and port name
We used a Helm chart to deploy the application, and it added labels to the service. We can use these labels to create a ServiceMonitor.
$ SVC_NAME=nsa2-opentelemetry-example; NS=nsa2; kubectl -n $NS get service $SVC_NAME -o yaml | yq .metadata.labels
app.kubernetes.io/instance: nsa2-opentelemetry-example
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: nsa2-opentelemetry-example
app.kubernetes.io/version: 1.16.0
helm.sh/chart: nsa2-opentelemetry-example-0.1.0
I decided to use the app.kubernetes.io/name label to create the ServiceMonitor.
The port name used by the service is also needed. We can get it from the service definition.
$ SVC_NAME=nsa2-opentelemetry-example; NS=nsa2; kubectl -n $NS get service $SVC_NAME -o yaml | yq '.spec.ports | .[].name'
http
We are going to use the http port name in the endpoints section of the ServiceMonitor.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nsa2-opentelemetry-example-servicemonitor
  namespace: nsa2
  labels:
    team: nsa2
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nsa2-opentelemetry-example
  endpoints:
    - port: http
      interval: 30s
      scheme: http
      path: /actuator/prometheus
    # - port: metrics
    #   interval: 30s
    #   scheme: http
    #   path: /metrics
Note that we are using /actuator/prometheus as the target path; this is the endpoint at which the Spring Boot Actuator exposes the metrics data. You can see the labels and port name in the ServiceMonitor configuration: the app.kubernetes.io/name label is used to select the target service in matchLabels, and the http port name is used in endpoints.
$ kubectl apply -n nsa2 -f nsa2-opentelemetry-example-servicemonitor.yaml
servicemonitor.monitoring.coreos.com/nsa2-opentelemetry-example-servicemonitor created
We created a ServiceMonitor named nsa2-opentelemetry-example-servicemonitor in the nsa2 namespace. The ServiceMonitor tells Prometheus how to scrape the metrics data from the target service.
I also added a team: nsa2 label to the ServiceMonitor itself. The Prometheus instance running in the nsa2 namespace will use this label to pick up all ServiceMonitors that carry team: nsa2 and scrape the services they select. You can add more labels to the ServiceMonitor to refine this selection.
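To confirm that the ServiceMonitor exists and carries the label that Prometheus will select on, you can inspect it with kubectl and yq, as elsewhere in this article:
$ kubectl -n nsa2 get servicemonitor nsa2-opentelemetry-example-servicemonitor -o yaml | yq .metadata.labels
team: nsa2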
Deploy Prometheus
Now that we have the ServiceMonitor, we can deploy Prometheus to scrape the metrics data from the target service. We need proper permissions to create Prometheus resources. We are going to create RBAC resources first.
Create RBAC
We need to create the following RBAC resources:
- ServiceAccount
- ClusterRole
- ClusterRoleBinding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: nsa2
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-nsa2
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-nsa2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-nsa2
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: nsa2
$ kubectl -n nsa2 apply -f prometheus-rbac.yaml
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus-nsa2 created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-nsa2 created
We created a ServiceAccount named prometheus, which is used to run the Prometheus pod. We also created a ClusterRole and a ClusterRoleBinding, both named prometheus-nsa2, to grant the proper permissions to the ServiceAccount.
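As a quick sanity check, kubectl auth can-i can verify that the ServiceAccount received the permissions Prometheus needs; both commands below should print yes:
$ kubectl auth can-i list pods --as=system:serviceaccount:nsa2:prometheus
yes
$ kubectl auth can-i get configmaps --as=system:serviceaccount:nsa2:prometheus
yes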
Create Prometheus
We need to create a Prometheus resource to deploy Prometheus. We are going to use the Prometheus configuration file below.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: nsa2
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: nsa2
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
  # For more information on the following configuration, see:
  # - https://prometheus-operator.dev/docs/api-reference/api/
  # - https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
  # /api/v1/write           (web.enable-remote-write-receiver)
  # /api/v1/otlp/v1/metrics (web.enable-otlp-receiver)
  additionalArgs:
    - name: "web.enable-remote-write-receiver"
      value: ""
    - name: "web.enable-otlp-receiver"
      value: ""
  # According to the log messages, the following configuration does not work:
  # enableFeatures:
  #   - remote-write-receiver
  #   - otlp-write-receiver
In the Prometheus configuration, we reference the ServiceMonitor that we created earlier: the serviceMonitorSelector section selects ServiceMonitors carrying the team: nsa2 label. The Prometheus Operator automatically discovers the matching ServiceMonitors and configures Prometheus to scrape their targets.
We are going to create the Prometheus resource in the nsa2 namespace to scrape the metrics data from the target service.
$ kubectl apply -n nsa2 -f prometheus.yaml
prometheus.monitoring.coreos.com/prometheus created
The ServiceMonitor we created earlier provides the scrape target information to this Prometheus instance, and the operator keeps the Prometheus configuration in sync with it.
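You can watch the operator bring up Prometheus with the command below. A StatefulSet (conventionally named prometheus-prometheus, the prometheus- prefix plus the resource name) and its pods should appear and reach the Running state:
$ kubectl -n nsa2 get prometheus,statefulsets,pods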
Adding Persistent Volume
Refer to the link below to add a Persistent Volume to the Prometheus deployment.
Exposing the Prometheus service
We can expose the Prometheus service to access the Prometheus UI. Apply the manifest below to create the service.
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: nsa2
spec:
  type: NodePort
  ports:
    - name: web
      nodePort: 30900
      port: 9090
      protocol: TCP
      targetPort: web
  selector:
    prometheus: prometheus
$ kubectl apply -n nsa2 -f prometheus-service.yaml
service/prometheus created
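You can confirm the service and its NodePort mapping before connecting:
$ kubectl -n nsa2 get service prometheus
If the cluster's nodes are directly reachable, the UI is also available on node port 30900.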
Now that we have Prometheus running, we can access the Prometheus UI by port-forwarding the Prometheus service.
$ kubectl -n nsa2 port-forward service/prometheus 9090:9090
Now you can access the Prometheus UI at http://localhost:9090.
Navigate to Status → Targets to see the targets that Prometheus is scraping; you should see the target that we created earlier. Alternatively, go directly to http://localhost:9090/targets.

Scale out the target service
Because we are using a ServiceMonitor, Prometheus automatically discovers the pods of the target service and scrapes their metrics.
Let’s execute the command below to scale out the target service to see how Prometheus discovers the new pods.
$ kubectl -n nsa2 scale deployment nsa2-opentelemetry-example --replicas=3
Now you can see the new pods in the Prometheus UI.

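The same information is available from the Prometheus HTTP API, which is handy for scripting. The command below assumes the port-forward from earlier is still running; the number of active targets should match the replica count:
$ curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets | length'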
Prometheus Graph
You can use the Prometheus UI to create a graph that visualizes the metrics data. Navigate to the Graph tab, or go directly to http://localhost:9090/graph.
Click the Metrics Explorer button, located to the left of the Execute button, to browse the available metrics.

To visualize the metrics data, enter a query in the query input box; the screenshot below uses the process_cpu_usage metric.
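A couple of example queries are shown below as a starting point. The namespace and pod labels are added by the Prometheus Operator's scrape configuration, an assumption you can confirm on the Targets page:
process_cpu_usage{namespace="nsa2"}
max by (pod) (process_cpu_usage{namespace="nsa2"})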

Grafana
What is Grafana?
Grafana is an open-source analytics and monitoring platform for visualizing time series data. With dashboards built on data sources such as Prometheus, you can monitor the health of a system, debug issues, track performance, plan for capacity, and better understand how the system behaves.
Install Grafana using Helm
This section is in reference to the link below:
$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
$ helm search repo grafana
$ kubectl get namespace nsa2 || kubectl create namespace nsa2
The command above creates the namespace nsa2 if it does not already exist.
$ helm search repo grafana/grafana
NAME                            CHART VERSION  APP VERSION  DESCRIPTION
grafana/grafana                 8.4.4          11.1.3       The leading tool for querying and visualizing t...
grafana/grafana-agent           0.42.0         v0.42.0      Grafana Agent
grafana/grafana-agent-operator  0.4.1          0.42.0       A Helm chart for Grafana Agent Operator
grafana/grafana-sampling        0.1.1          v0.40.2      A Helm chart for a layered OTLP tail sampling a...
values.yaml
We are going to create a file named nsa2-grafana-values.yaml to store the values for the Grafana Helm chart.
$ helm show values grafana/grafana > nsa2-grafana-values.yaml
This saves the full default configuration of the Grafana Helm chart to nsa2-grafana-values.yaml. The values below are the ones we change for this installation.
resources:
  limits:
    cpu: 200m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 128Mi

nodeSelector:
  agentpool: depnodes

persistence:
  enabled: true

admin:
  existingSecret: grafana-admin-credentials

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus:9090
        # Access mode - proxy (server in the UI) or direct (browser in the UI).
        access: proxy
        isDefault: true
The configuration file above includes the values we use to install Grafana. To save data in a persistent volume, we set persistence.enabled to true. We configure the Prometheus data source under datasources.datasources.yaml. Finally, we set admin.existingSecret to grafana-admin-credentials so that the admin credentials are read from a secret, which we will create in the next step.
Create a secret for the admin password
We are going to create a secret to store the admin credentials for Grafana. The admin user is admin and the password is admin for this example. You should change the password to a more secure one.
$ kubectl create secret generic grafana-admin-credentials \
--from-literal=admin-user=admin \
--from-literal=admin-password=admin -n nsa2 \
--dry-run=client -o yaml \
| yq eval 'del(.metadata.creationTimestamp)' \
> grafana-admin-credentials.yaml
$ kubectl apply -f grafana-admin-credentials.yaml
We created a secret named grafana-admin-credentials that stores the admin credentials for Grafana.
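You can decode the stored password back out of the secret to double-check it:
$ kubectl -n nsa2 get secret grafana-admin-credentials \
    -o jsonpath='{.data.admin-password}' | base64 -d
admin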
Install Grafana
Run the command below to install the Grafana Helm chart.
$ helm install grafana grafana/grafana --namespace nsa2 -f nsa2-grafana-values.yaml
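After the install, wait for the Grafana deployment to become ready. The deployment name grafana below assumes the chart's default naming for this release name:
$ kubectl -n nsa2 rollout status deployment/grafana
deployment "grafana" successfully rolled out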
Uninstall Grafana
Run the commands below to uninstall the Grafana Helm chart.
$ helm uninstall grafana -n nsa2
$ kubectl -n nsa2 delete secret grafana-admin-credentials
$ kubectl -n nsa2 delete pvc -l app.kubernetes.io/name=grafana
Expose Grafana
$ kubectl -n nsa2 port-forward svc/grafana 3000:80
You can access Grafana at http://localhost:3000.

Use the admin user and password stored in the grafana-admin-credentials secret to sign in.

We can see Prometheus as a data source in the image above; it was configured in the nsa2-grafana-values.yaml file.
Here is an example of a dashboard that shows the CPU usage of the process.

Forced to change the password
Grafana forces you to change the admin password when you log in for the first time with the default password. The default password used to be admin; as of 2024-09-23, it is changeme.
Conclusion
In this article, we explored the process of collecting metrics from a Spring Boot application, providing detailed steps on how to monitor and analyze the application’s performance. We guided you through the installation of Prometheus and Grafana on a Kubernetes cluster, explaining how Prometheus gathers metrics data and Grafana visualizes it. Additionally, we demonstrated how to create dashboards in Grafana to effectively interpret and track the collected metrics, helping you gain a deeper understanding of your application’s behavior and resource usage.