Service Performance Monitoring is a high-level monitoring dashboard within Layerlog that enables you to monitor your tracing services and operations. This integration allows you to configure Service Performance Monitoring with the OpenTelemetry collector and send spans and span metrics from your OpenTelemetry installation to Layerlog.
This integration is currently only available as a Beta version. To enable it for your account, contact Layerlog first.
Architecture overview
This integration is based on OpenTelemetry. It works as an add-on to existing OpenTelemetry installations. If you need to set up OpenTelemetry first, refer to our documentation on OpenTelemetry.
The integration includes:
- Configuring the OpenTelemetry collector to receive spans generated by your application instrumentation and send the spans and span metrics to Layerlog
On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the spans and span metrics data to your Layerlog account.
Set up your locally hosted OpenTelemetry installation to send spans and span metrics to Layerlog
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or another supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- A metrics account setup
- An active account with Layerlog
- A Layerlog span metrics account
The span metrics account name should include your tracing account name. For example, if your tracing account name is “tracing”, your metrics account could be named “tracing-metrics”.
Add Layerlog exporter to your OpenTelemetry collector
Add the following parameters to the configuration file of your OpenTelemetry collector:
- Under the receivers list:
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
- Under the exporters list:
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
- Under the processors list:
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
- Under the service: pipelines list:
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
An example configuration file looks as follows:
receivers:
jaeger:
protocols:
grpc:
thrift_binary:
thrift_compact:
thrift_http:
otlp/spanmetrics:
protocols:
grpc:
endpoint: :12345
otlp:
protocols:
grpc:
endpoint: :4317
prometheus:
config:
global:
external_labels:
p8s_logzio_name: spm-otel
scrape_configs:
- job_name: 'spm'
scrape_interval: 15s
static_configs:
- targets: [ "0.0.0.0:8889" ]
exporters:
logzio:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
prometheusremotewrite/spm:
endpoint: https://<<LISTENER-HOST>>:8053
headers:
Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
prometheus:
endpoint: "localhost:8889"
logging:
processors:
batch:
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
# Additional list of dimensions on top of:
# - service.name
# - operation
# - span.kind
# - status.code
dimensions:
# If the span is missing http.method, the processor will insert
# the http.method dimension with value 'GET'.
# For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
# - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.method
default: GET
# If a default is not provided, the http.status_code dimension will be omitted
# if the span does not contain http.status_code.
# For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
# - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
# - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
- name: http.status_code
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [jaeger]
processors: [spanmetrics,batch]
exporters: [logzio]
metrics/spanmetrics:
# This receiver is just a dummy and never used.
# Added to pass validation requiring at least one receiver in a pipeline.
receivers: [otlp/spanmetrics]
exporters: [prometheus]
metrics:
receivers: [otlp,prometheus]
exporters: [logging,prometheusremotewrite/spm]
telemetry:
logs:
level: "debug"
Start the collector
Run the following command:
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
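For example, on a Linux AMD64 machine with the collector binary in the current directory, the command could look like the following. The binary name here is illustrative, so use the one you actually downloaded:
./otelcontribcol_linux_amd64 --config ./config.yaml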
Run the application
Run the application to generate traces.
Check Layerlog for your metrics
Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.
Set up your OpenTelemetry installation using containerized collector to send spans and span metrics to Layerlog
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or another supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- A metrics account setup
- An active account with Layerlog
- A Layerlog span metrics account
The span metrics account name should include your tracing account name. For example, if your tracing account name is “tracing”, your metrics account could be named “tracing-metrics”.
Pull the Docker image for the OpenTelemetry collector
If you are already running a Layerlog Docker image logzio/otel-collector-traces, the new image logzio/otel-collector-spm will replace it.
In the same Docker network as your application:
docker pull logzio/otel-collector-spm
This integration only works with an otel-contrib image. The logzio/otel-collector-traces image is based on otel-contrib.
Run the container
When running on a Linux host, use the --network host flag so that the collector's ports are available on the host network:
docker run --name logzio-spm \
-e LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>> \
-e LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>> \
-e LOGZIO_METRICS_TOKEN=<<SPM-METRICS-SHIPPING-TOKEN>> \
--network host \
logzio/otel-collector-spm
When running on macOS or Windows hosts, publish the ports using the -p flag:
docker run --name logzio-spm \
-e LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>> \
-e LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>> \
-e LOGZIO_METRICS_TOKEN=<<SPM-METRICS-SHIPPING-TOKEN>> \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
logzio/otel-collector-spm
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
Optional parameters
If required, you can add the following optional parameters as environment variables when running the container:
| Parameter | Description |
|---|---|
| LATENCY_HISTOGRAM_BUCKETS | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s |
| SPAN_METRICS_DIMENSIONS | Each metric will have at least the following dimensions, which are common across all spans: Service name, Operation, Span kind, Status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension is omitted from the metric. |
| SPAN_METRICS_DIMENSIONS_CACHE_SIZE | The maximum number of items in the metric_key_to_dimensions_cache. Default: 10000. |
| AGGREGATION_TEMPORALITY | Defines the aggregation temporality of the generated metrics. One of either cumulative or delta. Default: cumulative. |
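For example, a run command that overrides some of these defaults could look like the following sketch. The parameter values shown are illustrative only, not recommendations:
docker run --name logzio-spm \
-e LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>> \
-e LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>> \
-e LOGZIO_METRICS_TOKEN=<<SPM-METRICS-SHIPPING-TOKEN>> \
-e SPAN_METRICS_DIMENSIONS="region,http.url" \
-e SPAN_METRICS_DIMENSIONS_CACHE_SIZE=5000 \
--network host \
logzio/otel-collector-spm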
Run the application
Run the application to generate traces.
Check Layerlog for your metrics
Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.
Overview
You can use a Helm chart to ship spans and span metrics from your OpenTelemetry installation to Layerlog. Helm is a tool for managing packages of pre-configured Kubernetes resources, known as charts.
logzio-otel-spm allows you to ship traces from your Kubernetes cluster to Layerlog with the OpenTelemetry collector.
This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Layerlog Helm charts is logzio-helm.
Standard configuration
Before you begin, you’ll need:
- An application instrumented with OpenTelemetry or another supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- A metrics account setup
- An active account with Layerlog
Deploy the Helm chart
If you are already running the logzio/otel-collector-traces image, the new logzio/otel-collector-spm image deployed by this chart will replace it.
Add logzio-helm repo as follows:
helm repo add logzio-helm https://logzio.github.io/logzio-helm/logzio-otel-spm
helm repo update
Define the logzio-otel-traces service name
In most cases, the service name will be logzio-otel-traces.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and svc.cluster.local is your cluster domain name.
If you are not sure what your cluster domain name is, you can run the following command to look it up:
kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'
It will deploy a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove this pod after it has returned the cluster domain name.
Point your traces exporter to logzio-otel-traces
In the instrumentation code of your application, point the exporter to the service name defined in the previous step, for example http://logzio-otel-traces.default.svc.cluster.local:4317.
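How you set this endpoint depends on your instrumentation. As one illustrative option, if your application uses an OpenTelemetry SDK that honors the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable, you could set it in the container spec of your application's Deployment, as in the sketch below. The namespace and service name assume the defaults described above:
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://logzio-otel-traces.default.svc.cluster.local:4317"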
Run the Helm deployment command
helm install \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
logzio-otel-spm logzio-helm/logzio-otel-spm
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.
<<LOGZIO_ACCOUNT_REGION_CODE>> - Your layerlog.com account region code. Defaults to “us”. Required only if your layerlog.com region is different from US East.
Check Layerlog for your traces
Give your traces some time to get from your system to ours, then open Layerlog.
Customizing Helm chart parameters
Configure customization options
You can use the following options to update the Helm chart parameters:
- Specify parameters using the --set key=value[,key=value] argument to helm install.
- Edit the values.yaml.
- Override default values with your own my_values.yaml and apply it in the helm install command.
Example
helm install logzio-otel-traces logzio-helm/logzio-otel-traces -f my_values.yaml
Optional parameters
If required, you can configure the following optional parameters for the chart:
| Parameter | Description |
|---|---|
| config.processors.spanmetrics.latency_histogram_buckets | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s |
| config.processors.spanmetrics.dimensions | Each metric will have at least the following dimensions, which are common across all spans: Service name, Operation, Span kind, Status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension is omitted from the metric. |
| config.processors.spanmetrics.dimensions_cache_size | The maximum number of items in the metric_key_to_dimensions_cache. Default: 10000. |
| config.processors.spanmetrics.aggregation_temporality | Defines the aggregation temporality of the generated metrics. One of either cumulative or delta. Default: cumulative. |
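For example, you could override a couple of these defaults in a my_values.yaml file and apply it with the -f flag shown in the example above. The values below are illustrative only, and the exact keys may differ between chart versions:
config:
  processors:
    spanmetrics:
      latency_histogram_buckets: [2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s]
      dimensions_cache_size: 5000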
Uninstalling the Chart
The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.
To uninstall the logzio-otel-traces deployment, use the following command:
helm uninstall logzio-otel-traces