Seven years is a long time in observability. Since Prometheus 2.0 landed in 2017, the ecosystem has been transformed by cloud-native adoption, the rise of distributed tracing, and the emergence of OpenTelemetry as the de facto standard for instrumentation. Prometheus 3.0, released in November 2024, is the project’s answer to that transformation — and its most significant change is the native ability to ingest OpenTelemetry metrics directly, without an intermediary collector standing in the way.
This article goes deep on what Prometheus 3.0 actually changes for platform engineers and cloud architects who are running — or planning to run — OTel-instrumented workloads alongside Prometheus-based monitoring stacks. We will cover the native OTLP ingestion endpoint, UTF-8 metric name support, Remote Write 2.0, migration considerations, and the architectural patterns that still make sense even when native OTLP is available.
What Changed in Prometheus 3.0: The OTel-Relevant Picture
Prometheus 3.0 ships a substantial set of changes. Not all of them are equally relevant to OpenTelemetry integration, so let’s focus on what actually moves the needle for OTel users before diving into each area in detail.
Native OTLP Ingestion
The flagship feature: Prometheus 3.0 ships with a built-in OTLP receiver that exposes an HTTP endpoint accepting metrics in the OpenTelemetry Protocol format. Applications instrumented with any OTel SDK can now push metrics directly to Prometheus without routing through an OpenTelemetry Collector. This is not a sidecar, not a plugin, not an external adapter — it is a first-class endpoint in the Prometheus binary itself.
UTF-8 Metric Names
Prometheus historically restricted metric names to [a-zA-Z_:][a-zA-Z0-9_:]*. OpenTelemetry uses dots and slashes in metric names by convention — http.server.request.duration is a canonical OTel metric name. Prometheus 3.0 lifts this restriction and supports arbitrary UTF-8 characters in metric names and label names, which is the single most important compatibility change for OTel interoperability.
Remote Write 2.0
Remote Write 2.0 replaces the original protocol with a more efficient protobuf encoding that interns repeated strings, adds native histogram support in the wire format, and reduces bandwidth consumption significantly for large-scale deployments. If you are federating metrics to Thanos, Mimir, or Cortex, this matters for operational cost.
New UI
The Prometheus web UI has been completely rewritten. The new UI uses React, supports metric metadata exploration, and provides a significantly improved query-building experience. This is a quality-of-life improvement rather than an architectural change, but it reduces the dependency on external tools like Grafana for ad-hoc investigation.
Breaking Changes Summary
Prometheus 3.0 removes several features that were deprecated in 2.x. The most operationally significant are: removal of long-deprecated flags such as storage.tsdb.allow-overlapping-blocks, removal of certain legacy storage format options, changes to default scrape behavior, and stricter validation of configuration that was previously silently accepted. We cover a migration checklist later in this article.
The OTLP Receiver: How It Works and What It Accepts
The OTLP receiver in Prometheus 3.0 is implemented as an optional feature that must be explicitly enabled. Once enabled, it exposes an HTTP endpoint at /api/v1/otlp/v1/metrics that accepts protobuf-encoded OTLP ExportMetricsServiceRequest payloads — the same wire format used by the OpenTelemetry Collector’s OTLP exporter.
What It Accepts (and What It Does Not)
This is critical to understand before you architect around native OTLP ingestion: Prometheus 3.0 OTLP support is metrics-only. It does not accept traces or logs. OTLP is a unified protocol covering all three signals, but Prometheus is a metrics store — the receiver handles only the metrics portion of the OTLP specification.
Supported metric types in the OTLP receiver:
- Gauge — maps directly to a Prometheus Gauge
- Sum (monotonic) — maps to a Prometheus Counter
- Sum (non-monotonic) — maps to a Prometheus Gauge
- Histogram (explicit bucket) — maps to a Prometheus Histogram
- ExponentialHistogram — maps to Prometheus Native Histograms (still experimental; enabled with the native-histograms feature flag)
- Summary — maps to a Prometheus Summary
Resource attributes from the OTLP payload — things like service.name, k8s.pod.name, cloud.region — are not copied wholesale onto every series. By default, service.namespace and service.name are combined into the job label, service.instance.id becomes the instance label, and the remaining resource attributes are stored on a separate target_info series to keep per-series cardinality under control. The promote_resource_attributes setting lets you copy selected attributes onto every series as regular labels.
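To make the default strategy concrete, here is a toy sketch of the mapping — illustrative Python, not Prometheus internals; the helper name and the exact precedence rules are simplifications:

```python
# Illustrative sketch (not Prometheus internals): how OTLP resource
# attributes map to Prometheus labels under the default strategy plus
# a promote_resource_attributes list.
def map_resource_attributes(resource_attrs, promoted):
    labels = {
        # service.namespace and service.name combine into "job"
        "job": "/".join(
            v for v in (resource_attrs.get("service.namespace"),
                        resource_attrs.get("service.name")) if v
        ),
        # service.instance.id becomes "instance"
        "instance": resource_attrs.get("service.instance.id", ""),
    }
    # Promoted attributes are copied onto every series as plain labels
    labels.update({k: v for k, v in resource_attrs.items() if k in promoted})
    # Everything else lands on the separate target_info series
    target_info = {k: v for k, v in resource_attrs.items()
                   if k not in promoted and not k.startswith("service.")}
    return labels, target_info

labels, target_info = map_resource_attributes(
    {"service.namespace": "platform", "service.name": "my-api",
     "service.instance.id": "pod-1234", "k8s.pod.name": "my-api-abc",
     "cloud.region": "eu-west-1"},
    promoted={"k8s.pod.name"},
)
# labels -> {"job": "platform/my-api", "instance": "pod-1234",
#            "k8s.pod.name": "my-api-abc"}
```

The point of the split: job and instance identify the source cheaply, promoted attributes cost one label per series, and everything else is queryable via a join against target_info.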
Enabling the OTLP Receiver
Enabling native OTLP ingestion requires two things: a command-line flag and a configuration block in prometheus.yml.

Start the Prometheus binary with the flag (Prometheus 3.0 renamed the 2.x `--enable-feature=otlp-write-receiver` feature flag to a standard flag):

```shell
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.enable-otlp-receiver
```

Then add the OTLP receiver configuration to your prometheus.yml:
```yaml
# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

otlp:
  # Promote these OTLP resource attributes to Prometheus labels
  promote_resource_attributes:
    - service.name
    - service.namespace
    - service.instance.id
    - k8s.namespace.name
    - k8s.pod.name
    - k8s.node.name
    - cloud.region
    - deployment.environment
```

With this configuration, Prometheus listens on port 9090 (the default) and accepts OTLP metrics at http://<prometheus-host>:9090/api/v1/otlp/v1/metrics.
Resource Attribute Promotion Strategy
The promote_resource_attributes list deserves careful thought. OTLP carries rich resource-level context — every metric payload includes a ResourceMetrics object with attributes describing the source: service name, version, environment, Kubernetes pod, node, cluster, cloud provider details, and more. Prometheus labels are flat key-value pairs on each time series. Promoting too many resource attributes explodes cardinality; promoting too few loses important context.
A pragmatic starting list for Kubernetes deployments:
```yaml
otlp:
  promote_resource_attributes:
    - service.name            # Critical: identifies the service
    - service.namespace       # Logical grouping
    - deployment.environment  # prod/staging/dev
    - k8s.namespace.name      # Kubernetes namespace
    - k8s.pod.name            # Pod-level cardinality — consider omitting at high scale
    - k8s.node.name           # Useful for infrastructure correlation
```

Avoid blindly promoting k8s.pod.name at scale — in a cluster with thousands of short-lived pods, this creates significant cardinality pressure. Prefer service.name and service.namespace for most alerting use cases, reserving pod-level labels for debugging dashboards.
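A back-of-the-envelope estimate makes that pressure concrete. The numbers below are illustrative assumptions, not measurements:

```python
# Rough series-count estimate: each promoted label multiplies cardinality.
# All numbers here are illustrative assumptions, not benchmarks.
histogram_metrics = 20      # distinct histogram metric names per service
buckets_per_histogram = 12  # explicit buckets; +2 for _sum and _count
services = 50

# Promoting only service.name: one label set per service
series_service_level = histogram_metrics * (buckets_per_histogram + 2) * services

# Promoting k8s.pod.name too, with ~40 pods per service (including churn)
pods_per_service = 40
series_pod_level = series_service_level * pods_per_service

print(series_service_level)  # 14000
print(series_pod_level)      # 560000
```

A 40x jump in active series for the same fleet is the difference between a comfortable single Prometheus and one fighting memory limits.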
UTF-8 Metric Names: Why This Is the Real Game-Changer
To appreciate why UTF-8 metric name support matters so much, you need to understand the friction it eliminates. OpenTelemetry semantic conventions define metric names using dots as namespace separators. The canonical HTTP server duration metric is http.server.request.duration. The canonical database query duration is db.client.operation.duration. These names are standardized across languages and frameworks — your Go service and your Java service and your Python service all emit the same metric name when instrumented with OTel.
Prometheus 2.x could not store these names. The dots are illegal characters in Prometheus metric naming. Every OTel-to-Prometheus bridge — the OpenTelemetry Collector’s Prometheus exporter, prom-client compatibility layers, the older prometheusremotewrite exporter — had to translate these names, typically by replacing dots with underscores: http_server_request_duration.
This translation is lossy and creates multiple problems:
- Name collisions: `http.server.request_duration` and `http.server.request.duration` both become `http_server_request_duration`
- Dashboard breakage: Grafana dashboards built against OTel semantic conventions don’t work against translated Prometheus metrics without modification
- Cross-signal correlation: trace attributes use dot notation; when metric names differ, automated correlation tools lose the thread
- Vendor lock-in pressure: teams end up with separate naming conventions for “Prometheus metrics” vs “OTel metrics” and maintain both
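The collision is easy to demonstrate with the translation rule itself — a sketch of the common dots-to-underscores sanitization, not any particular exporter’s code:

```python
import re

def legacy_sanitize(name: str) -> str:
    # The classic OTel-to-Prometheus bridge rule: replace every character
    # that is illegal in a 2.x metric name with an underscore.
    return re.sub(r"[^a-zA-Z0-9_:]", "_", name)

a = legacy_sanitize("http.server.request.duration")
b = legacy_sanitize("http.server.request_duration")
print(a)       # http_server_request_duration
print(a == b)  # True — two distinct OTel names collapse into one
```

Once collapsed, nothing downstream can tell the two originals apart, which is why the fix had to happen in Prometheus itself rather than in the bridges.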
Prometheus 3.0 with UTF-8 support stores http.server.request.duration natively. No translation. No collision. The metric name you instrument with is the metric name you query.
Enabling UTF-8 Metric Names
In Prometheus 3.0, UTF-8 metric and label names are enabled by default — no separate feature flag is required (late 2.x preview builds gated this behind a flag). What changes is the query side: PromQL must quote metric names that contain characters outside the legacy character set:
```promql
# Legacy metric name — unquoted works fine
http_server_requests_total

# OTel metric name with dots — requires quoting
{__name__="http.server.request.duration"}

# Or the shorthand quoted-selector syntax in Prometheus 3.0
{"http.server.request.duration", service_name="api-gateway"}
```

The PromQL parser in Prometheus 3.0 handles quoted metric names as a first-class construct. Grafana’s support for this syntax is version-dependent — verify that your Grafana release handles quoted names before deploying.
OTel SDK to Prometheus 3.0 Directly: No Collector Required
For teams that only need to get application metrics into Prometheus, native OTLP ingestion enables a dramatically simpler architecture. Here’s what it looks like with different OTel SDKs.
Go (OpenTelemetry SDK)
```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	"go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func initMetrics(ctx context.Context) (*metric.MeterProvider, error) {
	res, err := resource.New(ctx,
		resource.WithAttributes(
			semconv.ServiceName("my-api"),
			semconv.ServiceNamespace("platform"),
			semconv.DeploymentEnvironment("production"),
		),
	)
	if err != nil {
		return nil, err
	}

	// Point directly at the Prometheus 3.0 OTLP endpoint
	exporter, err := otlpmetrichttp.New(ctx,
		otlpmetrichttp.WithEndpoint("prometheus:9090"),
		otlpmetrichttp.WithURLPath("/api/v1/otlp/v1/metrics"),
		otlpmetrichttp.WithInsecure(), // Use WithTLSClientConfig for production
	)
	if err != nil {
		return nil, err
	}

	provider := metric.NewMeterProvider(
		metric.WithResource(res),
		metric.WithReader(
			metric.NewPeriodicReader(exporter,
				metric.WithInterval(30*time.Second),
			),
		),
	)
	otel.SetMeterProvider(provider)
	return provider, nil
}
```

Python (OpenTelemetry SDK)
```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE

resource = Resource.create({
    SERVICE_NAME: "my-api",
    SERVICE_NAMESPACE: "platform",
    "deployment.environment": "production",
})

exporter = OTLPMetricExporter(
    endpoint="http://prometheus:9090/api/v1/otlp/v1/metrics",
)

reader = PeriodicExportingMetricReader(
    exporter,
    export_interval_millis=30_000,
)

provider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(provider)

# Use the meter
meter = metrics.get_meter("my-api")
request_counter = meter.create_counter(
    name="http.server.request.count",
    description="Total HTTP server requests",
    unit="1",
)
request_duration = meter.create_histogram(
    name="http.server.request.duration",
    description="HTTP server request duration",
    unit="s",
)
```

Java (OpenTelemetry SDK with Spring Boot)
```properties
# application.properties (Spring Boot with OTel auto-instrumentation)
otel.service.name=my-api
otel.resource.attributes=service.namespace=platform,deployment.environment=production

# Configure the OTLP exporter to push directly to Prometheus
otel.metrics.exporter=otlp
otel.exporter.otlp.metrics.endpoint=http://prometheus:9090/api/v1/otlp/v1/metrics
otel.exporter.otlp.metrics.protocol=http/protobuf

# Export interval
otel.metric.export.interval=30000
```

With Spring Boot and the OTel Java agent, no code changes are required beyond configuration — the agent instruments your HTTP server, database clients, and messaging systems automatically and pushes metrics using the names defined in OTel semantic conventions.
OTel Collector to Prometheus 3.0: When You Need the Intermediary
Native OTLP ingestion is compelling, but the OpenTelemetry Collector remains relevant for a significant set of use cases. Understanding when each pattern is appropriate is the core architectural decision you will face when adopting Prometheus 3.0 in an OTel environment.
Pattern 1: OTel Collector as Fan-Out Gateway
When you need to send metrics to multiple backends simultaneously — Prometheus for alerting, a long-term store like Thanos for historical analysis, and a commercial observability platform for full-stack correlation — the OTel Collector handles fan-out efficiently. Applications push once to the Collector; the Collector distributes to all backends.
```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 10s
    send_batch_size: 1000
  memory_limiter:
    check_interval: 1s
    limit_mib: 512

exporters:
  # Push to Prometheus 3.0 via OTLP
  otlphttp/prometheus:
    endpoint: http://prometheus:9090/api/v1/otlp
    tls:
      insecure: true

  # Fan-out to Thanos via remote_write
  prometheusremotewrite/thanos:
    endpoint: http://thanos-receive:10908/api/v1/receive
    resource_to_telemetry_conversion:
      enabled: true

  # Fan-out to a commercial backend
  otlp/datadog:
    endpoint: https://otel-intake.datadoghq.com
    headers:
      DD-API-KEY: "${DD_API_KEY}"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/prometheus, prometheusremotewrite/thanos, otlp/datadog]
```

Pattern 2: Collector for Metric Transformation
The OTel Collector’s transform processor and metricstransform processor allow you to reshape metrics before they reach Prometheus: rename labels, add static attributes, filter out high-cardinality series, aggregate metrics to reduce storage cost, or apply unit conversions. These operations are not available in Prometheus’s native OTLP receiver.
```yaml
processors:
  transform/metrics:
    metric_statements:
      - context: datapoint
        statements:
          # Drop datapoint attributes whose keys match internal.*
          - delete_matching_keys(attributes, "internal.*")
          # Normalize environment label values
          - set(attributes["deployment.environment"], "prod") where attributes["deployment.environment"] == "production"

  filter/drop_debug:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - '.*\.debug\..*'
          - 'runtime\.go\.internal\..*'

  metricstransform:
    transforms:
      # Rename a metric to match your existing Prometheus naming convention
      - include: http.server.request.duration
        action: update
        new_name: http_server_request_duration_seconds
```

Pattern 3: Collector for Traces and Logs (Always Required)
If your architecture includes traces and logs alongside metrics — and in 2025 it almost certainly does — you need an OTel Collector regardless of what you do with metrics. Prometheus does not accept traces or logs. Jaeger, Tempo, and Loki all have their own ingestion protocols. The Collector is the universal routing layer for the three pillars of observability.
In this architecture, it is usually simpler to route all three signals through the Collector and let it push metrics to Prometheus via OTLP or remote_write, rather than splitting metrics to go directly and everything else through the Collector.
When to Use Native OTLP vs. OTel Collector: Decision Framework
| Scenario | Native OTLP | OTel Collector |
|---|---|---|
| Single metrics backend (Prometheus only) | Preferred | Overkill |
| Multiple metrics backends | Not sufficient | Required |
| Traces + Logs in scope | Not applicable | Required |
| Metric transformation/filtering needed | Not supported | Required |
| Simple Kubernetes-native deployment | Preferred | Additional complexity |
| Air-gapped / constrained environments | Preferred (fewer components) | Consider carefully |
| Mixed OTel + legacy Prometheus targets | Works alongside scraping | Can normalize naming |
| High-volume, need batching/buffering | Limited control | Preferred |
The pragmatic recommendation for most platform engineering teams: if you are already running the OTel Collector (and you should be if traces are in scope), continue routing metrics through it. Use the Collector’s otlphttp exporter to push to Prometheus 3.0. Reserve the direct SDK-to-Prometheus pattern for simple services where the Collector would be the only reason to add complexity.
Remote Write 2.0: What Changes for Existing Setups
Remote Write 2.0 is a significant protocol upgrade with real operational implications for teams using Prometheus as a metrics source for long-term storage systems like Thanos, Mimir, VictoriaMetrics, or Cortex.
Key Protocol Changes
- A new protobuf message with string interning — label names and values are deduplicated into a symbol table before snappy compression, typically cutting wire size substantially for large metric batches compared with Remote Write 1.0
- Native histogram support in the wire format — exponential histograms can now be forwarded without converting to classic histograms, preserving full resolution
- Metadata forwarding — metric type and unit information is now transmitted alongside samples, enabling better downstream processing
- Created timestamps — the timestamp at which a counter was created is forwarded, enabling more accurate rate calculations across restarts
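The interning idea can be sketched in a few lines — a conceptual illustration of symbol-table encoding, not the actual io.prometheus.write.v2.Request layout:

```python
# Conceptual sketch of string interning as used by Remote Write 2.0:
# repeated label strings are stored once in a symbol table and each
# series references them by index. Not the real wire format.
def intern_series(series_labels):
    symbols, index = [], {}

    def sym(s):
        if s not in index:
            index[s] = len(symbols)
            symbols.append(s)
        return index[s]

    encoded = [[sym(k) for pair in labels.items() for k in pair]
               for labels in series_labels]
    return symbols, encoded

series = [
    {"__name__": "http.server.request.duration", "job": "platform/my-api", "le": "0.5"},
    {"__name__": "http.server.request.duration", "job": "platform/my-api", "le": "1"},
]
symbols, encoded = intern_series(series)
print(len(symbols))  # 7 unique strings instead of 12 repeated ones
```

Histogram bucket series share almost all of their label strings, which is why the savings grow with batch size.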
Configuring Remote Write 2.0
```yaml
# prometheus.yml
remote_write:
  - url: "http://thanos-receive:10908/api/v1/receive"
    # Opt into the Remote Write 2.0 message explicitly
    protobuf_message: io.prometheus.write.v2.Request
    send_native_histograms: true
    metadata_config:
      send: true
      send_interval: 1m
    queue_config:
      capacity: 10000
      max_shards: 200
      max_samples_per_send: 2000
      batch_send_deadline: 5s
```

Remote Write 2.0 uses protocol content negotiation — a compatible receiver advertises support, and Prometheus falls back to RW 1.0 when it is absent. This means upgrades are generally backward-compatible. Verify that your receiving system (Thanos Receive 0.35+, Mimir 2.12+, VictoriaMetrics 1.98+) supports RW 2.0 before expecting the benefits.
Migration from Prometheus 2.x: Breaking Changes Checklist
Upgrading from Prometheus 2.x to 3.0 requires attention to several breaking changes. This checklist covers the operationally significant ones for teams running production Prometheus deployments.
Configuration Changes
- Changed: `query.lookback-delta` default — queries that relied on the previous 5m default may return different results. Audit alerting rules that use instant queries on counters.
- Changed: `remote_write` queue options — `remote_write[].queue_config.capacity` semantics changed, and deprecated options were removed. Review and update queue configurations.
- Removed: the `storage.tsdb.allow-overlapping-blocks` flag — overlapping-blocks handling is now automatic. Remove this flag from your startup scripts.
- Changed: scrape protocol defaults — Prometheus 3.0 defaults to the OpenMetrics format when targets support it. This enables native histograms but may surface parsing differences. The `no-default-scrape-port` feature flag has been removed because its behavior is now the default: ports are no longer appended to target addresses automatically.
- Changed: Agent mode — if using Prometheus Agent mode, review the updated configuration options for WAL management.
PromQL Changes
- Stricter parsing — some previously accepted but technically invalid PromQL expressions now fail. Run your alerting rules through `promtool check rules` against a Prometheus 3.0 binary before cutover.
- Native histogram functions — new functions like `histogram_fraction()` work on native histograms, and `histogram_quantile()` accepts them directly. Existing dashboard queries using `histogram_quantile()` on classic histograms continue to work unchanged.
Storage Compatibility
Prometheus 3.0 can read existing 2.x TSDB data. The upgrade path does not require a data migration. However, Prometheus 2.x cannot read data blocks written by 3.0 (downgrade is not supported without data loss after any writes have occurred). Take a snapshot before upgrading if you need rollback capability:
```shell
# Take a TSDB snapshot before upgrading (requires --web.enable-admin-api)
curl -X POST http://prometheus:9090/api/v1/admin/tsdb/snapshot

# Verify the snapshot exists
ls /prometheus/snapshots/
```

Pre-Upgrade Validation Steps
```shell
# 1. Validate configuration against Prometheus 3.0
docker run --rm -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus:v3.0.0 \
  check config /etc/prometheus/prometheus.yml

# 2. Validate alerting rules
docker run --rm -v $(pwd)/rules:/etc/prometheus/rules \
  --entrypoint promtool \
  prom/prometheus:v3.0.0 \
  check rules /etc/prometheus/rules/*.yml

# 3. Run in parallel (shadow mode) before full cutover:
#    deploy Prometheus 3.0 alongside 2.x, scraping the same targets,
#    and compare query results between versions using promtool query range.
```

Practical Kubernetes Deployment Example
Here is a production-ready Kubernetes deployment of Prometheus 3.0 with OTLP ingestion enabled, suitable as a starting point for platform engineering teams.
Prometheus 3.0 ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
      external_labels:
        cluster: production
        region: eu-west-1

    otlp:
      promote_resource_attributes:
        - service.name
        - service.namespace
        - deployment.environment
        - k8s.namespace.name
        - k8s.pod.name

    rule_files:
      - /etc/prometheus/rules/*.yml

    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)

    remote_write:
      - url: http://thanos-receive.monitoring.svc.cluster.local:10908/api/v1/receive
        send_native_histograms: true
        metadata_config:
          send: true
```

Prometheus 3.0 Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: prom/prometheus:v3.0.0
          args:
            - --config.file=/etc/prometheus/prometheus.yml
            - --storage.tsdb.path=/prometheus/data
            - --storage.tsdb.retention.time=15d
            - --web.enable-lifecycle
            - --web.enable-admin-api
            - --web.enable-otlp-receiver
            - --enable-feature=native-histograms
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /etc/prometheus
            - name: data
              mountPath: /prometheus/data
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 2000m
              memory: 8Gi
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: http
            initialDelaySeconds: 30
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /-/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: prometheus-config
        - name: data
          persistentVolumeClaim:
            claimName: prometheus-data
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  selector:
    app: prometheus
  ports:
    - name: http
      port: 9090
      targetPort: http
  type: ClusterIP
```

Configuring Applications to Push OTLP
With this deployment, any application in the cluster can push OTLP metrics by setting the following environment variables (works with any OTel SDK supporting OTLP HTTP):
```yaml
env:
  # NAMESPACE must be defined before it is referenced via $(NAMESPACE) below
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: OTEL_SERVICE_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['app']
  - name: OTEL_SERVICE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: OTEL_METRICS_EXPORTER
    value: "otlp"
  - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
    value: "http://prometheus.monitoring.svc.cluster.local:9090/api/v1/otlp/v1/metrics"
  - name: OTEL_EXPORTER_OTLP_METRICS_PROTOCOL
    value: "http/protobuf"
  - name: OTEL_METRIC_EXPORT_INTERVAL
    value: "30000"
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=production,k8s.namespace.name=$(NAMESPACE)"
```

This approach works particularly well in environments using the OTel Operator for Kubernetes, where the Instrumentation CRD can inject these environment variables automatically into pods based on namespace or pod label selectors — zero-touch instrumentation with native Prometheus storage.
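With the Operator, that injection might look like the following sketch. It assumes the opentelemetry.io/v1alpha1 Instrumentation CRD; the resource name and namespace are illustrative, and you should verify the field set against your Operator version:

```yaml
# Hypothetical example of an OTel Operator Instrumentation resource
# injecting the direct-to-Prometheus OTLP settings shown above.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: prometheus-direct
  namespace: production
spec:
  env:
    - name: OTEL_METRICS_EXPORTER
      value: otlp
    - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
      value: http://prometheus.monitoring.svc.cluster.local:9090/api/v1/otlp/v1/metrics
    - name: OTEL_EXPORTER_OTLP_METRICS_PROTOCOL
      value: http/protobuf
```

Pods then opt in with an annotation such as instrumentation.opentelemetry.io/inject-java: "true" on the pod template, and the Operator injects the environment at admission time.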
Frequently Asked Questions
Can I use Prometheus 3.0 OTLP ingestion for traces and logs?
No. The Prometheus 3.0 OTLP receiver handles only metrics. Prometheus is a metrics store — it has no data model for traces or logs. For traces, you need a backend like Jaeger or Grafana Tempo. For logs, you need Loki, Elasticsearch, or a similar system. The OTel Collector is the appropriate routing layer when you need to send all three signals to their respective backends from a single application-side push endpoint.
Does the kube-prometheus-stack Helm chart support Prometheus 3.0?
Yes, with caveats. The kube-prometheus-stack chart updated its Prometheus image to 3.0 starting with chart version 66.0.0. However, some bundled recording rules and alerting rules may need adjustment for the PromQL changes and default behavioral differences. The Prometheus Operator itself (version 0.78+) has been updated to support the new configuration options, including the otlp configuration block. If you are managing Prometheus via the Operator, you will configure OTLP settings through the Prometheus CRD — spec.additionalArgs for the receiver flag, plus the OTLP-related fields your Operator version exposes on the CRD.
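Under the Operator, enabling the receiver might look like this sketch — it assumes the prometheus-operator Prometheus CRD’s additionalArgs field, and the resource name is illustrative:

```yaml
# Hypothetical Prometheus CR fragment enabling the OTLP receiver
# via additionalArgs (flag name given without leading dashes).
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  image: prom/prometheus:v3.0.0
  additionalArgs:
    - name: web.enable-otlp-receiver
```

Check your Operator’s CRD reference before relying on this shape; additionalArgs passes flags through unvalidated, so a typo surfaces only at Prometheus startup.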
What happens to existing Prometheus 2.x metric names when I enable UTF-8 support?
Existing metrics with underscore-based names continue to work exactly as before. Enabling UTF-8 support is purely additive — it allows the storage and querying of metric names containing dots and other UTF-8 characters, but it does not rename or modify existing metrics. Your existing dashboards, alerting rules, and recording rules continue to function without modification. Only metrics ingested via OTLP (or exposed by exporters using OTel naming conventions) will use dot-separated names.
How does native OTLP ingestion affect Prometheus’s pull model?
It coexists with it. Prometheus 3.0 continues to scrape targets via the pull model on the same 9090 port. The OTLP endpoint is an additional ingestion path, not a replacement for scraping. You can have a Prometheus instance simultaneously scraping Kubernetes pods via service discovery and receiving OTLP push metrics from applications — both are stored in the same TSDB and queryable via the same PromQL interface. This hybrid approach is common during migrations, where legacy components are scraped and new OTel-instrumented services push via OTLP.
Is the Prometheus 3.0 OTLP receiver suitable for high-volume production workloads?
For moderate volumes, yes. The OTLP receiver is synchronous — the HTTP request completes only after the samples are written to the WAL. Under very high ingestion rates (hundreds of thousands of samples per second), this can create back-pressure that affects application latency. The OTel Collector handles this better through internal buffering, retry queues, and batch processing. For high-volume scenarios, the recommended pattern is: applications push to OTel Collector (which acknowledges immediately and buffers), Collector pushes to Prometheus via OTLP or remote_write in optimized batches. For the majority of Kubernetes workloads — dozens to hundreds of services with typical metric cardinality — the native OTLP receiver performs well without an intermediary.