Problem
When using OpenTelemetry for distributed tracing (either via OTLPIntegration with traces_sample_rate=0 or via instrumenter="otel" + SentrySpanProcessor), Sentry's legacy profiling (profiles_sample_rate) stops working. Profiles are created but contain 0 samples and are discarded by Profile.valid().
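For reference, the failing combination looks roughly like this (the OTLPIntegration import path is written from memory and may differ by SDK version):

```python
import sentry_sdk
from sentry_sdk.integrations.opentelemetry import OTLPIntegration  # path may vary

sentry_sdk.init(
    dsn="...",
    traces_sample_rate=0,        # tracing fully delegated to OTel
    profiles_sample_rate=1.0,    # legacy profiling - never activates
    integrations=[OTLPIntegration()],
)
```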
Presumed root cause
The profiler lifecycle (Profile.start() / Profile.stop()) is managed by framework integrations (DjangoIntegration, CeleryIntegration) which use Profile as a context manager (__enter__/__exit__). When using OTel for tracing:
- OTLPIntegration with traces_sample_rate=0: no Sentry transactions are created, so the profiler never activates - scope.start_transaction() is never called.
- instrumenter="otel" + SentrySpanProcessor: transactions ARE created via start_transaction(instrumenter="otel"), and Profile objects are attached (transaction._profile), but profile.start() is never called because DjangoIntegration/CeleryIntegration return NoOpSpan and never enter the Profile context manager.
The result: Profile.unique_samples == 0 → Profile.valid() returns False → profile is discarded.
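To make the failure mode concrete, here is a toy model of the legacy profiler lifecycle (simplified stand-ins, not the real sentry_sdk.profiler classes): samples only accumulate between start() and stop(), so a Profile that is attached to a transaction but never started can never pass valid().

```python
class Profile:
    """Toy stand-in for the legacy transaction profiler."""

    def __init__(self):
        self.active = False
        self.unique_samples = 0

    def start(self):
        # In the real SDK this registers with the ThreadScheduler,
        # which then records a sample on each sampler tick.
        self.active = True

    def stop(self):
        self.active = False

    def sample(self):
        # Stand-in for a scheduler tick: only active profiles record samples.
        if self.active:
            self.unique_samples += 1

    def valid(self):
        # The key check for this issue: zero samples -> profile discarded.
        return self.unique_samples > 0


# Native-tracing path: the framework integration enters the Profile
# context manager, so the profiler is running while ticks occur.
native = Profile()
native.start()
native.sample()
native.stop()
assert native.valid()

# OTel path: the Profile is attached but never started, so scheduler
# ticks record nothing and the profile is discarded.
otel = Profile()
otel.sample()  # profiler not active -> no sample recorded
assert not otel.valid()
```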
Relevant code paths
- start_transaction(): creates the Profile and attaches it to the transaction, but does NOT call profile.start()
- Framework integrations (Profile context manager __enter__): profile.start() is called, which registers the profile with the ThreadScheduler via scheduler.start_profiling(self); __exit__ calls profile.stop()
- Client envelope assembly: checks profile.valid() and attaches the profile to the envelope, but does NOT call profile.stop() (assumes the context manager already did)
- SentrySpanProcessor: calls start_transaction() but bypasses the Profile context manager
Continuous profiling (profile_lifecycle="trace") also affected
try_profile_lifecycle_trace_start() is called inside scope.start_transaction(), which requires transaction.sampled=True. With OTLPIntegration + traces_sample_rate=0, this is never reached.
Our use case
We're migrating from sentry-sdk native tracing to OpenTelemetry for distributed tracing across services (PsycopgInstrumentor, HTTPXClientInstrumentor, CeleryInstrumentor, RequestsInstrumentor, custom ViewNameSpanMiddleware). We want to keep Sentry's profiling working alongside OTel tracing - similar to how sentry-ruby supports this via config.instrumenter = :otel.
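For context, our OTel bootstrap follows the documented instrumenter="otel" pattern, roughly as below (instrumentor class names per the opentelemetry-instrumentation-* packages; ViewNameSpanMiddleware is our own code and omitted here):

```python
import sentry_sdk
from sentry_sdk.integrations.opentelemetry import SentryPropagator, SentrySpanProcessor
from opentelemetry import trace
from opentelemetry.propagate import set_global_textmap
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.celery import CeleryInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

sentry_sdk.init(
    dsn="...",
    instrumenter="otel",       # hand span creation over to OTel
    traces_sample_rate=1.0,    # sampling decisions delegated to OTel
    profiles_sample_rate=1.0,  # legacy profiling we want to keep
)

# Route OTel spans through Sentry.
provider = TracerProvider()
provider.add_span_processor(SentrySpanProcessor())
trace.set_tracer_provider(provider)
set_global_textmap(SentryPropagator())

HTTPXClientInstrumentor().instrument()
CeleryInstrumentor().instrument()
# ...PsycopgInstrumentor, RequestsInstrumentor, ViewNameSpanMiddleware likewise
```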
Our workaround
We use instrumenter="otel" + SentrySpanProcessor + two monkey-patches in our OTel bootstrap:
```python
import sentry_sdk
import sentry_sdk.integrations.opentelemetry.span_processor
from sentry_sdk.tracing import Transaction

# Patch start_transaction to call profile.start() after the Profile is created:
_orig_start_tx = sentry_sdk.start_transaction

def _start_transaction_with_profile(*args, **kwargs):
    tx = _orig_start_tx(*args, **kwargs)
    profile = getattr(tx, "_profile", None)
    if profile is not None and not profile.active:
        profile.start()
    return tx

sentry_sdk.start_transaction = _start_transaction_with_profile
# Also patch the already-imported reference in the span_processor module:
sentry_sdk.integrations.opentelemetry.span_processor.start_transaction = (
    _start_transaction_with_profile
)

# Patch Transaction.finish to call profile.stop() before sending:
_orig_finish = Transaction.finish

def _finish_with_profile_stop(self, *args, **kwargs):
    profile = getattr(self, "_profile", None)
    if profile is not None and profile.active:
        profile.stop()
    return _orig_finish(self, *args, **kwargs)

Transaction.finish = _finish_with_profile_stop
```
This works but is fragile - it depends on instrumenter="otel" (marked as internal-only) and monkey-patches SDK internals.
Proposed fix
SentrySpanProcessor should manage the Profile lifecycle when creating transactions from OTel spans. Specifically, on_start should call profile.start() and on_end should call profile.stop() before transaction.finish().
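The intended ordering can be sketched with toy stand-ins for SentrySpanProcessor / Transaction / Profile (not the real SDK classes - the real on_start/on_end take OTel spans and look the transaction up in a span map):

```python
class Profile:
    def __init__(self):
        self.active = False

    def start(self):
        self.active = True

    def stop(self):
        self.active = False


class Transaction:
    def __init__(self):
        self._profile = Profile()
        self.finished = False

    def finish(self):
        self.finished = True


class SpanProcessor:
    """on_start/on_end mirror the shape of the OTel SpanProcessor hooks."""

    def on_start(self, transaction):
        profile = getattr(transaction, "_profile", None)
        if profile is not None and not profile.active:
            profile.start()  # proposed: activate the profiler here

    def on_end(self, transaction):
        profile = getattr(transaction, "_profile", None)
        if profile is not None and profile.active:
            profile.stop()  # proposed: stop BEFORE finish() so the
                            # envelope sees a valid, sampled profile
        transaction.finish()


tx = Transaction()
proc = SpanProcessor()
proc.on_start(tx)
assert tx._profile.active       # profiler runs for the span's lifetime
proc.on_end(tx)
assert not tx._profile.active and tx.finished
```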
Note: the abandoned 3.0.0a1 branch (https://github.com/getsentry/sentry-python/blob/3.0.0a1/sentry_sdk/opentelemetry/span_processor.py) had this fixed - it called profile.__enter__()/__exit__() directly. This fix was lost when the POTel-based 3.0 work was abandoned (#4955).
Environment
- sentry-sdk 2.54.0
- Python 3.12
- Django + Celery + psycopg + httpx
Question about instrumenter parameter future
Our workaround relies on instrumenter="otel", which is marked internal-only in sentry_sdk/consts.py (https://github.com/getsentry/sentry-python/blob/master/sentry_sdk/consts.py) with a note that it will be removed in the next major version. However, we noticed it was only actually removed on the 3.0.0a1 branch (https://github.com/getsentry/sentry-python/blob/3.0.0a1/sentry_sdk/consts.py) and remains on master.
Is instrumenter still planned for removal? If so, what would be the recommended path for users who need both OTel tracing and Sentry profiling? The current OTLPIntegration path has no way to trigger the profiler since it bypasses scope.start_transaction() entirely.