fix(exporter): reduce mutex hold time in process pending events #5134
Changes from 3 commits
```diff
@@ -91,21 +91,21 @@ func (e *eventStoreImpl[T]) AddConsumer(consumerID string) {
 func (e *eventStoreImpl[T]) ProcessPendingEvents(
 	consumerID string, processEventsFunc func(context.Context, []T) error) error {
 	e.mutex.Lock()
-	defer e.mutex.Unlock()
 	eventList, err := e.fetchPendingEvents(consumerID)
+	e.mutex.Unlock()
 	if err != nil {
 		return err
 	}

 	err = processEventsFunc(context.Background(), eventList.Events)
 	if err != nil {
 		return err
 	}

+	e.mutex.Lock()
 	err = e.updateConsumerOffset(consumerID, eventList.NewOffset)
-	if err != nil {
-		return err
-	}
-	return nil
+	e.mutex.Unlock()
+	return err
 }

 // GetTotalEventCount returns the total number of events in the store.
```
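For context, a minimal runnable sketch of the pattern this hunk moves to, with simplified stand-in types (`store`, `processPending`, and the string events below are illustrative, not the real go-feature-flag types): the pending window and its end offset are captured under the lock, the slow export runs with the lock released, and the lock is retaken only to commit that captured offset.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Simplified stand-in for the event store; names are illustrative.
type store struct {
	mu      sync.Mutex
	events  []string
	offsets map[string]int // consumerID -> next unread index
}

// processPending mirrors the locking pattern from the diff: fetch under
// the lock, export with the lock released, then re-lock to commit the
// offset that was computed at fetch time.
func (s *store) processPending(consumerID string, export func(context.Context, []string) error) error {
	s.mu.Lock()
	start := s.offsets[consumerID]
	pending := append([]string(nil), s.events[start:]...) // copy so export can run unlocked
	newOffset := len(s.events)
	s.mu.Unlock()

	if err := export(context.Background(), pending); err != nil {
		return err
	}

	s.mu.Lock()
	s.offsets[consumerID] = newOffset
	s.mu.Unlock()
	return nil
}

func main() {
	s := &store{events: []string{"e1", "e2"}, offsets: map[string]int{"c1": 0}}
	_ = s.processPending("c1", func(_ context.Context, evs []string) error {
		fmt.Println("exporting", evs) // slow work happens without the lock held
		return nil
	})
}
```

The key detail is that the committed offset is the one computed at fetch time, which is exactly what the second hunk below fixes.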
Contributor

The change to release the global mutex before calling `processEventsFunc` … While the addition of …
```diff
@@ -177,7 +177,7 @@ func (e *eventStoreImpl[T]) updateConsumerOffset(consumerID string, offset int64
 	if err != nil {
 		return err
 	}
-	currentConsumer.Offset = e.lastOffset
+	currentConsumer.Offset = offset
 	return nil
 }
```
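A hedged worked example of why this one-line fix matters, assuming `e.lastOffset` tracks the newest event in the store (and reusing the simplified names from the sketch above): once the lock is released during export, new events can arrive, so committing the store-wide latest offset instead of the offset captured at fetch time would mark those events consumed without ever exporting them.

```go
package main

import "fmt"

func main() {
	events := []string{"e1", "e2"}
	offset := 0 // consumer offset

	// Fetch step (under the lock): snapshot the pending window and the
	// offset that corresponds to the end of that window.
	pending := events[offset:]
	newOffset := len(events) // 2

	// While the lock is released during export, a new event arrives.
	events = append(events, "e3")

	// Fixed commit (`currentConsumer.Offset = offset`): only the exported
	// window is marked consumed; e3 stays pending for the next run.
	offset = newOffset
	fmt.Println("exported:", pending, "still pending:", events[offset:]) // [e3]

	// Old commit (`currentConsumer.Offset = e.lastOffset`) would instead
	// set offset = len(events) == 3, skipping e3 without exporting it.
}
```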
The `flushMu` mutex in `dataExporterImpl` is redundant. The `eventStore.ProcessPendingEvents` method now implements internal per-consumer synchronization using `currentConsumer.mutex`. Since each `DataExporter` instance is associated with a specific `consumerID`, the event store already ensures that concurrent flushes for the same consumer are serialized. Removing this lock reduces unnecessary overhead.
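A sketch of what that suggestion implies at the exporter layer, with illustrative stand-in types (the real `dataExporterImpl` has more fields than this):

```go
package exporter

import "context"

// Illustrative stand-ins; the real go-feature-flag types differ.
type eventStore interface {
	ProcessPendingEvents(consumerID string, fn func(context.Context, []any) error) error
}

type dataExporterImpl struct {
	store      eventStore
	consumerID string
	sendEvents func(context.Context, []any) error
}

// With per-consumer serialization inside the event store, a flush can
// delegate directly: two concurrent flushes for the same consumerID are
// already serialized by the store, so no flushMu is needed at this layer.
func (d *dataExporterImpl) flush() error {
	return d.store.ProcessPendingEvents(d.consumerID, d.sendEvents)
}
```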
@hugehoo, as Gemini says, this mutex is not needed here. Can you walk me through why you wanted to add one?
I added `flushMu` as a defensive guard: `ProcessPendingEvents()` released the store lock before the export and only updated the consumer offset afterward. That made concurrent `Flush()` calls for the same consumer look risky, so I serialized them at the `DataExporter` layer.

But after seeing what Gemini said, I agree this is not the right place to keep that guarantee. The atomicity should live in `EventStore`, not in `DataExporter`.
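To make that conclusion concrete, a hedged sketch of what "atomicity lives in the event store" could look like (illustrative names, not the actual follow-up commits): the global mutex protects shared state only briefly, while a per-consumer mutex is held across fetch, export, and commit so that two concurrent calls for the same consumer cannot double-export the same window.

```go
package store

import (
	"context"
	"sync"
)

type consumer struct {
	mu     sync.Mutex // serializes ProcessPendingEvents per consumer
	offset int
}

type eventStore struct {
	mu        sync.Mutex // guards events and the consumers map
	events    []string
	consumers map[string]*consumer
}

func (s *eventStore) ProcessPendingEvents(id string, export func(context.Context, []string) error) error {
	s.mu.Lock()
	c, ok := s.consumers[id]
	s.mu.Unlock()
	if !ok {
		return nil // unknown consumer; real code would return an error
	}

	c.mu.Lock() // held across fetch -> export -> commit for this consumer
	defer c.mu.Unlock()

	s.mu.Lock()
	pending := append([]string(nil), s.events[c.offset:]...)
	newOffset := len(s.events)
	s.mu.Unlock()

	if err := export(context.Background(), pending); err != nil {
		return err
	}

	s.mu.Lock()
	c.offset = newOffset // commit the offset captured at fetch time
	s.mu.Unlock()
	return nil
}
```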