fix: set finite gossipsub decode limits for control messages #9117
lodekeeper wants to merge 3 commits into ChainSafe:unstable
Conversation
js-gossipsub defaults all protobuf decode limits (maxSubscriptions, maxMessages, maxIwantMessageIDs, etc.) to Infinity. This means an attacker can craft gossipsub RPCs with arbitrarily large control message arrays that are fully decoded into memory before any application-level caps apply. Set finite decode limits as defense-in-depth against resource exhaustion via crafted control messages. Values are generous enough for normal Ethereum CL gossipsub operation while preventing abuse. Context: Lighthouse v8.1.3 patched analogous unbounded IWANT/IDONTWANT vulnerabilities in rust-libp2p (CVEs pending disclosure).
Code Review
This pull request introduces decodeRpcLimits to the Gossipsub configuration as a defense-in-depth measure against resource exhaustion from untrusted RPC messages. Feedback was provided to extract the hardcoded limit values into named constants to improve maintainability and readability.
```ts
decodeRpcLimits: {
  maxSubscriptions: 512,
  maxMessages: 256,
  maxIhaveMessageIDs: 200,
  maxIwantMessageIDs: 200,
  maxIdontwantMessageIDs: 2000,
  maxControlMessages: 500,
  maxPeerInfos: 100,
},
```
For better maintainability and readability, it's good practice to extract these magic numbers into named constants. This makes it easier to understand their purpose and to modify them in the future.
You could define these constants at the top of the file, for example after MAX_OUTBOUND_BUFFER_SIZE, and include the excellent rationale from the PR description as comments for each constant.
Example:

```ts
/** Well above the ~140 CL topics (global + attestation subnets + sync subnets + blob/column topics) */
const DECODE_RPC_MAX_SUBSCRIPTIONS = 512;
/** Generous for burst messages (attestations, blob sidecars) */
const DECODE_RPC_MAX_MESSAGES = 256;
// ... and so on for the other limits
```

Suggested change:

```diff
-decodeRpcLimits: {
-  maxSubscriptions: 512,
-  maxMessages: 256,
-  maxIhaveMessageIDs: 200,
-  maxIwantMessageIDs: 200,
-  maxIdontwantMessageIDs: 2000,
-  maxControlMessages: 500,
-  maxPeerInfos: 100,
-},
+decodeRpcLimits: {
+  maxSubscriptions: DECODE_RPC_MAX_SUBSCRIPTIONS,
+  maxMessages: DECODE_RPC_MAX_MESSAGES,
+  maxIhaveMessageIDs: DECODE_RPC_MAX_IHAVE_MESSAGE_IDS,
+  maxIwantMessageIDs: DECODE_RPC_MAX_IWANT_MESSAGE_IDS,
+  maxIdontwantMessageIDs: DECODE_RPC_MAX_IDONTWANT_MESSAGE_IDS,
+  maxControlMessages: DECODE_RPC_MAX_CONTROL_MESSAGES,
+  maxPeerInfos: DECODE_RPC_MAX_PEER_INFOS,
+},
```
For a single 7-value object literal with a descriptive comment block and a rationale table in the PR body, I think the inline values are clear enough. Extracting to file-level constants would roughly double the line count without adding much readability — the values are only referenced in this one place. Keeping as-is.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 39958d2e18
```ts
maxIhaveMessageIDs: 200,
maxIwantMessageIDs: 200,
```
Raise IHAVE/IWANT decode limits to protocol-compatible values
Capping maxIhaveMessageIDs and maxIwantMessageIDs at 200 is likely too low for valid gossipsub traffic under mainnet load, where honest peers can include far more message IDs in a single IHAVE/IWANT control message. Because this cap is enforced during protobuf decoding, oversized-but-valid control messages are rejected before normal gossip handling, which can drop propagation and create avoidable peer churn. Please set these decode limits to values aligned with the protocol/runtime control-message maxima rather than a low fixed value.
Good catch. GossipsubMaxIHaveLength = 5000 in js-gossipsub means honest peers can include up to 5000 message IDs in IHAVE/IWANT per heartbeat. A decode limit of 200 would silently truncate legitimate messages at the protobuf layer before the runtime caps apply. Raised both to 5000 in 7a02bc2 to align with the protocol constant.
maxIhaveMessageIDs and maxIwantMessageIDs at 200 would truncate honest peers that send up to GossipsubMaxIHaveLength (5000) message IDs per heartbeat. Raise both to 5000 to align decode-time bounds with the runtime protocol constant.
Motivation
js-gossipsub defaults all protobuf decode limits (maxSubscriptions, maxMessages, maxIwantMessageIDs, etc.) to Infinity. This means a peer can craft gossipsub RPCs with arbitrarily large control message arrays that are fully decoded into memory before any application-level caps apply.

Description
Set finite decodeRpcLimits in the gossipsub configuration as defense-in-depth against resource exhaustion via crafted control messages.

Limits set

| Limit | Bounds | Rationale |
| --- | --- | --- |
| maxSubscriptions | SubOpts[] entries per RPC | |
| maxMessages | Message[] entries per RPC | handleIWant() can serialize many RPC.Messages into one response RPC, so this must remain high until send-side chunking / byte caps are added |
| maxIhaveMessageIDs | ControlIHave[] entries per RPC, not nested messageIDs | Honest behavior is bounded by topic count, not thousands |
| maxIwantMessageIDs | ControlIWant[] entries per RPC, not nested messageIDs | js-libp2p normally emits a single outer IWANT object |
| maxIdontwantMessageIDs | messageIDs per ControlIDontWant | |
| maxControlMessages | | |
| maxPeerInfos | Peers per ControlPrune | GossipsubPrunePeers = 16, while leaving some interop slack |

Context
Lighthouse v8.1.3 patched analogous unbounded IWANT/IDONTWANT vulnerabilities in rust-libp2p (CVEs pending disclosure). While js-gossipsub has some handler-level caps (IHAVE and IDONTWANT), the protobuf decode layer is the first line of defense and should have finite bounds.
Important caveat

maxIhaveMessageIDs / maxIwantMessageIDs are upstream API names, but in js-libp2p they currently bound the number of outer ControlIHave / ControlIWant entries, not nested message IDs. Nested IHAVE/IWANT messageIDs still need a follow-up upstream/local fix if we want complete protobuf-level coverage.