Conversation
PR Summary (Cursor Bugbot, commit 85cb842): Medium Risk. Overview: refactors reconciliation to be more structured; stream backlog reading is split out, and PoA service state tracking is adjusted.
@Voxelot added these. I think he can give more context.
Yes, should be safe:

```rust
fn calculate_quorum(redis_nodes_len: usize, quorum_disruption_budget: u32) -> usize {
    let majority = redis_nodes_len
        .checked_div(2)
        .unwrap_or(0)
        .saturating_add(1);
    let disruption_budget = usize::try_from(quorum_disruption_budget).unwrap_or(0);
    majority
        .saturating_add(disruption_budget)
        .min(redis_nodes_len)
}
```
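For a quick sanity check on the n/2 + m question, here is a runnable sketch that exercises `calculate_quorum` on a few illustrative node counts (the function is repeated so the block stands alone; the sample values are not from the PR):

```rust
fn calculate_quorum(redis_nodes_len: usize, quorum_disruption_budget: u32) -> usize {
    // Simple majority: n/2 + 1 (checked_div can never fail here, the divisor is 2).
    let majority = redis_nodes_len
        .checked_div(2)
        .unwrap_or(0)
        .saturating_add(1);
    let disruption_budget = usize::try_from(quorum_disruption_budget).unwrap_or(0);
    // n/2 + 1 + m, clamped so the quorum never exceeds the node count.
    majority
        .saturating_add(disruption_budget)
        .min(redis_nodes_len)
}

fn main() {
    assert_eq!(calculate_quorum(5, 0), 3); // plain majority of 5 nodes
    assert_eq!(calculate_quorum(5, 1), 4); // n/2 + 1 + m with m = 1
    assert_eq!(calculate_quorum(5, 10), 5); // budget clamped to the node count
    assert_eq!(calculate_quorum(0, 0), 0); // degenerate case: no nodes
    println!("ok");
}
```

So an n/2 + m threshold is still expressible by setting the disruption budget to m - 1, as long as m keeps the result within the node count.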
The idea was that we have pagination to minimize this time... but it might not be enough as block size gets bigger?
Added documentation to the adapter here: eedec99. Happy to add more.
I can confirm that these metrics worked when I used them to debug the failover behavior in devnet. I'm not sure what is meant by building a table. Like writing to rocksdb as well as to metrics?
```rust
Self {
    redis_nodes: self.redis_nodes.clone(),
    quorum: self.quorum,
    quorum_disruption_budget: self.quorum_disruption_budget,
```
I think we might be using this setting to ensure we have more than an n/2 + 1 threshold in some situations. Will it still be possible to require n/2 + m without this quorum disruption budget setting?
This is partially mitigated by using the local block height as a cursor. However, if there are many conflicting blocks in redis, I agree that it could be lighter weight to only fetch and compare block ids first to determine the canonical path, before fetching the actual data. In that case we could also limit the data fetch to just one redis node, instead of fetching multiple copies of the data from all of them at once.
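A minimal sketch of that two-phase idea, using in-memory maps in place of redis (all names here, `NodeView`, `canonical_id_at`, `fetch_canonical_block`, are hypothetical and not the actual adapter API): phase one fetches only block ids from every node and votes on the canonical id at a height; phase two fetches the full body from a single node.

```rust
use std::collections::HashMap;

/// Hypothetical per-node view standing in for a redis node: height -> (block_id, block_body).
type NodeView = HashMap<u64, (String, Vec<u8>)>;

/// Phase 1: compare only block ids across nodes; return the id reported by at least `quorum` nodes.
fn canonical_id_at(nodes: &[NodeView], height: u64, quorum: usize) -> Option<String> {
    let mut votes: HashMap<&str, usize> = HashMap::new();
    for node in nodes {
        if let Some((id, _)) = node.get(&height) {
            *votes.entry(id).or_insert(0) += 1;
        }
    }
    votes
        .into_iter()
        .find(|&(_, count)| count >= quorum)
        .map(|(id, _)| id.to_string())
}

/// Phase 2: fetch the body once, from the first node holding the canonical id.
fn fetch_canonical_block(nodes: &[NodeView], height: u64, quorum: usize) -> Option<Vec<u8>> {
    let id = canonical_id_at(nodes, height, quorum)?;
    nodes.iter().find_map(|node| {
        node.get(&height)
            .filter(|(i, _)| *i == id)
            .map(|(_, body)| body.clone())
    })
}

fn main() {
    let block = |id: &str, body: &[u8]| (id.to_string(), body.to_vec());
    // Two nodes agree on the canonical block at height 7; one node holds a fork.
    let nodes = [
        NodeView::from([(7, block("id-a", b"body"))]),
        NodeView::from([(7, block("id-a", b"body"))]),
        NodeView::from([(7, block("id-x", b"fork"))]),
    ];
    // Ids are compared across all nodes, but the body is fetched from just one.
    assert_eq!(fetch_canonical_block(&nodes, 7, 2), Some(b"body".to_vec()));
    println!("ok");
}
```

The trade-off is one extra round trip per height, in exchange for transferring each block body once instead of once per node.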
Linked Issues/PRs
Follow-up to feedback in #3241
Description
Checklist
Before requesting review
After merging, notify other teams