Safe Haskell | Safe-Inferred |
---|---|
Language | Haskell2010 |
Exposes the Mempool datatype, which captures the public API of the mempool, together with all the types used to interact with that API.
The interface is then initialized in Ouroboros.Consensus.Mempool.Init with the functions from Ouroboros.Consensus.Mempool.Update and Ouroboros.Consensus.Mempool.Query.
Synopsis
- data Mempool m blk = Mempool {
- addTx ∷ AddTxOnBehalfOf → GenTx blk → m (MempoolAddTxResult blk)
- removeTxs ∷ [GenTxId blk] → m ()
- syncWithLedger ∷ m (MempoolSnapshot blk)
- getSnapshot ∷ STM m (MempoolSnapshot blk)
- getSnapshotFor ∷ ForgeLedgerState blk → STM m (MempoolSnapshot blk)
- getCapacity ∷ STM m (TxMeasure blk)
- data AddTxOnBehalfOf
- data MempoolAddTxResult blk
- = MempoolTxAdded !(Validated (GenTx blk))
- | MempoolTxRejected !(GenTx blk) !(ApplyTxErr blk)
- addLocalTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk))
- addTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk))
- isMempoolTxAdded ∷ MempoolAddTxResult blk → Bool
- isMempoolTxRejected ∷ MempoolAddTxResult blk → Bool
- mempoolTxAddedToMaybe ∷ MempoolAddTxResult blk → Maybe (Validated (GenTx blk))
- data ForgeLedgerState blk
- = ForgeInKnownSlot SlotNo (TickedLedgerState blk)
- | ForgeInUnknownSlot (LedgerState blk)
- data MempoolSnapshot blk = MempoolSnapshot {
- snapshotTxs ∷ [(Validated (GenTx blk), TicketNo, ByteSize32)]
- snapshotTxsAfter ∷ TicketNo → [(Validated (GenTx blk), TicketNo, ByteSize32)]
- snapshotTake ∷ TxMeasure blk → [Validated (GenTx blk)]
- snapshotLookupTx ∷ TicketNo → Maybe (Validated (GenTx blk))
- snapshotHasTx ∷ GenTxId blk → Bool
- snapshotMempoolSize ∷ MempoolSize
- snapshotSlotNo ∷ SlotNo
- snapshotLedgerState ∷ TickedLedgerState blk
- data SizeInBytes
- data TicketNo
- zeroTicketNo ∷ TicketNo
Mempool
data Mempool m blk Source #
The mempool is the set of transactions that should be included in the next block. In principle this is the set of all transactions we receive from our peers. In order to avoid flooding the network with invalid transactions, however, we only want to keep valid transactions in the mempool. That raises the question: valid with respect to which ledger state?
We opt for a very simple answer to this: the mempool is interpreted as a list of transactions, which are validated strictly in order, starting from the current ledger state. This has a number of advantages:
- It's simple to implement and it's efficient. In particular, no search for a valid subset is ever required.
- When producing a block, we can simply take the longest possible prefix of transactions that fits in a block.
- It supports wallets that submit dependent transactions (where a later transaction depends on outputs of earlier ones).
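The in-order validation and longest-prefix rule above can be sketched with a toy model (everything here is illustrative, not part of the API): a transaction is a plain Int, its "size" is its value, and the ledger state is a single balance.

```haskell
-- Toy model of strict in-order validation plus the longest-prefix rule.
-- The real mempool validates with the ledger rules and measures block
-- capacity via TxMeasure; here a tx is an Int, its size is its value,
-- and the "ledger state" is a balance that each tx must not exceed.
longestValidPrefix
  :: Int    -- ^ block capacity (sum of tx sizes)
  -> Int    -- ^ initial ledger state (here: a balance)
  -> [Int]  -- ^ candidate transactions, in mempool order
  -> [Int]  -- ^ prefix that is valid and fits in the block
longestValidPrefix capacity = go 0
  where
    go _    _       []         = []
    go used balance (tx : txs)
      | used + tx > capacity = []  -- prefix no longer fits: stop
      | tx > balance         = []  -- tx invalid wrt current state: stop
      | otherwise            = tx : go (used + tx) (balance - tx) txs
```

No search for a valid subset is needed: validation threads the state left to right and block production simply cuts the prefix where capacity runs out.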
The mempool provides fairness guarantees for the case of multiple threads performing addTx concurrently. Implementations of this interface must provide this guarantee, and users of this interface may rely on it.
Specifically, multiple threads that continuously use addTx will, over time, get a share of the mempool resource (measured by the number of txs only, not their sizes) roughly proportional to their "weight". The weight depends on the AddTxOnBehalfOf: either acting on behalf of remote peers (AddTxForRemotePeer) or on behalf of a local client (AddTxForLocalClient). The weighting for threads acting on behalf of remote peers is the same for all remote peers, so all remote peers will get a roughly equal share of the resource. The weighting for local clients is the same for all local clients but may be higher than the weighting for remote peers. The weighting is not unboundedly higher, however, so there is still (weighted) fairness between remote peers and local clients. Thus local clients will also get a roughly equal share of the resource, but that share may be strictly greater than the share for each remote peer. Furthermore, this implies local clients cannot starve remote peers, despite their higher weighting.
This fairness specification in terms of weighting is deliberately non-specific, which allows multiple strategies. The existing default strategy (for the implementation in Ouroboros.Consensus.Mempool) is as follows. The design uses two FIFOs, to give strictly in-order behaviour. All remote peers get equal weight and all local clients get equal weight. The relative weight between remote and local is that if there are N remote peers and M local clients, each local client gets weight 1/(M+1), while all of the N remote peers together also get total weight 1/(M+1). This means individual remote peers get weight 1/(N * (M+1)). Intuitively: a single local client has the same weight as all the remote peers put together.
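For the default strategy, the weights can be written down directly (a sketch; these function names are illustrative, not part of the API):

```haskell
-- Weights under the default strategy, for n remote peers and m local
-- clients. Each local client gets 1/(m+1); the n remote peers together
-- also get 1/(m+1), so each individual peer gets 1/(n*(m+1)).
localClientWeight :: Int -> Rational
localClientWeight m = 1 / (fromIntegral m + 1)

remotePeerWeight :: Int -> Int -> Rational
remotePeerWeight n m = 1 / (fromIntegral n * (fromIntegral m + 1))
```

For example, with 4 remote peers and 3 local clients, each local client gets weight 1/4 and each remote peer 1/16; the weights sum to 3/4 + 4/16 = 1.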
Transaction adding
data AddTxOnBehalfOf Source #
Who are we adding a tx on behalf of, a remote peer or a local client?
This affects two things:
- how certain errors are treated: we want to be helpful to local clients.
- priority of service: local clients are prioritised over remote peers.
See Mempool for a discussion of fairness and priority.
data MempoolAddTxResult blk Source #
The result of attempting to add a transaction to the mempool.
MempoolTxAdded !(Validated (GenTx blk)) | The transaction was added to the mempool. |
MempoolTxRejected !(GenTx blk) !(ApplyTxErr blk) | The transaction was rejected and could not be added to the mempool for the specified reason. |
Instances
(Show (GenTx blk), Show (Validated (GenTx blk)), Show (ApplyTxErr blk)) ⇒ Show (MempoolAddTxResult blk) Source # | |
Defined in Ouroboros.Consensus.Mempool.API showsPrec ∷ Int → MempoolAddTxResult blk → ShowS # show ∷ MempoolAddTxResult blk → String # showList ∷ [MempoolAddTxResult blk] → ShowS # | |
(Eq (GenTx blk), Eq (Validated (GenTx blk)), Eq (ApplyTxErr blk)) ⇒ Eq (MempoolAddTxResult blk) Source # | |
Defined in Ouroboros.Consensus.Mempool.API (==) ∷ MempoolAddTxResult blk → MempoolAddTxResult blk → Bool # (/=) ∷ MempoolAddTxResult blk → MempoolAddTxResult blk → Bool # |
addLocalTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk)) Source #
addTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk)) Source #
isMempoolTxAdded ∷ MempoolAddTxResult blk → Bool Source #
isMempoolTxRejected ∷ MempoolAddTxResult blk → Bool Source #
mempoolTxAddedToMaybe ∷ MempoolAddTxResult blk → Maybe (Validated (GenTx blk)) Source #
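A typical caller pattern is to partition the results of addTxs into accepted and rejected transactions. Since the real types are indexed by blk, the sketch below models them with plain Strings (TxResult, partitionResults and friends are illustrative stand-ins, not the real API):

```haskell
import Data.Maybe (mapMaybe)

-- Stand-in for MempoolAddTxResult blk, with Strings replacing the
-- Validated (GenTx blk), GenTx blk and ApplyTxErr blk type families.
data TxResult
  = TxAdded String            -- cf. MempoolTxAdded
  | TxRejected String String  -- cf. MempoolTxRejected (tx, error)
  deriving (Eq, Show)

-- cf. mempoolTxAddedToMaybe
txAddedToMaybe :: TxResult -> Maybe String
txAddedToMaybe (TxAdded vtx) = Just vtx
txAddedToMaybe _             = Nothing

-- cf. isMempoolTxRejected
isTxRejected :: TxResult -> Bool
isTxRejected TxRejected{} = True
isTxRejected _            = False

-- Split a batch of results into accepted txs and (tx, error) pairs.
partitionResults :: [TxResult] -> ([String], [(String, String)])
partitionResults rs =
  ( mapMaybe txAddedToMaybe rs
  , [ (tx, err) | TxRejected tx err <- rs ] )
```

The real helpers play the same role: mempoolTxAddedToMaybe feeds a mapMaybe over the batch, while the predicates support filtering.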
Ledger state to forge on top of
data ForgeLedgerState blk Source #
The ledger state with respect to which we should produce a block
The transactions in the mempool will be part of the body of a block, but a block consists of a header and a body, and the full validation of a block consists of first processing its header and only then processing the body. This is important, because processing the header may change the state of the ledger: the update system might be updated, scheduled delegations might be applied, etc., and such changes should take effect before we validate any transactions.
ForgeInKnownSlot SlotNo (TickedLedgerState blk) | The slot number of the block is known. This will only be the case when we have realized that we are the slot leader and we are actually producing a block. It is the caller's responsibility to call applyChainTick and produce the ticked ledger state. |
ForgeInUnknownSlot (LedgerState blk) | The slot number of the block is not yet known. When we are validating transactions before we know in which block they will end up, we have to make an assumption about which slot number to use for applyChainTick. |
Mempool Snapshot
data MempoolSnapshot blk Source #
A pure snapshot of the contents of the mempool. It allows fetching information about transactions in the mempool, and fetching individual transactions.
This uses a transaction sequence number type for identifying transactions within the mempool sequence. The sequence number is local to this mempool, unlike the transaction hash. This allows us to ask for all transactions after a known sequence number, to get new transactions. It is also used to look up individual transactions.
Note that it is expected that getTx will often return Nothing even for tx sequence numbers returned in previous snapshots. This happens when the transaction has been removed from the mempool between snapshots.
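A toy model of the ticket-number queries (ToySnapshot, txsAfter and lookupTx are illustrative stand-ins for the snapshot fields):

```haskell
-- A snapshot is modelled as (ticket, tx) pairs in mempool order, with
-- Ints for ticket numbers and Strings for transactions.
type ToySnapshot = [(Int, String)]

-- cf. snapshotTxsAfter: everything with a strictly greater ticket
-- number, i.e. the transactions we have not seen yet.
txsAfter :: Int -> ToySnapshot -> [(Int, String)]
txsAfter ticket = filter ((> ticket) . fst)

-- cf. snapshotLookupTx: a ticket may no longer be present if the tx
-- was removed from the mempool between snapshots.
lookupTx :: Int -> ToySnapshot -> Maybe String
lookupTx = lookup
```

Because ticket numbers are local and monotonically increasing, a consumer only needs to remember the last ticket it saw to fetch exactly the new transactions from the next snapshot.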
Re-exports
data SizeInBytes Source #
data TicketNo Source #
We allocate each transaction a (monotonically increasing) ticket number as it enters the mempool.
Instances
Bounded TicketNo Source # | |
Enum TicketNo Source # | |
Show TicketNo Source # | |
Eq TicketNo Source # | |
Ord TicketNo Source # | |
Defined in Ouroboros.Consensus.Mempool.TxSeq | |
NoThunks TicketNo Source # | |
zeroTicketNo ∷ TicketNo Source #
The transaction ticket number from which our counter starts.
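Ticket allocation can be sketched as follows (Ticket, zeroTicket and allocate are illustrative names): each incoming transaction gets the successor of the last allocated ticket, so numbers are unique and monotonically increasing.

```haskell
newtype Ticket = Ticket Int deriving (Eq, Ord, Show)

-- cf. zeroTicketNo: the counter's starting point; in this toy the
-- first transaction gets its successor, Ticket 1.
zeroTicket :: Ticket
zeroTicket = Ticket 0

succTicket :: Ticket -> Ticket
succTicket (Ticket n) = Ticket (n + 1)

-- | Number a batch of incoming transactions, returning the new
-- last-used ticket alongside the ticketed transactions.
allocate :: Ticket -> [tx] -> (Ticket, [(Ticket, tx)])
allocate = go []
  where
    go acc t []         = (t, reverse acc)
    go acc t (tx : txs) = let t' = succTicket t in go ((t', tx) : acc) t' txs
```

Unlike the transaction hash, these numbers are meaningful only within one mempool, which is what makes the "give me everything after ticket n" query cheap.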