ouroboros-consensus-0.18.0.0: Consensus layer for the Ouroboros blockchain protocol
Safe Haskell: Safe-Inferred
Language: Haskell2010

Ouroboros.Consensus.Mempool.API

Description

Exposes the Mempool datatype which captures the public API of the Mempool. Also exposes all the types used to interact with said API.

The interface is then initialized in Ouroboros.Consensus.Mempool.Init with the functions from Ouroboros.Consensus.Mempool.Update and Ouroboros.Consensus.Mempool.Query.


Mempool

data Mempool m blk Source #

Mempool

The mempool is the set of transactions that should be included in the next block. In principle this is a set of all the transactions that we receive from our peers. In order to avoid flooding the network with invalid transactions, however, we only want to keep valid transactions in the mempool. That raises the question: valid with respect to which ledger state?

We opt for a very simple answer to this: the mempool will be interpreted as a list of transactions, which are validated strictly in order, starting from the current ledger state. This has a number of advantages:

  • It's simple to implement and it's efficient. In particular, no search for a valid subset is ever required.
  • When producing a block, we can simply take the longest possible prefix of transactions that fits in a block.
  • It supports wallets that submit dependent transactions (where a later transaction depends on outputs from earlier ones).

The mempool provides fairness guarantees for the case of multiple threads performing addTx concurrently. Implementations of this interface must provide this guarantee, and users of this interface may rely on it.

Specifically, multiple threads that continuously use addTx will, over time, get a share of the mempool resource (measured by the number of txs only, not their sizes) roughly proportional to their "weight". The weight depends on the AddTxOnBehalfOf: either acting on behalf of remote peers (AddTxForRemotePeer) or on behalf of a local client (AddTxForLocalClient). The weighting for threads acting on behalf of remote peers is the same for all remote peers, so all remote peers will get a roughly equal share of the resource. The weighting for local clients is the same for all local clients but may be higher than the weighting for remote peers.

The weighting is not unboundedly higher, however, so there is still (weighted) fairness between remote peers and local clients. Thus local clients will also get a roughly equal share of the resource, but that share may be strictly greater than the share for each remote peer. Furthermore, this implies local clients cannot starve remote peers, despite their higher weighting.

This fairness specification in terms of weighting is deliberately non-specific, which allows multiple strategies. The existing default strategy (for the implementation in Ouroboros.Consensus.Mempool) is as follows. The design uses two FIFOs, to give strictly in-order behaviour. All remote peers get equal weight and all local clients get equal weight. The relative weight between remote and local is that if there are N remote peers and M local clients, each local client gets weight 1/(M+1), while all of the N remote peers together also get total weight 1/(M+1). This means individual remote peers get weight 1/(N * (M+1)). Intuitively: a single local client has the same weight as all the remote peers put together.
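To make the weighting concrete, here is a small illustrative calculation of the shares described above (not part of the API), for N remote peers and M local clients:

    -- Illustrative only: per-thread weights under the default strategy
    -- described above, for n remote peers and m local clients.
    localClientWeight :: Int -> Double
    localClientWeight m = 1 / fromIntegral (m + 1)

    remotePeerWeight :: Int -> Int -> Double
    remotePeerWeight n m = 1 / fromIntegral (n * (m + 1))

    -- For example, with 20 remote peers and 1 local client:
    --   localClientWeight 1   == 0.5    (the single local client gets half)
    --   remotePeerWeight 20 1 == 0.025  (the 20 remote peers share the other half)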

Constructors

Mempool 

Fields

  • addTx ∷ AddTxOnBehalfOf → GenTx blk → m (MempoolAddTxResult blk)

    Add a single transaction to the mempool.

    The provided transaction will be validated against the ledger state obtained by applying all the transactions already in the Mempool, in order, to the current ledger state. Transactions found to be invalid with respect to that ledger state are dropped, whereas valid transactions are added to the mempool.

    Note that transactions that are invalid with respect to the ledger state will never be added to the mempool. However, transactions that were once valid but have since become invalid with respect to the current ledger state may remain in the mempool until they are revalidated and dropped, either via a call to syncWithLedger or by the background thread that watches the ledger for changes.

    This action returns one of two results:

    • A MempoolTxAdded value if the transaction provided was found to be valid. This transaction is now in the Mempool.
    • A MempoolTxRejected value if the transaction provided was found to be invalid, along with its accompanying validation errors. This transaction is not in the Mempool.

    Note that this is a blocking action. It will block until the transaction fits into the mempool. This includes transactions that turn out to be invalid: the action waits for there to be space for the transaction before it gets validated.

    Note that it is safe to use this from multiple threads concurrently; a usage sketch covering this and the other fields follows the field listing below.

    POSTCONDITION:

        let prj = \case
              MempoolTxAdded vtx        -> txForgetValidated vtx
              MempoolTxRejected tx _err -> tx
        processed <- addTx wti tx
        prj processed == tx

    Note that previously valid transactions that are now invalid with respect to the current ledger state are dropped from the mempool, but are not reported in the result of this call.

    In principle it is possible that validation errors are transient; for example, it is possible that a transaction is rejected because one of its inputs is not yet available in the UTxO (the transaction it depends on is not yet in the chain, nor in the mempool). In practice however it is likely that rejected transactions will still be rejected later, and should just be dropped.

    It is important to note one special case of transactions being "invalid": a transaction will also be considered invalid if that very same transaction is already included on the blockchain (after all, by definition that must mean its inputs have been used). Rejected transactions are therefore not necessarily a sign of malicious behaviour. Indeed, we would expect most transactions that are reported as invalid by addTx to be invalid precisely because they have already been included. Distinguishing between these two cases can be done in theory, but it is expensive unless we have an index of transaction hashes that have been included on the blockchain.

    As long as we keep the mempool entirely in-memory this could live in STM m; we keep it in m instead to leave open the possibility of persistence.

  • removeTxs ∷ [GenTxId blk] → m ()

    Manually remove the given transactions from the mempool.

  • syncWithLedger ∷ m (MempoolSnapshot blk)

    Sync the transactions in the mempool with the current ledger state of the ChainDB.

    The transactions that exist within the mempool will be revalidated against the current ledger state. Transactions that are found to be invalid with respect to the current ledger state will be dropped from the mempool, whereas valid transactions will remain.

    We keep this in m instead of STM m to leave open the possibility of persistence. Additionally, this makes it possible to trace the removal of invalid transactions.

    n.b. in our current implementation, when one opens a mempool, we spawn a thread which performs this action whenever the ChainDB tip point changes.

  • getSnapshot ∷ STM m (MempoolSnapshot blk)

    Get a snapshot of the current mempool state. This allows for further pure queries on the snapshot.

    This doesn't look at the ledger state at all.

  • getSnapshotFor ∷ ForgeLedgerState blk → STM m (MempoolSnapshot blk)

    Get a snapshot of the mempool state that is valid with respect to the given ledger state.

    This does not update the state of the mempool.

  • getCapacity ∷ STM m MempoolCapacityBytes

    Get the mempool's capacity in bytes.

    Note that the capacity of the Mempool, unless it is overridden with MempoolCapacityBytesOverride, can change dynamically when the ledger state is updated: it will be set to twice the maximum transaction capacity of a block under the current ledger.

    When the capacity happens to shrink at some point, we do not remove transactions from the Mempool to satisfy this new lower limit. Instead, we treat it the same way as a Mempool which is at capacity, i.e., we won't admit new transactions until some have been removed because they have become invalid.

  • getTxSize ∷ GenTx blk → TxSizeInBytes

    Return the post-serialisation size in bytes of a GenTx.
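As a rough illustration of how these fields fit together, the following sketch (not code from this library; the module paths in the imports are indicative and may differ between versions) submits a transaction on behalf of a local client and then reads a snapshot and the current capacity:

    import Control.Monad.Class.MonadSTM (MonadSTM, atomically)

    import Ouroboros.Consensus.Ledger.SupportsMempool (GenTx)
    import Ouroboros.Consensus.Mempool.API

    -- Assumed to be called with a 'Mempool' that was initialised elsewhere.
    submitAndInspect :: MonadSTM m => Mempool m blk -> GenTx blk -> m ()
    submitAndInspect mempool tx = do
        result <- addTx mempool AddTxForLocalClient tx
        case result of
          MempoolTxAdded _vtx        -> pure ()  -- tx is now in the mempool
          MempoolTxRejected _tx _err -> pure ()  -- tx was invalid and was not added
        -- A pure view of the current contents; does not consult the ledger state.
        _snapshot <- atomically (getSnapshot mempool)
        -- Current capacity in bytes (normally twice a block's max tx capacity).
        _capacity <- atomically (getCapacity mempool)
        pure ()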

Transaction adding

data AddTxOnBehalfOf Source #

Who are we adding a tx on behalf of, a remote peer or a local client?

This affects two things:

  1. how certain errors are treated: we want to be helpful to local clients.
  2. priority of service: local clients are prioritised over remote peers.

See Mempool for a discussion of fairness and priority.
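For illustration (a sketch only; the Bool flag is an assumption of the example, not part of this API), a caller could select the constructor to pass to addTx like this:

    -- 'fromLocalClient' is a hypothetical flag supplied by the caller.
    onBehalfOf :: Bool -> AddTxOnBehalfOf
    onBehalfOf fromLocalClient
      | fromLocalClient = AddTxForLocalClient  -- more helpful errors, higher priority
      | otherwise       = AddTxForRemotePeer   -- fair share alongside other remote peers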

data MempoolAddTxResult blk Source #

The result of attempting to add a transaction to the mempool.

Constructors

MempoolTxAdded !(Validated (GenTx blk))

The transaction was added to the mempool.

MempoolTxRejected !(GenTx blk) !(ApplyTxErr blk)

The transaction was rejected and could not be added to the mempool for the specified reason.

Instances

Instances details
(Show (GenTx blk), Show (Validated (GenTx blk)), Show (ApplyTxErr blk)) ⇒ Show (MempoolAddTxResult blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.API

(Eq (GenTx blk), Eq (Validated (GenTx blk)), Eq (ApplyTxErr blk)) ⇒ Eq (MempoolAddTxResult blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.API
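A small helper one might write over this result type, shown as a sketch (this particular function is not listed in this documentation):

    -- Keep the validated transaction on success, discard the error on rejection.
    resultToMaybe :: MempoolAddTxResult blk -> Maybe (Validated (GenTx blk))
    resultToMaybe (MempoolTxAdded vtx)    = Just vtx
    resultToMaybe (MempoolTxRejected _ _) = Nothing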

addLocalTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk)) Source #

A wrapper around addTx that adds a sequence of transactions on behalf of a local client. This reports more errors for local clients; see Intervene.

Note that transactions are added one by one, and can interleave with other concurrent threads using addTx.

See addTx for further details.

addTxs ∷ ∀ m blk t. (MonadSTM m, Traversable t) ⇒ Mempool m blk → t (GenTx blk) → m (t (MempoolAddTxResult blk)) Source #

A wrapper around addTx that adds a sequence of transactions on behalf of a remote peer.

Note that transactions are added one by one, and can interleave with other concurrent threads using addTx.

See addTx for further details.
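Because transactions are added one by one, a wrapper of this shape can be expressed directly in terms of addTx; the following is a sketch of that shape, not necessarily the library's exact definition:

    -- Batch submission on behalf of a remote peer, one transaction at a time.
    -- Additions from other threads may interleave between the individual calls.
    addTxsSketch ::
         (MonadSTM m, Traversable t)
      => Mempool m blk
      -> t (GenTx blk)
      -> m (t (MempoolAddTxResult blk))
    addTxsSketch mempool = traverse (addTx mempool AddTxForRemotePeer)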

Ledger state to forge on top of

data ForgeLedgerState blk Source #

The ledger state with respect to which we should produce a block

The transactions in the mempool will be part of the body of a block, but a block consists of a header and a body, and the full validation of a block consists of first processing its header and only then processing the body. This is important, because processing the header may change the state of the ledger: the update system might be updated, scheduled delegations might be applied, etc., and such changes should take effect before we validate any transactions.

Constructors

ForgeInKnownSlot SlotNo (TickedLedgerState blk)

The slot number of the block is known

This will only be the case when we have realised that we are the slot leader and are actually producing a block. It is the caller's responsibility to call applyChainTick and produce the ticked ledger state.

ForgeInUnknownSlot (LedgerState blk)

The slot number of the block is not yet known

When we are validating transactions before we know in which block they will end up, we have to make an assumption about which slot number to use for applyChainTick to prepare the ledger state; we will assume that they will end up in the slot after the slot at the tip of the ledger.
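As a small illustration of the two cases (a sketch only; the helper name is not part of this API), the slot assumption described above can be read off a ForgeLedgerState as follows:

    -- Which slot the caller is (or would be) ticking the ledger for.
    slotToTickFor :: ForgeLedgerState blk -> Maybe SlotNo
    slotToTickFor (ForgeInKnownSlot slot _tickedSt) = Just slot  -- we are forging in this slot
    slotToTickFor (ForgeInUnknownSlot _st)          = Nothing    -- assume the slot after the ledger tip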

Mempool Snapshot

data MempoolSnapshot blk Source #

A pure snapshot of the contents of the mempool. It allows fetching information about transactions in the mempool, and fetching individual transactions.

This uses a transaction sequence number type for identifying transactions within the mempool sequence. The sequence number is local to this mempool, unlike the transaction hash. This allows us to ask for all transactions after a known sequence number, to get new transactions. It is also used to look up individual transactions.

Note that it is expected that getTx will often return Nothing even for tx sequence numbers returned in previous snapshots. This happens when the transaction has been removed from the mempool between snapshots.
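As an illustration of the "ask for transactions after a known sequence number" pattern, the bookkeeping on the consumer side might look as follows (a sketch; the list of ticket numbers is assumed to have been extracted from a snapshot via its fields, which are elided in this listing):

    -- Start from zeroTicketNo (the value the mempool's counter starts from),
    -- then advance to the highest ticket number seen in each snapshot so the
    -- next query only asks for newer transactions.
    initialLastSeen :: TicketNo
    initialLastSeen = zeroTicketNo

    advanceLastSeen :: TicketNo -> [TicketNo] -> TicketNo
    advanceLastSeen lastSeen ticketsSeen = foldr max lastSeen ticketsSeen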

Constructors

MempoolSnapshot 

Fields

Re-exports

data TicketNo Source #

We allocate each transaction a (monotonically increasing) ticket number as it enters the mempool.

Instances

Instances details
Bounded TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

Enum TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

Show TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

Methods

showsPrec ∷ Int → TicketNo → ShowS #

show ∷ TicketNo → String #

showList ∷ [TicketNo] → ShowS #

Eq TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

Methods

(==) ∷ TicketNo → TicketNo → Bool #

(/=) ∷ TicketNo → TicketNo → Bool #

Ord TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

NoThunks TicketNo Source # 
Instance details

Defined in Ouroboros.Consensus.Mempool.TxSeq

type TxSizeInBytes = Word32 Source #

Transactions are typically not big, but in principle, in the future, we could have transactions larger than 64 kB.

zeroTicketNo ∷ TicketNo Source #

The transaction ticket number from which our counter starts.