ouroboros-consensus
Safe Haskell: None
Language: Haskell2010

Ouroboros.Consensus.Storage.LedgerDB.API

Description

The Ledger DB is responsible for the following tasks:

  • Maintaining the in-memory ledger state at the tip: When we try to extend our chain with a new block fitting onto our tip, the block must first be validated using the right ledger state, i.e., the ledger state corresponding to the tip.
  • Maintaining the past \(k\) in-memory ledger states: we might roll back up to \(k\) blocks when switching to a more preferable fork. Consider the example below:

    Our current chain's tip is \(C_2\), but the fork containing blocks \(F_1\), \(F_2\), and \(F_3\) is more preferable. We roll back our chain to the intersection point of the two chains, \(I\), which must be no more than \(k\) blocks back from our current tip. Next, we must validate block \(F_1\) using the ledger state at block \(I\), after which we can validate \(F_2\) using the resulting ledger state, and so on.

    This means that we need access to all ledger states of the past \(k\) blocks, i.e., the ledger states corresponding to the volatile part of the current chain. Note that applying a block to a ledger state is not an invertible operation, so it is not possible to simply unapply \(C_1\) and \(C_2\) to obtain \(I\).

    Access to the last \(k\) ledger states is not only needed for validating candidate chains, but also by the:

    • Local state query server: To query any of the past \(k\) ledger states.
    • Chain sync client: To validate headers of a chain that intersects with any of the past \(k\) blocks.
  • Providing LedgerTables at any of the last \(k\) ledger states: To apply blocks or transactions on top of ledger states, the LedgerDB must be able to provide the appropriate ledger tables at any of those ledger states.
  • Storing snapshots on disk: To obtain a ledger state for the current tip of the chain, one has to apply all blocks in the chain one-by-one to the initial ledger state. When starting up the system with an on-disk chain containing millions of blocks, all of them would have to be read from disk and applied. This process can take hours, depending on the storage and CPU speed, and is thus too costly to perform on each startup.

    For this reason, a recent snapshot of the ledger state should be periodically written to disk. Upon the next startup, that snapshot can be read and used to restore the current ledger state, as well as the past \(k\) ledger states.

  • Flushing LedgerTable differences: The running Consensus has to periodically flush chunks of differences from the DbChangelog to the BackingStore, so that differences are off-loaded to the backing store and, if the backing store is an on-disk implementation, memory usage is reduced.

Note that whenever we say ledger state we mean the ExtLedgerState blk mk type described in Ouroboros.Consensus.Ledger.Basics.
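The rollback-and-revalidate behaviour described above can be sketched with toy types (a hypothetical `switchToFork`, not the real API): block application is not invertible, so we keep the states of the last \(k\) blocks and re-apply fork blocks starting from the intersection state.

```haskell
import Data.List (foldl')

type Block       = Int    -- toy stand-in for a real block
type LedgerState = [Int]  -- toy ledger state: the blocks applied so far

-- Applying a block is cheap; "unapplying" one is not possible in general,
-- which is why the past k states are stored rather than recomputed.
applyBlock :: Block -> LedgerState -> LedgerState
applyBlock b st = b : st

-- Keep the tip state plus the states of the last k blocks, newest first.
pushState :: Int -> LedgerState -> [LedgerState] -> [LedgerState]
pushState k st sts = take (k + 1) (st : sts)

-- Roll back n blocks and re-apply a candidate fork on top of the
-- intersection state; fails if n exceeds the k states we kept.
switchToFork :: Int -> [Block] -> [LedgerState] -> Maybe LedgerState
switchToFork n fork sts = case drop n sts of
  []           -> Nothing
  (anchor : _) -> Just (foldl' (flip applyBlock) anchor fork)
```

With \(k = 2\) and the states at \(I\), \(C_1\), \(C_2\) kept, switching to the fork \(F_1 \ldots F_3\) drops two states to reach the one at \(I\) and re-applies the three fork blocks on top of it.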

Resource management in the LedgerDB

The LedgerDB currently has three backends it can use:

  • InMemory: This backend is pure except for tracing. No resources are allocated.
  • LMDB: This backend allocates a BackingStore and BackingStoreValueHandles on it. The BackingStore does not necessarily need to be closed as described in the LMDB documentation:
Closing a database handle is not necessary, but lets mdb_dbi_open() reuse the handle value.

Therefore, the BackingStore is not allocated in any resource or tracked in any way as a resource.

For the value handles, all the usages of those are bracketed or tracked in a resource registry, so they will be closed individually when an exception arrives. The key difference is that in V1, the value handle cannot outlive the Forker (see "Forker management in the running node" below), while in V2 the resources (handles) can outlive the forkers and be moved to the LedgerDB.

  • LSM: This backend allocates a BlockIOFS and a Session. Using the session, new Tables are allocated, but closing the session closes any existing Table handles.

Both the BlockIOFS and the Session are stored in the ldbResources of the LedgerDB, and closing the LedgerDB will release them. The LedgerDB will be closed by closing the ChainDB, which is tracked in the top-level registry. Therefore we don't need to keep track of the Table handles, nor do we need to further keep track of the BlockIOFS and the Session.

Forker management in the running node

The openForkerAtTarget method of the LedgerDB type is the lowest-level method for opening a Forker. This comment describes the tree formed by definition-and-use edges rooted at openForkerAtTarget. It doesn't describe the entire tree, but rather just enough to confirm that Forkers will not be leaked during the extended execution of the running node. There are a few helpful clarifications to make before elaborating that tree.

  • This comment is only concerned with the running node. In contrast, the shutting-down node is addressed by the "Resource management in the LedgerDB" section above. There are two key ideas. First, the shutdown routines don't open additional Forkers. Second, it's OK for the shutdown routines not to close their local/owned Forkers, since the Forker backends either don't require that or already take care of open handles when their top-level "close" method is called.
  • Some subtrees, like the withTipForker subtree, are irrelevant to this comment, because they explicitly use bracket or a short-lived ResourceRegistry and so can't contribute to any leaks. To clarify: there would be leaks if those brackets were indefinitely nested or if the registry outlived multiple iterations, etc. But that would itself be an unacceptable leak (e.g. a stack leak), so such unbounded nesting/registries are beyond the scope of this comment.
  • Tools (like db-analyser) and tests are also beyond the scope of this comment, so those subtrees are mentioned but not elaborated.
  • It turns out that the resulting tree is currently merely a list of such uninteresting subtrees, so the nub of this comment can be linear despite describing a tree.

At the time of writing, the (linear spine of the) def-use tree is as follows.

  • openForkerAtTarget is used directly only to define withTipForker (bracketed) and openReadOnlyForker.
  • openReadOnlyForker is used directly only in db-analyser (tool), in tests, and to define openReadOnlyForkerAtPoint.
  • openReadOnlyForkerAtPoint is used directly only to define allocInRegistryReadOnlyForkerAtPoint (registered), to define withReadOnlyForkerAtPoint (bracketed), and to construct a MempoolLedgerDBView.
  • The Forker part of MempoolLedgerDBView is used directly only in initMempoolEnv (called by openMempool) and in implSyncWithLedger. If an exception arrives during openMempool, then the node will shut down, so leaks are not a concern here. The syncing-with-ledger use is bracketed via modifyMVar_ to close the old forker just before replacing it with the new one.
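The modifyMVar_ pattern in the last bullet can be sketched with a toy Forker and hypothetical names (`newForker` and `syncWithLedger` are illustrative only, not the real API): the old forker is closed just before the new one replaces it, inside the MVar bracket, so an exception cannot leave both open.

```haskell
import Control.Concurrent.MVar
import Data.IORef

-- A toy forker that only tracks whether it is open, via a shared counter.
newtype Forker = Forker { forkerClose :: IO () }

newForker :: IORef Int -> IO Forker
newForker open = do
  modifyIORef' open (+ 1)
  pure (Forker { forkerClose = modifyIORef' open (subtract 1) })

-- Analogue of implSyncWithLedger: swap in a fresh forker, closing the old
-- one first; modifyMVar_ restores the MVar if an exception arrives.
syncWithLedger :: IORef Int -> MVar Forker -> IO ()
syncWithLedger open var = modifyMVar_ var $ \old -> do
  forkerClose old
  newForker open
```

After any number of syncs, exactly one forker remains open, which is the invariant the bracketing protects.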

(image code)

>>> import Image.LaTeX.Render
>>> import Control.Monad
>>> import System.Directory
>>> 
>>> createDirectoryIfMissing True "docs/haddocks/"
>>> :{
>>> either (error . show) pure =<<
>>>  renderToFile "docs/haddocks/ledgerdb-switch.svg" defaultEnv (tikz ["positioning", "arrows"]) "\
>>> \ \\draw (0, 0) -- (50pt, 0) coordinate (I);\
>>> \  \\draw (I) -- ++(20pt,  20pt) coordinate (C1) -- ++(20pt, 0) coordinate (C2);\
>>> \  \\draw (I) -- ++(20pt, -20pt) coordinate (F1) -- ++(20pt, 0) coordinate (F2) -- ++(20pt, 0) coordinate (F3);\
>>> \  \\node at (I)  {$\\bullet$};\
>>> \  \\node at (C1) {$\\bullet$};\
>>> \  \\node at (C2) {$\\bullet$};\
>>> \  \\node at (F1) {$\\bullet$};\
>>> \  \\node at (F2) {$\\bullet$};\
>>> \  \\node at (F3) {$\\bullet$};\
>>> \  \\node at (I) [above left] {$I$};\
>>> \  \\node at (C1) [above] {$C_1$};\
>>> \  \\node at (C2) [above] {$C_2$};\
>>> \  \\node at (F1) [below] {$F_1$};\
>>> \  \\node at (F2) [below] {$F_2$};\
>>> \  \\node at (F3) [below] {$F_3$};\
>>> \  \\draw (60pt, 50pt) node {$\\overbrace{\\hspace{60pt}}$};\
>>> \  \\draw (60pt, 60pt) node[fill=white] {$k$};\
>>> \  \\draw [dashed] (30pt, -40pt) -- (30pt, 45pt);"
>>> :}

Main API

class CanUpgradeLedgerTables (l ∷ MapKind → Type) where Source #

When pushing differences on InMemory Ledger DBs, we will sometimes need to update ledger tables to the latest era. For unary blocks this is a no-op, but for the Cardano block, we will need to upgrade all TxOuts in memory.

No correctness property relies on this, as Consensus can work with TxOuts from multiple eras, but the performance depends on it as otherwise we will be upgrading the TxOuts every time we consult them.
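As a toy illustration (stand-in types, not the real class or the Cardano TxOut), upgrading a whole table once at the era boundary means later reads pay no per-read conversion:

```haskell
-- Hypothetical two-era value; the real concern is Cardano TxOuts.
data TxOut = OldEraTxOut Int | NewEraTxOut Int
  deriving (Eq, Show)

-- Upgrade one value to the latest era; a no-op if it is already there.
upgradeTxOut :: TxOut -> TxOut
upgradeTxOut (OldEraTxOut n) = NewEraTxOut n
upgradeTxOut out             = out

-- Analogue of upgradeTables: bring every value in a table up to date, so
-- that consulting the table afterwards needs no further upgrades.
upgradeAll :: [TxOut] -> [TxOut]
upgradeAll = map upgradeTxOut
```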

Methods

upgradeTables Source #

Arguments

∷ ∀ (mk1 ∷ MapKind) (mk2 ∷ MapKind). l mk1

The original ledger state before the upgrade. This will be the tip before applying the block.

→ l mk2

The ledger state after the upgrade, which might be in a different era than the one above.

→ LedgerTables l ValuesMK

The tables we want to maybe upgrade.

→ LedgerTables l ValuesMK

Instances

Instances details
(CanHardFork xs, HasHardForkTxOut xs) ⇒ CanUpgradeLedgerTables (LedgerState (HardForkBlock xs)) Source # 
Instance details

Defined in Ouroboros.Consensus.HardFork.Combinator.Ledger

CanUpgradeLedgerTables (LedgerState (DualBlock m a)) Source # 
Instance details

Defined in Ouroboros.Consensus.Ledger.Dual

CanUpgradeLedgerTables (LedgerState blk) ⇒ CanUpgradeLedgerTables (ExtLedgerState blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

upgradeTables ∷ ∀ (mk1 ∷ MapKind) (mk2 ∷ MapKind). ExtLedgerState blk mk1 → ExtLedgerState blk mk2 → LedgerTables (ExtLedgerState blk) ValuesMK → LedgerTables (ExtLedgerState blk) ValuesMK Source #

LedgerTablesAreTrivial l ⇒ CanUpgradeLedgerTables (TrivialLedgerTables l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

data LedgerDB (m ∷ Type → Type) (l ∷ LedgerStateKind) blk Source #

The core API of the LedgerDB component

Constructors

LedgerDB 

Fields

  • getVolatileTip ∷ STM m (l EmptyMK)

    Get the empty ledger state at the (volatile) tip of the LedgerDB.

  • getImmutableTip ∷ STM m (l EmptyMK)

    Get the empty ledger state at the immutable tip of the LedgerDB.

  • getPastLedgerState ∷ Point blk → STM m (Maybe (l EmptyMK))

    Get an empty ledger state at a requested point in the LedgerDB, if it exists.

  • getHeaderStateHistory ∷ l ~ ExtLedgerState blk ⇒ STM m (HeaderStateHistory blk)

    Get the header state history for all ledger states in the LedgerDB.

  • openForkerAtTarget ∷ Target (Point blk) → m (Either GetForkerError (Forker m l))

    Acquire a Forker at the requested point. If a ledger state associated with the requested point does not exist in the LedgerDB, it will return a GetForkerError.

    Note this will allocate resources; see the "Forker management in the running node" comment above.

  • validateFork ∷ (TraceValidateEvent blk → m ()) → BlockCache blk → Word64 → NonEmpty (Header blk) → SuccessForkerAction m l → m (ValidateResult l blk)

    Try to apply a sequence of blocks on top of the LedgerDB, first rolling back as many blocks as the passed Word64.

    The passed continuation will be executed if the result of validation is fully successful.

  • getPrevApplied ∷ STM m (Set (RealPoint blk))

    Get the references to blocks that have previously been applied.

  • garbageCollect ∷ SlotNo → m ()

    Garbage collect references to old state that is older than the given slot.

    Concretely, this affects:

    • Ledger states (and potentially underlying handles for on-disk storage).
    • The set of previously applied points.
  • tryTakeSnapshot ∷ m () → Maybe (Time, Time) → Word64 → m SnapCounters

    If the provided arguments indicate so (based on the SnapshotPolicy with which this LedgerDB was opened), take a snapshot and delete stale ones.

    The arguments are:

    • If a snapshot has been taken already, the time at which it was taken and the current time.
    • How many blocks have been processed since the last snapshot.
  • tryFlush ∷ m ()

    Flush V1 in-memory LedgerDB state to disk, if possible. This is a no-op for implementations that do not need an explicit flush function.

    Note that this is rate-limited by ldbShouldFlush.

  • closeDB ∷ m ()

    Close the LedgerDB

    Idempotent.

    Should only be called on shutdown.

Instances

Instances details
NoThunks (LedgerDB m l blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

noThunks ∷ Context → LedgerDB m l blk → IO (Maybe ThunkInfo) Source #

wNoThunks ∷ Context → LedgerDB m l blk → IO (Maybe ThunkInfo) Source #

showTypeOf ∷ Proxy (LedgerDB m l blk) → String Source #

type HeaderHash (LedgerDB m l blk ∷ Type) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type HeaderHash (LedgerDB m l blk ∷ Type) = HeaderHash blk

type LedgerDB' (m ∷ Type → Type) blk = LedgerDB m (ExtLedgerState blk) blk Source #

data LedgerDbPrune Source #

Options for pruning the LedgerDB

Constructors

LedgerDbPruneAll

Prune all states, keeping only the current tip.

LedgerDbPruneBeforeSlot SlotNo

Prune such that all (non-anchor) states are not older than the given slot.

Instances

Instances details
Show LedgerDbPrune Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type LedgerDbSerialiseConstraints blk = (Serialise (HeaderHash blk), EncodeDisk blk (LedgerState blk EmptyMK), DecodeDisk blk (LedgerState blk EmptyMK), EncodeDisk blk (AnnTip blk), DecodeDisk blk (AnnTip blk), EncodeDisk blk (ChainDepState (BlockProtocol blk)), DecodeDisk blk (ChainDepState (BlockProtocol blk)), MemPack (TxIn (LedgerState blk)), SerializeTablesWithHint (LedgerState blk), IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk))) Source #

Serialization constraints required by the LedgerDB to be properly instantiated with a blk.

type LedgerSupportsLedgerDB blk = LedgerSupportsLedgerDB' (LedgerState blk) blk Source #

type ResolveBlock (m ∷ Type → Type) blk = RealPoint blk → m blk Source #

Resolve a block

Resolving a block reference to the actual block lives in m because it might need to read the block from disk (and can therefore not be done inside an STM transaction).

NOTE: The ledger DB will only ask the ChainDB for blocks it knows must exist. If the ChainDB is unable to fulfill the request, data corruption must have happened and the ChainDB should trigger validation mode.
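A minimal sketch (toy types; `mkResolveBlock` is hypothetical) of such a resolver: it lives in IO because it may read from disk, and a miss signals corruption because, per the note above, the LedgerDB only asks for blocks that must exist.

```haskell
import qualified Data.Map.Strict as Map

type RealPoint = Int     -- toy stand-in for a block reference
type Block     = String  -- toy stand-in for a block

mkResolveBlock :: Map.Map RealPoint Block -> (RealPoint -> IO Block)
mkResolveBlock store pt = case Map.lookup pt store of
  Just blk -> pure blk
  -- The LedgerDB only asks for blocks it knows must exist, so a miss
  -- means data corruption happened.
  Nothing  -> ioError (userError "corruption: block must exist")
```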

currentPoint ∷ ∀ (l ∷ LedgerStateKind) blk (m ∷ Type → Type). (GetTip l, HeaderHash l ~ HeaderHash blk, Functor (STM m)) ⇒ LedgerDB m l blk → STM m (Point blk) Source #

Initialization

data InitDB db (m ∷ Type → Type) blk Source #

Functions required to initialize a LedgerDB

Constructors

InitDB 

Fields

data InitLog blk Source #

Initialization log

The initialization log records which snapshots from disk were considered, in which order, and why some snapshots were rejected. It is primarily useful for monitoring purposes.

Constructors

InitFromGenesis

Defaulted to initialization from genesis

NOTE: Unless the blockchain is near genesis, or this is the first time we boot the node, we should see this only if data corruption occurred.

InitFromSnapshot DiskSnapshot (RealPoint blk)

Used a snapshot corresponding to the specified tip

InitFailure DiskSnapshot (SnapshotFailure blk) (InitLog blk)

Initialization skipped a snapshot

We record the reason why it was skipped.

NOTE: We should only see this if data corruption occurred or codecs for snapshots changed.

Instances

Instances details
Generic (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Associated Types

type Rep (InitLog blk) 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

from ∷ InitLog blk → Rep (InitLog blk) x #

to ∷ Rep (InitLog blk) x → InitLog blk #

StandardHash blk ⇒ Show (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

showsPrec ∷ Int → InitLog blk → ShowS #

show ∷ InitLog blk → String #

showList ∷ [InitLog blk] → ShowS #

StandardHash blk ⇒ Eq (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

(==) ∷ InitLog blk → InitLog blk → Bool #

(/=) ∷ InitLog blk → InitLog blk → Bool #

type Rep (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

initialize ∷ ∀ m (n ∷ Type → Type) blk db st. (IOLike m, LedgerSupportsProtocol blk, InspectLedger blk, HasCallStack) ⇒ Tracer m (TraceReplayEvent blk) → Tracer m (TraceSnapshotEvent blk) → LedgerDbCfg (ExtLedgerState blk) → StreamAPI m blk blk → Point blk → InitDB db m blk → SnapshotManager m n blk st → Maybe DiskSnapshot → m (InitLog blk, db, Word64) Source #

Initialize the ledger DB from the most recent snapshot on disk

If no such snapshot can be found, use the genesis ledger DB. Returns the initialized DB as well as a log of the initialization and the number of blocks replayed between the snapshot and the tip of the immutable DB.

We do not catch any exceptions thrown during streaming; should any be thrown, it is the responsibility of the ChainDB to catch these and trigger (further) validation. We only discard snapshots if

  • We cannot deserialise them, or
  • they are ahead of the chain, i.e., they refer to a slot which is later than the last slot in the immutable DB.

We do not attempt to use multiple ledger states from disk to construct the ledger DB. Instead we load only a single ledger state from disk, and compute all subsequent ones. This is important, because the ledger states obtained in this way will (hopefully) share much of their memory footprint with their predecessors.
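This strategy can be sketched with toy types (`initFromSnapshots` is hypothetical): pick the newest snapshot that is not ahead of the immutable tip, fall back to genesis otherwise, and replay the remaining blocks while counting them.

```haskell
import Data.List (foldl')

type Slot = Int  -- toy: a block is identified by its slot

initFromSnapshots
  :: (st -> Slot -> st)  -- apply one block to a ledger state
  -> st                  -- genesis ledger state
  -> [(Slot, st)]        -- candidate snapshots, newest first
  -> [Slot]              -- immutable chain, ascending slots
  -> (st, Int)           -- state at the tip, number of blocks replayed
initFromSnapshots apply genesis snaps chain =
  case [ (s, st) | (s, st) <- snaps, s <= tip ] of  -- discard ahead-of-chain
    (s, st) : _ -> replay st (filter (> s) chain)
    []          -> replay genesis chain
  where
    tip = if null chain then minBound else last chain
    replay st blks = (foldl' apply st blks, length blks)
```

Only one state is loaded; every later state is computed by replaying, mirroring the memory-sharing argument above.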

Tracing

newtype ReplayGoal blk Source #

Which point the replay is expected to end at

Constructors

ReplayGoal (Point blk) 

Instances

Instances details
StandardHash blk ⇒ Show (ReplayGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

showsPrec ∷ Int → ReplayGoal blk → ShowS #

show ∷ ReplayGoal blk → String #

showList ∷ [ReplayGoal blk] → ShowS #

StandardHash blk ⇒ Eq (ReplayGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

(==) ∷ ReplayGoal blk → ReplayGoal blk → Bool #

(/=) ∷ ReplayGoal blk → ReplayGoal blk → Bool #

newtype ReplayStart blk Source #

Which point the replay started from

Constructors

ReplayStart (Point blk) 

Instances

Instances details
StandardHash blk ⇒ Show (ReplayStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

showsPrec ∷ Int → ReplayStart blk → ShowS #

show ∷ ReplayStart blk → String #

showList ∷ [ReplayStart blk] → ShowS #

StandardHash blk ⇒ Eq (ReplayStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Methods

(==) ∷ ReplayStart blk → ReplayStart blk → Bool #

(/=) ∷ ReplayStart blk → ReplayStart blk → Bool #

data TraceReplayProgressEvent blk Source #

We replayed the given block (reference) on the genesis snapshot during the initialisation of the LedgerDB. Used during ImmutableDB replay.

Using this trace the node could (if it so desired) easily compute a "percentage complete".
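For instance (`replayProgress` is a hypothetical helper, not part of this module), the percentage can be derived from the slot numbers of the start, current, and goal points:

```haskell
-- Fraction of the replay completed, given start/current/goal slot numbers.
replayProgress :: Int -> Int -> Int -> Double
replayProgress start current goal
  | goal <= start = 100  -- nothing (left) to replay
  | otherwise     =
      100 * fromIntegral (current - start) / fromIntegral (goal - start)
```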

Constructors

ReplayedBlock 

Fields

Instances

Instances details
Generic (TraceReplayProgressEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Associated Types

type Rep (TraceReplayProgressEvent blk) 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

(StandardHash blk, InspectLedger blk) ⇒ Show (TraceReplayProgressEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

(StandardHash blk, InspectLedger blk) ⇒ Eq (TraceReplayProgressEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (TraceReplayProgressEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

data TraceReplayStartEvent blk Source #

Events traced while replaying blocks against the ledger to bring it up to date w.r.t. the tip of the ImmutableDB during initialisation. As this process takes a while, we trace events to inform higher layers of our progress.

Constructors

ReplayFromGenesis

There were no LedgerDB snapshots on disk, so we're replaying all blocks starting from Genesis against the initial ledger.

ReplayFromSnapshot

There was a LedgerDB snapshot on disk corresponding to the given tip. We're replaying more recent blocks against it.

Fields

Instances

Instances details
Generic (TraceReplayStartEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Associated Types

type Rep (TraceReplayStartEvent blk) 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (TraceReplayStartEvent blk) = D1 ('MetaData "TraceReplayStartEvent" "Ouroboros.Consensus.Storage.LedgerDB.API" "ouroboros-consensus-3.0.0.0-inplace" 'False) (C1 ('MetaCons "ReplayFromGenesis" 'PrefixI 'False) (U1 ∷ Type → Type) :+: C1 ('MetaCons "ReplayFromSnapshot" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 DiskSnapshot) :*: S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (ReplayStart blk))))
StandardHash blk ⇒ Show (TraceReplayStartEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

StandardHash blk ⇒ Eq (TraceReplayStartEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (TraceReplayStartEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (TraceReplayStartEvent blk) = D1 ('MetaData "TraceReplayStartEvent" "Ouroboros.Consensus.Storage.LedgerDB.API" "ouroboros-consensus-3.0.0.0-inplace" 'False) (C1 ('MetaCons "ReplayFromGenesis" 'PrefixI 'False) (U1 ∷ Type → Type) :+: C1 ('MetaCons "ReplayFromSnapshot" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 DiskSnapshot) :*: S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (ReplayStart blk))))

decorateReplayTracerWithGoal Source #

Arguments

∷ ∀ blk (m ∷ Type → Type). Point blk

Tip of the ImmutableDB

→ Tracer m (TraceReplayProgressEvent blk) 
→ Tracer m (ReplayGoal blk → TraceReplayProgressEvent blk) 

Add the tip of the Immutable DB to the trace event

decorateReplayTracerWithStart Source #

Arguments

∷ ∀ blk (m ∷ Type → Type). Point blk

Starting point of the replay

→ Tracer m (ReplayGoal blk → TraceReplayProgressEvent blk) 
→ Tracer m (ReplayStart blk → ReplayGoal blk → TraceReplayProgressEvent blk) 

Add the block at which a replay started.
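Both decorators are plain contravariant maps over the tracer. A self-contained sketch with a toy Tracer (mirroring the shape of contra-tracer's; all names here are hypothetical stand-ins) shows how they compose: the decorated tracer accepts an event-building function and applies it to the fixed start and goal.

```haskell
import Data.IORef

-- Toy Tracer, mirroring the shape of contra-tracer's.
newtype Tracer m a = Tracer { runTracer :: a -> m () }

contramap' :: (b -> a) -> Tracer m a -> Tracer m b
contramap' f (Tracer t) = Tracer (t . f)

newtype ReplayGoal  = ReplayGoal  Int  -- toy stand-ins for the real types
newtype ReplayStart = ReplayStart Int

-- Analogue of decorateReplayTracerWithGoal: fix the goal argument.
decorateWithGoal :: Int -> Tracer m e -> Tracer m (ReplayGoal -> e)
decorateWithGoal goal = contramap' ($ ReplayGoal goal)

-- Analogue of decorateReplayTracerWithStart: fix the start argument.
decorateWithStart
  :: Int
  -> Tracer m (ReplayGoal -> e)
  -> Tracer m (ReplayStart -> ReplayGoal -> e)
decorateWithStart start = contramap' ($ ReplayStart start)

main :: IO ()
main = do
  ref <- newIORef ([] :: [(Int, Int)])
  let base = Tracer (\e -> modifyIORef' ref (e :))
  runTracer (decorateWithStart 3 (decorateWithGoal 10 base))
            (\(ReplayStart s) (ReplayGoal g) -> (s, g))
  readIORef ref >>= print
```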

Configuration

data LedgerDbCfgF (f ∷ Type → Type) (l ∷ LedgerStateKind) Source #

Instances

Instances details
NoThunks (LedgerCfg l) ⇒ NoThunks (LedgerDbCfg l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Generic (LedgerDbCfgF f l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

Associated Types

type Rep (LedgerDbCfgF f l) 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (LedgerDbCfgF f l) = D1 ('MetaData "LedgerDbCfgF" "Ouroboros.Consensus.Storage.LedgerDB.API" "ouroboros-consensus-3.0.0.0-inplace" 'False) (C1 ('MetaCons "LedgerDbCfg" 'PrefixI 'True) (S1 ('MetaSel ('Just "ledgerDbCfgSecParam") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (HKD f SecurityParam)) :*: (S1 ('MetaSel ('Just "ledgerDbCfg") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (HKD f (LedgerCfg l))) :*: S1 ('MetaSel ('Just "ledgerDbCfgComputeLedgerEvents") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 ComputeLedgerEvents))))

Methods

from ∷ LedgerDbCfgF f l → Rep (LedgerDbCfgF f l) x #

to ∷ Rep (LedgerDbCfgF f l) x → LedgerDbCfgF f l #

type Rep (LedgerDbCfgF f l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type Rep (LedgerDbCfgF f l) = D1 ('MetaData "LedgerDbCfgF" "Ouroboros.Consensus.Storage.LedgerDB.API" "ouroboros-consensus-3.0.0.0-inplace" 'False) (C1 ('MetaCons "LedgerDbCfg" 'PrefixI 'True) (S1 ('MetaSel ('Just "ledgerDbCfgSecParam") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (HKD f SecurityParam)) :*: (S1 ('MetaSel ('Just "ledgerDbCfg") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (HKD f (LedgerCfg l))) :*: S1 ('MetaSel ('Just "ledgerDbCfgComputeLedgerEvents") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 ComputeLedgerEvents))))

Exceptions

data LedgerDbError Source #

Database error

Thrown upon incorrect use: invalid input.

Constructors

ClosedDBError PrettyCallStack

The LedgerDB is closed.

This will be thrown when performing some operations on the LedgerDB. The CallStack of the operation on the LedgerDB is included in the error.

Forker

openReadOnlyForker ∷ ∀ m (l ∷ LedgerStateKind) blk. MonadSTM m ⇒ LedgerDB m l blk → Target (Point blk) → m (Either GetForkerError (ReadOnlyForker m l)) Source #

getTipStatistics ∷ ∀ m (l ∷ LedgerStateKind) blk. IOLike m ⇒ LedgerDB m l blk → m Statistics Source #

Get statistics from the tip of the LedgerDB.

withTipForker ∷ ∀ m (l ∷ LedgerStateKind) blk a. IOLike m ⇒ LedgerDB m l blk → (Forker m l → m a) → m a Source #

Bracket-style usage of a forker at the LedgerDB tip.

Snapshots

data SnapCounters Source #

Counters to keep track of when we made the last snapshot.

Constructors

SnapCounters 

Fields

Streaming

class StreamingBackend (m ∷ Type → Type) backend (l ∷ (Type → Type → Type) → Type) where Source #

A backend that supports streaming the ledger tables

Associated Types

data YieldArgs (m ∷ Type → Type) backend (l ∷ (Type → Type → Type) → Type) Source #

data SinkArgs (m ∷ Type → Type) backend (l ∷ (Type → Type → Type) → Type) Source #

Methods

yield ∷ Proxy backend → YieldArgs m backend l → Yield m l Source #

releaseYieldArgs ∷ YieldArgs m backend l → m () Source #

sink ∷ Proxy backend → SinkArgs m backend l → Sink m l Source #

releaseSinkArgs ∷ SinkArgs m backend l → m () Source #

Instances

Instances details
IOLike m ⇒ StreamingBackend m Mem l Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory

Methods

yield ∷ Proxy Mem → YieldArgs m Mem l → Yield m l Source #

releaseYieldArgs ∷ YieldArgs m Mem l → m () Source #

sink ∷ Proxy Mem → SinkArgs m Mem l → Sink m l Source #

releaseSinkArgs ∷ SinkArgs m Mem l → m () Source #

data Decoders (l ∷ LedgerStateKind) Source #

Constructors

Decoders (∀ s. Decoder s (TxIn l)) (∀ s. Decoder s (TxOut l)) 

Testing

data TestInternals (m ∷ Type → Type) (l ∷ (Type → Type → Type) → Type) blk Source #

Constructors

TestInternals 

Fields

Instances

Instances details
NoThunks (TestInternals m l blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.API

type TestInternals' (m ∷ Type → Type) blk = TestInternals m (ExtLedgerState blk) blk Source #