ouroboros-consensus-0.21.0.0: Consensus layer for the Ouroboros blockchain protocol
Safe Haskell: Safe-Inferred
Language: Haskell2010

Ouroboros.Consensus.Storage.LedgerDB

Description

The Ledger DB is responsible for the following tasks:

  • Maintaining the in-memory ledger state at the tip: When we try to extend our chain with a new block fitting onto our tip, the block must first be validated using the right ledger state, i.e., the ledger state corresponding to the tip.
  • Maintaining the past \(k\) in-memory ledger states: we might roll back up to \(k\) blocks when switching to a more preferable fork. Consider the example below:

    Our current chain's tip is \(C_2\), but the fork containing blocks with tags \(F_1\), \(F_2\), and \(F_3\) is more preferable. We roll back our chain to the intersection point of the two chains, \(I\), which must be not more than \(k\) blocks back from our current tip. Next, we must validate block \(F_1\) using the ledger state at block \(I\), after which we can validate \(F_2\) using the resulting ledger state, and so on.

    This means that we need access to all ledger states of the past \(k\) blocks, i.e., the ledger states corresponding to the volatile part of the current chain. Note that applying a block to a ledger state is not an invertible operation, so it is not possible to simply unapply \(C_1\) and \(C_2\) to obtain \(I\).

    Access to the last \(k\) ledger states is not only needed for validating candidate chains, but also by the:

    • Local state query server: To query any of the past \(k\) ledger states.
    • Chain sync client: To validate headers of a chain that intersects with any of the past \(k\) blocks.
  • Storing snapshots on disk: To obtain a ledger state for the current tip of the chain, one has to apply all blocks in the chain one-by-one to the initial ledger state. When starting up the system with an on-disk chain containing millions of blocks, all of them would have to be read from disk and applied. This process can take hours, depending on the storage and CPU speed, and is thus too costly to perform on each startup.

    For this reason, a recent snapshot of the ledger state should be periodically written to disk. Upon the next startup, that snapshot can be read and used to restore the current ledger state, as well as the past volatile \(k\) ledger states.

(image code)

>>> import Image.LaTeX.Render
>>> import Control.Monad
>>> import System.Directory
>>> 
>>> createDirectoryIfMissing True "docs/haddocks/"
>>> :{
>>> either (error . show) pure =<<
>>> renderToFile "docs/haddocks/ledgerdb-switch.svg" defaultEnv (tikz ["positioning", "arrows"]) "\
>>> \ \\draw (0, 0) -- (50pt, 0) coordinate (I);\
>>> \  \\draw (I) -- ++(20pt,  20pt) coordinate (C1) -- ++(20pt, 0) coordinate (C2);\
>>> \  \\draw (I) -- ++(20pt, -20pt) coordinate (F1) -- ++(20pt, 0) coordinate (F2) -- ++(20pt, 0) coordinate (F3);\
>>> \  \\node at (I)  {$\\bullet$};\
>>> \  \\node at (C1) {$\\bullet$};\
>>> \  \\node at (C2) {$\\bullet$};\
>>> \  \\node at (F1) {$\\bullet$};\
>>> \  \\node at (F2) {$\\bullet$};\
>>> \  \\node at (F3) {$\\bullet$};\
>>> \  \\node at (I) [above left] {$I$};\
>>> \  \\node at (C1) [above] {$C_1$};\
>>> \  \\node at (C2) [above] {$C_2$};\
>>> \  \\node at (F1) [below] {$F_1$};\
>>> \  \\node at (F2) [below] {$F_2$};\
>>> \  \\node at (F3) [below] {$F_3$};\
>>> \  \\draw (60pt, 50pt) node {$\\overbrace{\\hspace{60pt}}$};\
>>> \  \\draw (60pt, 60pt) node[fill=white] {$k$};\
>>> \  \\draw [dashed] (30pt, -40pt) -- (30pt, 45pt);"
>>> :}
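The snapshot-based startup described above can be sketched with a toy model (all types and names here, such as restore and applyBlock, are hypothetical stand-ins, not the actual consensus API): instead of replaying the whole chain from genesis, we start from the most recent snapshot and replay only the blocks after it.

```haskell
import Data.List (foldl')

-- Hypothetical stand-ins: a block is identified by its slot number,
-- and the "ledger state" simply records every block applied so far.
type Block       = Int
type LedgerState = [Int]

applyBlock :: Block -> LedgerState -> LedgerState
applyBlock b st = b : st

-- Restore the tip ledger state at startup: replay from genesis when
-- there is no snapshot, otherwise replay only the blocks that come
-- after the snapshotted one.
restore :: Maybe (Block, LedgerState) -> [Block] -> LedgerState
restore Nothing           chain = foldl' (flip applyBlock) [] chain
restore (Just (snap, st)) chain =
  foldl' (flip applyBlock) st (dropWhile (<= snap) chain)
```

With a snapshot at block 3, only blocks 4 and 5 of a 5-block chain need to be applied.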
Synopsis

LedgerDB

newtype Checkpoint l Source #

Internal newtype wrapper around a ledger state l so that we can define a non-blanket Anchorable instance.

Constructors

Checkpoint 

Fields

Instances

Instances details
Generic (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Associated Types

type Rep (Checkpoint l) ∷ Type → Type #

Methods

from ∷ Checkpoint l → Rep (Checkpoint l) x #

to ∷ Rep (Checkpoint l) x → Checkpoint l #

Show l ⇒ Show (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Methods

showsPrec ∷ Int → Checkpoint l → ShowS #

show ∷ Checkpoint l → String #

showList ∷ [Checkpoint l] → ShowS #

Eq l ⇒ Eq (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Methods

(==) ∷ Checkpoint l → Checkpoint l → Bool #

(/=) ∷ Checkpoint l → Checkpoint l → Bool #

NoThunks l ⇒ NoThunks (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

GetTip l ⇒ Anchorable (WithOrigin SlotNo) (Checkpoint l) (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (Checkpoint l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (Checkpoint l) = D1 ('MetaData "Checkpoint" "Ouroboros.Consensus.Storage.LedgerDB.LedgerDB" "ouroboros-consensus-0.21.0.0-inplace" 'True) (C1 ('MetaCons "Checkpoint" 'PrefixI 'True) (S1 ('MetaSel ('Just "unCheckpoint") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 l)))

newtype LedgerDB l Source #

Internal state of the ledger DB

The ledger DB looks like

anchor |> snapshots <| current

where anchor records the oldest known snapshot and current the most recent. The anchor is the oldest point we can roll back to.

We take a snapshot after each block is applied and keep in memory a window of the last k snapshots. We have verified empirically (#1936) that the overhead of keeping k snapshots in memory is small, i.e., about 5% compared to keeping a snapshot every 100 blocks. This is thanks to sharing between consecutive snapshots.

As an example, suppose we have k = 6. The ledger DB grows as illustrated below, where we indicate the anchor, the number of stored snapshots, the snapshots themselves, and the current ledger at the tip.

anchor |> #   [ snapshots ]                   <| tip
---------------------------------------------------------------------------
G      |> (0) [ ]                             <| G
G      |> (1) [ L1]                           <| L1
G      |> (2) [ L1,  L2]                      <| L2
G      |> (3) [ L1,  L2,  L3]                 <| L3
G      |> (4) [ L1,  L2,  L3,  L4]            <| L4
G      |> (5) [ L1,  L2,  L3,  L4,  L5]       <| L5
G      |> (6) [ L1,  L2,  L3,  L4,  L5,  L6]  <| L6
L1     |> (6) [ L2,  L3,  L4,  L5,  L6,  L7]  <| L7
L2     |> (6) [ L3,  L4,  L5,  L6,  L7,  L8]  <| L8
L3     |> (6) [ L4,  L5,  L6,  L7,  L8,  L9]  <| L9   (*)
L4     |> (6) [ L5,  L6,  L7,  L8,  L9,  L10] <| L10
L5     |> (6) [ L6,  L7,  L8,  L9,  L10, L11] <| L11
L6     |> (6) [ L7,  L8,  L9,  L10, L11, L12] <| L12
L7     |> (6) [ L8,  L9,  L10, L11, L12, L13] <| L13
L8     |> (6) [ L9,  L10, L11, L12, L13, L14] <| L14

The ledger DB must guarantee that at all times we are able to roll back k blocks. For example, if we are on line (*), and roll back 6 blocks, we get

L3 |> []
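The growth pattern in the table above, including the advancing anchor and the rollback guarantee, can be mimicked with a small toy window (ToyDB, push and rollback are hypothetical names, not the real AnchoredSeq API):

```haskell
-- A toy window of an anchor plus at most k recent states (oldest first).
data ToyDB l = ToyDB { anchor :: l, recent :: [l] }
  deriving (Eq, Show)

k :: Int
k = 6

-- Pushing a new tip: once the window is full, the oldest state in the
-- window becomes the new anchor, mirroring the table above.
push :: l -> ToyDB l -> ToyDB l
push st (ToyDB a xs)
  | length xs < k = ToyDB a (xs ++ [st])
  | otherwise     = ToyDB (head xs) (tail xs ++ [st])

-- Rolling back n blocks only succeeds if at least n states are in the
-- window; otherwise this corresponds to an ExceededRollback error.
rollback :: Int -> ToyDB l -> Maybe (ToyDB l)
rollback n (ToyDB a xs)
  | n <= length xs = Just (ToyDB a (take (length xs - n) xs))
  | otherwise      = Nothing
```

For example, pushing nine blocks onto a genesis anchor leaves the anchor at the third block, matching the line marked (*), and rolling back six blocks from there leaves just the anchor.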

Constructors

LedgerDB 

Instances

Instances details
Generic (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Associated Types

type Rep (LedgerDB l) ∷ Type → Type #

Methods

from ∷ LedgerDB l → Rep (LedgerDB l) x #

to ∷ Rep (LedgerDB l) x → LedgerDB l #

Show l ⇒ Show (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Methods

showsPrec ∷ Int → LedgerDB l → ShowS #

show ∷ LedgerDB l → String #

showList ∷ [LedgerDB l] → ShowS #

Eq l ⇒ Eq (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Methods

(==) ∷ LedgerDB l → LedgerDB l → Bool #

(/=) ∷ LedgerDB l → LedgerDB l → Bool #

NoThunks l ⇒ NoThunks (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

IsLedger l ⇒ GetTip (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Methods

getTip ∷ LedgerDB l → Point (LedgerDB l) Source #

type HeaderHash (LedgerDB l ∷ Type) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (LedgerDB l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (LedgerDB l) = D1 ('MetaData "LedgerDB" "Ouroboros.Consensus.Storage.LedgerDB.LedgerDB" "ouroboros-consensus-0.21.0.0-inplace" 'True) (C1 ('MetaCons "LedgerDB" 'PrefixI 'True) (S1 ('MetaSel ('Just "ledgerDbCheckpoints") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (AnchoredSeq (WithOrigin SlotNo) (Checkpoint l) (Checkpoint l)))))

data LedgerDbCfg l Source #

Instances

Instances details
Generic (LedgerDbCfg l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

Associated Types

type Rep (LedgerDbCfg l) ∷ Type → Type #

Methods

from ∷ LedgerDbCfg l → Rep (LedgerDbCfg l) x #

to ∷ Rep (LedgerDbCfg l) x → LedgerDbCfg l #

NoThunks (LedgerCfg l) ⇒ NoThunks (LedgerDbCfg l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (LedgerDbCfg l) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.LedgerDB

type Rep (LedgerDbCfg l) = D1 ('MetaData "LedgerDbCfg" "Ouroboros.Consensus.Storage.LedgerDB.LedgerDB" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "LedgerDbCfg" 'PrefixI 'True) (S1 ('MetaSel ('Just "ledgerDbCfgSecParam") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 SecurityParam) :*: S1 ('MetaSel ('Just "ledgerDbCfg") 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (LedgerCfg l))))

Initialization

data InitLog blk Source #

Initialization log

The initialization log records which snapshots from disk were considered, in which order, and why some snapshots were rejected. It is primarily useful for monitoring purposes.

Constructors

InitFromGenesis

Defaulted to initialization from genesis

NOTE: Unless the blockchain is near genesis, we should see this only if data corruption occurred.

InitFromSnapshot DiskSnapshot (RealPoint blk)

Used a snapshot corresponding to the specified tip

InitFailure DiskSnapshot (SnapshotFailure blk) (InitLog blk)

Initialization skipped a snapshot

We record the reason why it was skipped.

NOTE: We should only see this if data corruption occurred.

Instances

Instances details
Generic (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Associated Types

type Rep (InitLog blk) ∷ Type → Type #

Methods

from ∷ InitLog blk → Rep (InitLog blk) x #

to ∷ Rep (InitLog blk) x → InitLog blk #

StandardHash blk ⇒ Show (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

showsPrec ∷ Int → InitLog blk → ShowS #

show ∷ InitLog blk → String #

showList ∷ [InitLog blk] → ShowS #

StandardHash blk ⇒ Eq (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

(==) ∷ InitLog blk → InitLog blk → Bool #

(/=) ∷ InitLog blk → InitLog blk → Bool #

type Rep (InitLog blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

newtype ReplayStart blk Source #

Which point the replay started from

Constructors

ReplayStart (Point blk) 

Instances

Instances details
StandardHash blk ⇒ Show (ReplayStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

showsPrec ∷ Int → ReplayStart blk → ShowS #

show ∷ ReplayStart blk → String #

showList ∷ [ReplayStart blk] → ShowS #

StandardHash blk ⇒ Eq (ReplayStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

(==) ∷ ReplayStart blk → ReplayStart blk → Bool #

(/=) ∷ ReplayStart blk → ReplayStart blk → Bool #

initLedgerDB Source #

Arguments

∷ ∀ m blk. (IOLike m, LedgerSupportsProtocol blk, InspectLedger blk, HasCallStack) 
⇒ Tracer m (ReplayGoal blk → TraceReplayEvent blk) 
→ Tracer m (TraceSnapshotEvent blk) 
→ SomeHasFS m 
→ (∀ s. Decoder s (ExtLedgerState blk)) 
→ (∀ s. Decoder s (HeaderHash blk)) 
→ LedgerDbCfg (ExtLedgerState blk) 
→ m (ExtLedgerState blk)

Genesis ledger state

→ StreamAPI m blk blk 
→ Flag "DoDiskSnapshotChecksum" 
→ m (InitLog blk, LedgerDB' blk, Word64) 

Initialize the ledger DB from the most recent snapshot on disk

If no such snapshot can be found, use the genesis ledger DB. Returns the initialized DB as well as the block reference corresponding to the snapshot we found on disk (the latter primarily for testing/monitoring purposes).

We do not catch any exceptions thrown during streaming; should any be thrown, it is the responsibility of the ChainDB to catch these and trigger (further) validation. We only discard snapshots if

  • We cannot deserialise them, or
  • they are ahead of the chain

It is possible that the Ledger DB will not be able to roll back k blocks after initialization if the chain has been truncated (data corruption).

We do not attempt to use multiple ledger states from disk to construct the ledger DB. Instead we load only a single ledger state from disk, and compute all subsequent ones. This is important, because the ledger states obtained in this way will (hopefully) share much of their memory footprint with their predecessors.
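The snapshot-selection loop that produces the InitLog can be sketched as follows (a simplified model in which snapshots are plain strings and the validity check is a pure predicate; Outcome and tryNewestFirst are hypothetical names, not the real API):

```haskell
-- Outcome of trying candidate snapshots newest-first, together with the
-- rejected candidates, mirroring InitFromGenesis / InitFromSnapshot.
data Outcome = FromGenesis [String] | FromSnapshot String [String]
  deriving (Eq, Show)

tryNewestFirst :: (String -> Bool)  -- ^ can this snapshot be used?
               -> [String]          -- ^ candidates, newest first
               -> Outcome
tryNewestFirst ok = go []
  where
    go rejected []     = FromGenesis (reverse rejected)
    go rejected (s:ss)
      | ok s           = FromSnapshot s (reverse rejected)
      | otherwise      = go (s : rejected) ss
```

The real implementation additionally replays blocks against the chosen snapshot and falls back to the next candidate if the snapshot turns out to be unusable.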

Trace

newtype ReplayGoal blk Source #

Which point the replay is expected to end at

Constructors

ReplayGoal (Point blk) 

Instances

Instances details
StandardHash blk ⇒ Show (ReplayGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

showsPrecIntReplayGoal blk → ShowS #

showReplayGoal blk → String #

showList ∷ [ReplayGoal blk] → ShowS #

StandardHash blk ⇒ Eq (ReplayGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

(==)ReplayGoal blk → ReplayGoal blk → Bool #

(/=)ReplayGoal blk → ReplayGoal blk → Bool #

data TraceReplayEvent blk Source #

Events traced while replaying blocks against the ledger to bring it up to date w.r.t. the tip of the ImmutableDB during initialisation. As this process takes a while, we trace events to inform higher layers of our progress.

Constructors

ReplayFromGenesis

There were no LedgerDB snapshots on disk, so we're replaying all blocks starting from Genesis against the initial ledger.

Fields

  • (ReplayGoal blk)

    the block at the tip of the ImmutableDB

ReplayFromSnapshot 

There was a LedgerDB snapshot on disk corresponding to the given tip. We're replaying more recent blocks against it.

Fields

  • DiskSnapshot
     
  • (ReplayStart blk)

    the block at which this replay started

  • (ReplayGoal blk)

    the block at the tip of the ImmutableDB

ReplayedBlock 

We replayed the given block (reference) on the genesis snapshot during the initialisation of the LedgerDB. Used during ImmutableDB replay.

Fields

Instances

Instances details
Generic (TraceReplayEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Associated Types

type Rep (TraceReplayEvent blk) ∷ Type → Type #

Methods

from ∷ TraceReplayEvent blk → Rep (TraceReplayEvent blk) x #

to ∷ Rep (TraceReplayEvent blk) x → TraceReplayEvent blk #

(StandardHash blk, InspectLedger blk) ⇒ Show (TraceReplayEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

(StandardHash blk, InspectLedger blk) ⇒ Eq (TraceReplayEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

Methods

(==) ∷ TraceReplayEvent blk → TraceReplayEvent blk → Bool #

(/=) ∷ TraceReplayEvent blk → TraceReplayEvent blk → Bool #

type Rep (TraceReplayEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Init

decorateReplayTracerWithGoal Source #

Arguments

∷ Point blk

Tip of the ImmutableDB

→ Tracer m (TraceReplayEvent blk) 
→ Tracer m (ReplayGoal blk → TraceReplayEvent blk) 

Add the tip of the Immutable DB to the trace event

Between the tip of the immutable DB and the point of the starting block, the node could (if it so desired) easily compute a "percentage complete".

decorateReplayTracerWithStart Source #

Arguments

∷ Point blk

Starting point of the replay

→ Tracer m (ReplayGoal blk → TraceReplayEvent blk) 
→ Tracer m (ReplayStart blk → ReplayGoal blk → TraceReplayEvent blk) 

Add the block at which a replay started.

This makes it possible to compute a "percentage complete" when tracing the events.
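For instance, given the slot numbers of the replay start, the block just replayed, and the goal, a tracing layer could compute such a percentage (percentComplete is a hypothetical helper, not part of this module):

```haskell
import Data.Word (Word64)

-- Fraction of the replay completed, computed from the slot numbers of
-- the replay start, the current block, and the goal (ImmutableDB tip).
percentComplete :: Word64 -> Word64 -> Word64 -> Double
percentComplete start current goal
  | goal <= start = 100
  | otherwise     =
      100 * fromIntegral (current - start) / fromIntegral (goal - start)
```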

Querying

ledgerDbAnchor ∷ LedgerDB l → l Source #

Information about the state of the ledger at the anchor

ledgerDbCurrent ∷ GetTip l ⇒ LedgerDB l → l Source #

The ledger state at the tip of the chain

ledgerDbIsSaturated ∷ GetTip l ⇒ SecurityParam → LedgerDB l → Bool Source #

Have we seen at least k blocks?

ledgerDbMaxRollback ∷ GetTip l ⇒ LedgerDB l → Word64 Source #

How many blocks can we currently roll back?

ledgerDbPast ∷ (HasHeader blk, IsLedger l, HeaderHash l ~ HeaderHash blk) ⇒ Point blk → LedgerDB l → Maybe l Source #

Get a past ledger state

\(O(\log(\min(i,n-i)))\)

When no ledger state (or anchor) has the given Point, Nothing is returned.

ledgerDbSnapshots ∷ LedgerDB l → [(Word64, l)] Source #

All snapshots currently stored by the ledger DB (new to old)

This also includes the snapshot at the anchor. For each snapshot we also return the distance from the tip.

ledgerDbTip ∷ GetTip l ⇒ LedgerDB l → Point l Source #

Reference to the block at the tip of the chain

Updates

Construct

ledgerDbWithAnchor ∷ GetTip l ⇒ l → LedgerDB l Source #

Ledger DB starting at the specified ledger state

Applying blocks

data AnnLedgerError l blk Source #

Annotated ledger errors

Constructors

AnnLedgerError 

Fields

Instances

Instances details
Monad m ⇒ ThrowsLedgerError (ExceptT (AnnLedgerError l blk) m) l blk Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

throwLedgerError ∷ LedgerDB l → RealPoint blk → LedgerErr l → ExceptT (AnnLedgerError l blk) m a Source #

data Ap m l blk c where Source #

Ap is used to pass information about blocks to ledger DB updates

The constructors serve two purposes:

  • Specify the various parameters:
    a. Are we passing the block by value or by reference?
    b. Are we applying or reapplying the block?
  • Compute the constraint c on the monad m in order to run the query:
    a. If we are passing a block by reference, we must be able to resolve it.
    b. If we are applying rather than reapplying, we might have ledger errors.

Constructors

ReapplyVal ∷ blk → Ap m l blk () 
ApplyVal ∷ blk → Ap m l blk (ThrowsLedgerError m l blk) 
ReapplyRef ∷ RealPoint blk → Ap m l blk (ResolvesBlocks m blk) 
ApplyRef ∷ RealPoint blk → Ap m l blk (ResolvesBlocks m blk, ThrowsLedgerError m l blk) 
Weaken ∷ (c' ⇒ c) ⇒ Ap m l blk c → Ap m l blk c'

Weaken increases the constraint on the monad m.

This is primarily useful when combining multiple Aps in a single homogeneous structure.

data ExceededRollback Source #

Exceeded maximum rollback supported by the current ledger DB state

Under normal circumstances this will not arise. It can really only happen in the presence of data corruption (or when switching to a shorter fork, but that is disallowed by all currently known Ouroboros protocols).

Records both the supported and the requested rollback.

class Monad m ⇒ ThrowsLedgerError m l blk where Source #

Methods

throwLedgerError ∷ LedgerDB l → RealPoint blk → LedgerErr l → m a Source #

Instances

Instances details
Monad m ⇒ ThrowsLedgerError (ExceptT (AnnLedgerError l blk) m) l blk Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

throwLedgerError ∷ LedgerDB l → RealPoint blk → LedgerErr l → ExceptT (AnnLedgerError l blk) m a Source #

Block resolution

type ResolveBlock m blk = RealPoint blk → m blk Source #

Resolve a block

Resolving a block reference to the actual block lives in m because it might need to read the block from disk (and can therefore not be done inside an STM transaction).

NOTE: The ledger DB will only ask the ChainDB for blocks it knows must exist. If the ChainDB is unable to fulfill the request, data corruption must have happened and the ChainDB should trigger validation mode.
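The ReaderT-based instances below can be imitated with a self-contained analogue (Resolve, resolveBlock and runWithResolver are hypothetical names; the real module uses ResolveBlock and defaultResolveBlocks):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- A resolver turns a block reference into a block, possibly doing I/O.
type Point     = Int
type Block     = String
type Resolve m = Point -> m Block

-- Resolving asks the environment for the resolver and runs it.
resolveBlock :: Monad m => Point -> ReaderT (Resolve m) m Block
resolveBlock pt = do
  resolve <- ask
  lift (resolve pt)

-- Analogue of defaultResolveBlocks: supply the resolver and run.
runWithResolver :: Resolve m -> ReaderT (Resolve m) m a -> m a
runWithResolver = flip runReaderT
```

In the real code base the resolver typically reads the block from the ChainDB, which is why resolution lives in m rather than in STM.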

class Monad m ⇒ ResolvesBlocks m blk | m → blk where Source #

Monads in which we can resolve blocks

To guide type inference, we insist that we must be able to infer the type of the block we are resolving from the type of the monad.

Instances

Instances details
Monad m ⇒ ResolvesBlocks (ExceptT e (ReaderT (ResolveBlock m blk) m)) blk Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Monad m ⇒ ResolvesBlocks (ReaderT (ResolveBlock m blk) m) blk Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

defaultResolveBlocks ∷ ResolveBlock m blk → ReaderT (ResolveBlock m blk) m a → m a Source #

Operations

ledgerDbBimap ∷ Anchorable (WithOrigin SlotNo) a b ⇒ (l → a) → (l → b) → LedgerDB l → AnchoredSeq (WithOrigin SlotNo) a b Source #

Transform the underlying AnchoredSeq using the given functions.

ledgerDbPrune ∷ GetTip l ⇒ SecurityParam → LedgerDB l → LedgerDB l Source #

Prune snapshots until we have at most k snapshots in the LedgerDB, excluding the snapshots stored at the anchor.

ledgerDbPush ∷ ∀ m c l blk. (ApplyBlock l blk, Monad m, c) ⇒ LedgerDbCfg l → Ap m l blk c → LedgerDB l → m (LedgerDB l) Source #

ledgerDbSwitch Source #

Arguments

∷ (ApplyBlock l blk, Monad m, c) 
⇒ LedgerDbCfg l 
→ Word64

How many blocks to roll back

→ (UpdateLedgerDbTraceEvent blk → m ()) 
→ [Ap m l blk c]

New blocks to apply

→ LedgerDB l 
→ m (Either ExceededRollback (LedgerDB l)) 

Switch to a fork

Pure API

ledgerDbPush' ∷ ApplyBlock l blk ⇒ LedgerDbCfg l → blk → LedgerDB l → LedgerDB l Source #

ledgerDbPushMany' ∷ ApplyBlock l blk ⇒ LedgerDbCfg l → [blk] → LedgerDB l → LedgerDB l Source #

ledgerDbSwitch' ∷ ∀ l blk. ApplyBlock l blk ⇒ LedgerDbCfg l → Word64 → [blk] → LedgerDB l → Maybe (LedgerDB l) Source #

Trace

newtype PushGoal blk Source #

Constructors

PushGoal 

Fields

Instances

Instances details
StandardHash blk ⇒ Show (PushGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

showsPrec ∷ Int → PushGoal blk → ShowS #

show ∷ PushGoal blk → String #

showList ∷ [PushGoal blk] → ShowS #

StandardHash blk ⇒ Eq (PushGoal blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

(==) ∷ PushGoal blk → PushGoal blk → Bool #

(/=) ∷ PushGoal blk → PushGoal blk → Bool #

newtype PushStart blk Source #

Constructors

PushStart 

Fields

Instances

Instances details
StandardHash blk ⇒ Show (PushStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

showsPrec ∷ Int → PushStart blk → ShowS #

show ∷ PushStart blk → String #

showList ∷ [PushStart blk] → ShowS #

StandardHash blk ⇒ Eq (PushStart blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

(==) ∷ PushStart blk → PushStart blk → Bool #

(/=) ∷ PushStart blk → PushStart blk → Bool #

newtype Pushing blk Source #

Constructors

Pushing 

Fields

Instances

Instances details
StandardHash blk ⇒ Show (Pushing blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

showsPrec ∷ Int → Pushing blk → ShowS #

show ∷ Pushing blk → String #

showList ∷ [Pushing blk] → ShowS #

StandardHash blk ⇒ Eq (Pushing blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Methods

(==) ∷ Pushing blk → Pushing blk → Bool #

(/=) ∷ Pushing blk → Pushing blk → Bool #

data UpdateLedgerDbTraceEvent blk Source #

Constructors

StartedPushingBlockToTheLedgerDb

Event fired when we are about to push a block to the LedgerDB

Fields

  • !(PushStart blk)

    Point from which we started pushing new blocks

  • (PushGoal blk)

Point to which we are updating the ledger; in the last StartedPushingBlockToTheLedgerDb event, Pushing and PushGoal will wrap the same RealPoint

  • !(Pushing blk)

Point of the block we are about to push

Instances

Instances details
Generic (UpdateLedgerDbTraceEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

Associated Types

type Rep (UpdateLedgerDbTraceEvent blk) ∷ Type → Type #

StandardHash blk ⇒ Show (UpdateLedgerDbTraceEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

StandardHash blk ⇒ Eq (UpdateLedgerDbTraceEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

type Rep (UpdateLedgerDbTraceEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Update

type Rep (UpdateLedgerDbTraceEvent blk) = D1 ('MetaData "UpdateLedgerDbTraceEvent" "Ouroboros.Consensus.Storage.LedgerDB.Update" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "StartedPushingBlockToTheLedgerDb" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (PushStart blk)) :*: (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (PushGoal blk)) :*: S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'SourceStrict 'DecidedStrict) (Rec0 (Pushing blk)))))

Snapshots

data DiskSnapshot Source #

Name of a disk snapshot.

The snapshot itself might not yet exist on disk.

Constructors

DiskSnapshot 

Fields

  • dsNumberWord64

    Snapshots are numbered. We will try the snapshots with the highest number first.

    When creating a snapshot, we use the slot number of the ledger state it corresponds to as the snapshot number. This gives an indication of how recent the snapshot is.

    Note that the snapshot names are only indicative, we don't rely on the snapshot number matching the slot number of the corresponding ledger state. We only use the snapshots numbers to determine the order in which we try them.

  • dsSuffixMaybe String

Snapshots can optionally have a suffix, separated from the snapshot number by an underscore, e.g., 4492799_last_Byron. This suffix acts as metadata for the operator of the node. Snapshots with a suffix will not be trimmed.
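The naming scheme can be illustrated with a small parser (parseSnapshotName is a hypothetical helper; the real module has its own internal parsing):

```haskell
import Data.Char (isDigit)
import Data.Word (Word64)

-- Parse "<number>" or "<number>_<suffix>", e.g. "4492799_last_Byron".
parseSnapshotName :: String -> Maybe (Word64, Maybe String)
parseSnapshotName s = case span isDigit s of
  ("",  _)       -> Nothing                     -- must start with digits
  (num, "")      -> Just (read num, Nothing)    -- bare snapshot number
  (num, '_':sfx) -> Just (read num, Just sfx)   -- number plus suffix
  _              -> Nothing                     -- digits followed by junk
```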

Instances

Instances details
Generic DiskSnapshot Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Associated Types

type Rep DiskSnapshot ∷ Type → Type #

Show DiskSnapshot Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Eq DiskSnapshot Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Ord DiskSnapshot Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

type Rep DiskSnapshot Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

type Rep DiskSnapshot = D1 ('MetaData "DiskSnapshot" "Ouroboros.Consensus.Storage.LedgerDB.Snapshots" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "DiskSnapshot" 'PrefixI 'True) (S1 ('MetaSel ('Just "dsNumber") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Word64) :*: S1 ('MetaSel ('Just "dsSuffix") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (Maybe String))))

Read from disk

data SnapshotFailure blk Source #

Constructors

InitFailureRead ReadSnapshotErr

We failed to deserialise the snapshot

This can happen due to data corruption in the ledger DB.

InitFailureTooRecent (RealPoint blk)

This snapshot is too recent (ahead of the tip of the chain)

InitFailureGenesis

This snapshot was of the ledger state at genesis, even though we never take snapshots at genesis, so this is unexpected.

Instances

Instances details
Generic (SnapshotFailure blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Associated Types

type Rep (SnapshotFailure blk) ∷ Type → Type #

Methods

from ∷ SnapshotFailure blk → Rep (SnapshotFailure blk) x #

to ∷ Rep (SnapshotFailure blk) x → SnapshotFailure blk #

StandardHash blk ⇒ Show (SnapshotFailure blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Methods

showsPrec ∷ Int → SnapshotFailure blk → ShowS #

show ∷ SnapshotFailure blk → String #

showList ∷ [SnapshotFailure blk] → ShowS #

StandardHash blk ⇒ Eq (SnapshotFailure blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Methods

(==) ∷ SnapshotFailure blk → SnapshotFailure blk → Bool #

(/=) ∷ SnapshotFailure blk → SnapshotFailure blk → Bool #

type Rep (SnapshotFailure blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

type Rep (SnapshotFailure blk) = D1 ('MetaData "SnapshotFailure" "Ouroboros.Consensus.Storage.LedgerDB.Snapshots" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "InitFailureRead" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 ReadSnapshotErr)) :+: (C1 ('MetaCons "InitFailureTooRecent" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 (RealPoint blk))) :+: C1 ('MetaCons "InitFailureGenesis" 'PrefixI 'False) (U1 ∷ Type → Type)))

diskSnapshotIsTemporary ∷ DiskSnapshot → Bool Source #

The snapshots that are periodically created are temporary; they will be deleted when trimming.

listSnapshots ∷ Monad m ⇒ SomeHasFS m → m [DiskSnapshot] Source #

List on-disk snapshots, highest number first.

pattern DoDiskSnapshotChecksum ∷ Flag "DoDiskSnapshotChecksum" Source #

Type-safe flag to regulate the checksum policy of the ledger state snapshots.

These patterns are exposed to cardano-node and will be passed as part of DiskPolicy.

pattern NoDoDiskSnapshotChecksum ∷ Flag "DoDiskSnapshotChecksum" Source #

Type-safe flag to regulate the checksum policy of the ledger state snapshots.

These patterns are exposed to cardano-node and will be passed as part of DiskPolicy.

readSnapshot ∷ ∀ m blk. IOLike m ⇒ SomeHasFS m → (∀ s. Decoder s (ExtLedgerState blk)) → (∀ s. Decoder s (HeaderHash blk)) → Flag "DoDiskSnapshotChecksum" → DiskSnapshot → ExceptT ReadSnapshotErr m (ExtLedgerState blk) Source #

Read snapshot from disk.

Fail on data corruption, i.e. when the checksum of the read data differs from the one tracked by DiskSnapshot.

Write to disk

takeSnapshot ∷ ∀ m blk. (MonadThrow m, MonadMonotonicTime m, IsLedger (LedgerState blk)) ⇒ Tracer m (TraceSnapshotEvent blk) → SomeHasFS m → Flag "DoDiskSnapshotChecksum" → (ExtLedgerState blk → Encoding) → ExtLedgerState blk → m (Maybe (DiskSnapshot, RealPoint blk)) Source #

Take a snapshot of the oldest ledger state in the ledger DB

We write the oldest ledger state to disk because the intention is to only write ledger states to disk that we know to be immutable. Primarily for testing purposes, takeSnapshot returns the block reference corresponding to the snapshot that we wrote.

If a snapshot with the same number already exists on disk or if the tip is at genesis, no snapshot is taken.

Note that an EBB can have the same slot number and thus snapshot number as the block after it. This doesn't matter. The one block difference in the ledger state doesn't warrant an additional snapshot. The number in the name of the snapshot is only indicative, we don't rely on it being correct.

NOTE: This is a lower-level API that takes a snapshot independent from whether this snapshot corresponds to a state that is more than k back.

TODO: Should we delete the file if an error occurs during writing?

trimSnapshots ∷ Monad m ⇒ Tracer m (TraceSnapshotEvent r) → SomeHasFS m → DiskPolicy → m [DiskSnapshot] Source #

Trim the number of on-disk snapshots so that at most onDiskNumSnapshots snapshots are stored on disk. The oldest snapshots are deleted.

The deleted snapshots are returned.
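The trimming rule can be sketched as a pure function over snapshot metadata (Snap and toTrim are hypothetical names; per diskSnapshotIsTemporary, snapshots with a suffix are never trimmed):

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Minimal snapshot metadata: a number and an optional suffix.
data Snap = Snap { num :: Int, suffix :: Maybe String }
  deriving (Eq, Show)

-- Keep the newest `numToKeep` temporary snapshots; every older
-- temporary snapshot is selected for deletion. Suffixed snapshots
-- are always kept.
toTrim :: Int -> [Snap] -> [Snap]
toTrim numToKeep snaps =
    drop numToKeep (sortBy (comparing (Down . num)) temporary)
  where
    temporary = [ s | s <- snaps, suffix s == Nothing ]
```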

writeSnapshot ∷ ∀ m blk. MonadThrow m ⇒ SomeHasFS m → Flag "DoDiskSnapshotChecksum" → (ExtLedgerState blk → Encoding) → DiskSnapshot → ExtLedgerState blk → m () Source #

Write a ledger state snapshot to disk

This function writes two files:

  • the snapshot file itself, with the name generated by snapshotToPath
  • the checksum file, with the name generated by snapshotToChecksumPath

Low-level API (primarily exposed for testing)

decodeSnapshotBackwardsCompatible ∷ ∀ l blk. Proxy blk → (∀ s. Decoder s l) → (∀ s. Decoder s (HeaderHash blk)) → ∀ s. Decoder s l Source #

To remain backwards compatible with existing snapshots stored on disk, we must accept the old format as well as the new format.

The old format:

  • The tip: WithOrigin (RealPoint blk)
  • The chain length: Word64
  • The ledger state: l

The new format is described by snapshotEncodingVersion1.

This decoder will accept the old format and ignore the legacy fields (the tip and the chain length). The encoder (encodeSnapshot) no longer encodes them.
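The legacy path can be pictured as a decoder that reads and discards the extra fields. A sketch (names hypothetical) following the format description above:

```haskell
-- Sketch only: decode the old snapshot layout by reading the tip and
-- chain length and throwing them away, keeping just the ledger state.
decodeOldFormat ::
     Decoder s (WithOrigin (RealPoint blk))  -- the tip
  -> Decoder s Word64                        -- the chain length
  -> Decoder s l                             -- the ledger state
  -> Decoder s l
decodeOldFormat decodeTip decodeLen decodeLedger = do
    _tip <- decodeTip
    _len <- decodeLen
    decodeLedger
```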

deleteSnapshot ∷ Monad m ⇒ HasCallStack ⇒ SomeHasFS m → DiskSnapshot → m () Source #

Delete snapshot from disk

encodeSnapshot ∷ (l → Encoding) → l → Encoding Source #

Encoder to be used in combination with decodeSnapshotBackwardsCompatible.

Trace

data TraceSnapshotEvent blk Source #

Constructors

InvalidSnapshot DiskSnapshot (SnapshotFailure blk)

An on disk snapshot was skipped because it was invalid.

TookSnapshot DiskSnapshot (RealPoint blk) EnclosingTimed

A snapshot was written to disk.

DeletedSnapshot DiskSnapshot

An old or invalid on-disk snapshot was deleted.

SnapshotMissingChecksum DiskSnapshot

The checksum file for a snapshot was missing, so the checksum was not checked.

Instances

Instances details
Generic (TraceSnapshotEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Associated Types

type Rep (TraceSnapshotEvent blk) ∷ Type → Type #

StandardHash blk ⇒ Show (TraceSnapshotEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

StandardHash blk ⇒ Eq (TraceSnapshotEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

type Rep (TraceSnapshotEvent blk) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.Snapshots

Disk policy

data DiskPolicy Source #

On-disk policy

We only write ledger states that are older than k blocks to disk (that is, snapshots that are guaranteed valid). The on-disk policy determines how often we write to disk and how many checkpoints we keep.

Constructors

DiskPolicy 

Fields

  • onDiskNumSnapshots ∷ Word

    How many snapshots do we want to keep on disk?

    A higher number of on-disk snapshots is primarily a safe-guard against disk corruption: it trades disk space for reliability.

    Examples:

    • 0: Delete the snapshot immediately after writing. Probably not a useful value :-D
    • 1: Delete the previous snapshot immediately after writing the next one. Dangerous policy: if for some reason the deletion happens before the new snapshot is written entirely to disk (we don't fsync), we have no choice but to start at the genesis snapshot on the next startup.
    • 2: Always keep 2 snapshots around. This means that when we write the next snapshot, we delete the oldest one, leaving the middle one available in case of truncation of the write. This is probably a sane value in most circumstances.
  • onDiskShouldTakeSnapshot ∷ TimeSinceLast DiffTime → Word64 → Bool

    Should we write a snapshot of the ledger state to disk?

    This function is passed two bits of information:

    • The time since the last snapshot, or NoSnapshotTakenYet if none was taken yet. Note that NoSnapshotTakenYet merely means no snapshot has been taken yet since the node was started; it does not necessarily mean that none exist on disk.
    • The distance in terms of blocks applied to the oldest ledger snapshot in memory. During normal operation, this is the number of blocks written to the ImmutableDB since the last snapshot. On startup, it is computed by counting how many immutable blocks we had to reapply to get to the chain tip. This is useful, as it allows the policy to decide to take a snapshot on node startup if a lot of blocks had to be replayed.

    See also mkDiskPolicy

  • onDiskShouldChecksumSnapshots ∷ Flag "DoDiskSnapshotChecksum"

    Whether or not to checksum the ledger snapshots to detect data corruption on disk. "yes" if DoDiskSnapshotChecksum; "no" if NoDoDiskSnapshotChecksum.
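As a hedged illustration of how onDiskShouldTakeSnapshot's two inputs can be combined: snapshot once the requested interval has elapsed, or on startup when many blocks had to be replayed. The thresholds are illustrative choices, not the library defaults.

```haskell
-- Hypothetical policy; both the interval and the replay threshold are
-- assumptions for the sake of the example, not values from the library.
exampleShouldTakeSnapshot :: DiffTime -> TimeSinceLast DiffTime -> Word64 -> Bool
exampleShouldTakeSnapshot interval timeSince blocksSinceLast =
    case timeSince of
      NoSnapshotTakenYet    -> blocksSinceLast >= 10000  -- large replay on startup
      TimeSinceLast elapsed -> elapsed >= interval
```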

data DiskPolicyArgs Source #

The components used by cardano-node to construct a DiskPolicy.

Constructors

DiskPolicyArgs SnapshotInterval NumOfDiskSnapshots (Flag "DoDiskSnapshotChecksum") 

data NumOfDiskSnapshots Source #

Number of snapshots to be stored on disk. This is either the default value as determined by the DiskPolicy, or it is provided by the user. See the DiskPolicy documentation for more information.

Instances

Instances details
Generic NumOfDiskSnapshots Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Associated Types

type Rep NumOfDiskSnapshots ∷ Type → Type #

Show NumOfDiskSnapshots Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Eq NumOfDiskSnapshots Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

type Rep NumOfDiskSnapshots Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

type Rep NumOfDiskSnapshots = D1 ('MetaData "NumOfDiskSnapshots" "Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "DefaultNumOfDiskSnapshots" 'PrefixI 'False) (U1 ∷ Type → Type) :+: C1 ('MetaCons "RequestedNumOfDiskSnapshots" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Word)))

data SnapshotInterval Source #

The length of time, requested by the user, that must pass before a snapshot is taken. It can be:

  1. either explicitly provided by the user, in seconds,
  2. or requested as the default value; the specific DiskPolicy determines what that is exactly, see mkDiskPolicy as an example

Instances

Instances details
Generic SnapshotInterval Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Associated Types

type Rep SnapshotInterval ∷ Type → Type #

Show SnapshotInterval Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Eq SnapshotInterval Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

type Rep SnapshotInterval Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

type Rep SnapshotInterval = D1 ('MetaData "SnapshotInterval" "Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy" "ouroboros-consensus-0.21.0.0-inplace" 'False) (C1 ('MetaCons "DefaultSnapshotInterval" 'PrefixI 'False) (U1 ∷ Type → Type) :+: C1 ('MetaCons "RequestedSnapshotInterval" 'PrefixI 'False) (S1 ('MetaSel ('Nothing ∷ Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 DiffTime)))

data TimeSinceLast time Source #

Constructors

NoSnapshotTakenYet 
TimeSinceLast time 

Instances

Instances details
Functor TimeSinceLast Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Methods

fmap ∷ (a → b) → TimeSinceLast a → TimeSinceLast b #

(<$) ∷ a → TimeSinceLast b → TimeSinceLast a #

Show time ⇒ Show (TimeSinceLast time) Source # 
Instance details

Defined in Ouroboros.Consensus.Storage.LedgerDB.DiskPolicy

Methods

showsPrec ∷ Int → TimeSinceLast time → ShowS #

show ∷ TimeSinceLast time → String #

showList ∷ [TimeSinceLast time] → ShowS #

defaultDiskPolicyArgs ∷ DiskPolicyArgs Source #

Default on-disk policy arguments suitable for use with cardano-node