Config
This document provides a comprehensive reference for all configuration options available in Evolve. Understanding these configurations will help you tailor Evolve's behavior to your specific needs, whether you're running an aggregator, a full node, or a light client.
Table of Contents
- DA-Only Sync Mode
- Introduction to Configurations
- Base Configuration
- Node Configuration (`node`)
- Pruning Configuration (`pruning`)
- Data Availability Configuration (`da`)
- P2P Configuration (`p2p`)
- RPC Configuration (`rpc`)
- Instrumentation Configuration (`instrumentation`)
- Logging Configuration (`log`)
- Signer Configuration (`signer`)
- Raft Configuration (`raft`)
DA-Only Sync Mode
Evolve supports running nodes that sync exclusively from the Data Availability (DA) layer without participating in P2P networking. This mode is useful for:
- Pure DA followers: Nodes that only need the canonical chain data from DA
- Resource optimization: Reducing network overhead by avoiding P2P gossip
- Simplified deployment: No need to configure or maintain P2P peer connections
- Isolated environments: Nodes that should not participate in P2P communication
To enable DA-only sync mode:
1. Leave P2P peers empty (default behavior):

```yaml
p2p:
  peers: "" # Empty or omit this field entirely
```

2. Configure the DA connection (required):

```yaml
da:
  address: "your-da-service:port"
  namespace: "your-namespace"
  # ... other DA configuration
```

3. Optional: You can still configure a P2P listen address for potential future connections, but without peers, no P2P networking will occur.
When running in DA-only mode, the node will:
- ✅ Sync blocks and headers from the DA layer
- ✅ Validate transactions and maintain state
- ✅ Serve RPC requests
- ❌ Not participate in P2P gossip or peer discovery
- ❌ Not share blocks with other nodes via P2P
- ❌ Not receive transactions via P2P (only from direct RPC submission)
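Putting the pieces above together, a minimal DA-only follower configuration might look like the following sketch. All values here are placeholders taken from the examples in this document, not real endpoints:

```yaml
# Hypothetical evnode.yml for a DA-only follower; all values are placeholders.
chain_id: "my-evolve-chain"
p2p:
  peers: ""                    # empty peers disables P2P networking entirely
da:
  address: "your-da-service:port"
  namespace: "your-namespace"
rpc:
  address: "127.0.0.1:7331"    # RPC is still served in DA-only mode
```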
Introduction to Configurations
Evolve configurations can be managed through a YAML file (typically evnode.yml located in ~/.evolve/config/ or <your_home_dir>/config/) and command-line flags. The system prioritizes configurations in the following order (highest priority first):
- Command-line flags: Override all other settings.
- YAML configuration file: Values specified in the `evnode.yml` file.
- Default values: Predefined defaults within Evolve.
Environment variables can also be used, typically prefixed with your executable's name (e.g., YOURAPP_CHAIN_ID="my-chain").
Base Configuration
These are fundamental settings for your Evolve node.
Root Directory
Description: The root directory where Evolve stores its data, including the database and configuration files. This is a foundational setting that dictates where all other file paths are resolved from.
YAML: This option is not set within the YAML configuration file itself, as it specifies the location of the configuration file and other application data.

- Command-line Flag: `--home <path>`
- Example: `--home /mnt/data/evolve_node`
- Default: `~/.evolve` (or a directory derived from the application name if `defaultHome` is customized)
- Constant: `FlagRootDir`
Database Path
Description: The path, relative to the Root Directory, where the Evolve database will be stored. This database contains blockchain state, blocks, and other critical node data.
YAML: Set this in your configuration file at the top level:
```yaml
db_path: "data"
```

- Command-line Flag: `--evnode.db_path <path>`
- Example: `--evnode.db_path "node_db"`
- Default: `"data"`
- Constant: `FlagDBPath`
Chain ID
Description: The unique identifier for your chain. This ID is used to differentiate your network from others and is crucial for network communication and transaction validation.
YAML: Set this in your configuration file at the top level:
```yaml
chain_id: "my-evolve-chain"
```

- Command-line Flag: `--chain_id <string>`
- Example: `--chain_id "super_rollup_testnet_v1"`
- Default: `"evolve"`
- Constant: `FlagChainID`
Node Configuration (node)
Settings related to the core behavior of the Evolve node, including its mode of operation and block production parameters.
YAML Section:

```yaml
node:
  # ... node configurations ...
```

Aggregator Mode
Description: If true, the node runs in aggregator mode. Aggregators are responsible for producing blocks by collecting transactions, ordering them, and proposing them to the network.
YAML:

```yaml
node:
  aggregator: true
```

- Command-line Flag: `--evnode.node.aggregator` (boolean, presence enables it)
- Example: `--evnode.node.aggregator`
- Default: `false`
- Constant: `FlagAggregator`
Based Sequencer Mode
Description: If true, the node runs in based sequencer mode. In this mode the aggregator only processes transactions fetched from the DA forced inclusion namespace rather than from the P2P mempool. Requires aggregator mode to be enabled.
YAML:

```yaml
node:
  based_sequencer: true
```

- Command-line Flag: `--evnode.node.based_sequencer` (boolean, presence enables it)
- Example: `--evnode.node.based_sequencer`
- Default: `false`
- Constant: `FlagBasedSequencer`
Light Client Mode
Description: If true, the node runs in light client mode. Light clients rely on full nodes for block headers and state information, offering a lightweight way to interact with the chain without storing all data.
YAML:

```yaml
node:
  light: true
```

- Command-line Flag: `--evnode.node.light` (boolean, presence enables it)
- Example: `--evnode.node.light`
- Default: `false`
- Constant: `FlagLight`
Block Time
Description: The target time interval between consecutive blocks produced by an aggregator. This duration (e.g., "500ms", "1s", "5s") dictates the pace of block production.
YAML:

```yaml
node:
  block_time: "1s"
```

- Command-line Flag: `--evnode.node.block_time <duration>`
- Example: `--evnode.node.block_time 2s`
- Default: `"1s"`
- Constant: `FlagBlockTime`
Maximum Pending Headers and Data
Description: The maximum number of headers or data items that can be pending Data Availability (DA) confirmation. When this limit is reached, the aggregator pauses block production until some are confirmed on the DA layer. Use 0 for no limit. This helps manage resource usage and DA layer capacity.
YAML:

```yaml
node:
  max_pending_headers_and_data: 100
```

- Command-line Flag: `--evnode.node.max_pending_headers_and_data <uint64>`
- Example: `--evnode.node.max_pending_headers_and_data 50`
- Default: `0` (no limit)
- Constant: `FlagMaxPendingHeadersAndData`
Lazy Mode (Lazy Aggregator)
Description: Enables lazy aggregation mode. In this mode, blocks are produced only when new transactions are available in the mempool or after the lazy_block_interval has passed. This optimizes resource usage by avoiding the creation of empty blocks during periods of inactivity.
YAML:

```yaml
node:
  lazy_mode: true
```

- Command-line Flag: `--evnode.node.lazy_mode` (boolean, presence enables it)
- Example: `--evnode.node.lazy_mode`
- Default: `false`
- Constant: `FlagLazyAggregator`
Lazy Block Interval
Description: The maximum time interval between blocks when running in lazy aggregation mode (lazy_mode). This ensures that blocks are produced periodically even if there are no new transactions, keeping the chain active. This value is generally larger than block_time.
YAML:

```yaml
node:
  lazy_block_interval: "30s"
```

- Command-line Flag: `--evnode.node.lazy_block_interval <duration>`
- Example: `--evnode.node.lazy_block_interval 2m`
- Default: `"1m"`
- Constant: `FlagLazyBlockTime`
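Since `lazy_block_interval` only takes effect when `lazy_mode` is enabled, the two options are typically set together on an aggregator. An illustrative sketch (values are examples, not recommendations):

```yaml
node:
  aggregator: true
  lazy_mode: true
  block_time: "1s"            # pace when transactions are flowing
  lazy_block_interval: "30s"  # upper bound between blocks when idle
```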
Scrape Interval
Description: The interval at which the reaper polls the execution layer for new transactions. Lower values reduce transaction detection latency but increase RPC load on the execution client.
YAML:

```yaml
node:
  scrape_interval: "1s"
```

- Command-line Flag: `--evnode.node.scrape_interval <duration>`
- Example: `--evnode.node.scrape_interval 500ms`
- Default: `"1s"`
- Constant: `FlagScrapeInterval`
Catchup Timeout
Description: When set to a non-zero duration, the aggregator syncs from DA and P2P before producing blocks. The value specifies how long to wait for P2P catchup after DA sync completes. Requires aggregator mode. Mutually exclusive with Raft consensus.
YAML:

```yaml
node:
  catchup_timeout: "30s"
```

- Command-line Flag: `--evnode.node.catchup_timeout <duration>`
- Example: `--evnode.node.catchup_timeout 1m`
- Default: `"0s"` (disabled)
- Constant: `FlagCatchupTimeout`
Readiness Window Seconds
Description: The time window in seconds used to calculate how many blocks behind the node can be and still be considered ready. The actual block threshold is derived by dividing this window by the block time. Default is 15 seconds.
YAML:

```yaml
node:
  readiness_window_seconds: 15
```

- Command-line Flag: `--evnode.node.readiness_window_seconds <uint64>`
- Example: `--evnode.node.readiness_window_seconds 30`
- Default: `15`
- Constant: `FlagReadinessWindowSeconds`
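Per the description above, the block threshold is derived by dividing the readiness window by the block time. A hypothetical sketch of that derivation (the exact rounding behavior in Evolve is an assumption here):

```python
# Assumed derivation: readiness threshold (in blocks) = window / block time.
def readiness_threshold(window_s: int, block_time_s: float) -> int:
    return int(window_s / block_time_s)

print(readiness_threshold(15, 1.0))   # default: 15 blocks behind is still "ready"
print(readiness_threshold(15, 0.5))   # faster chain: 30 blocks
```
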
Readiness Max Blocks Behind
Description: Explicit override for how many blocks behind best-known head the node can be and still be considered ready. When set to 0, the value is calculated automatically from readiness_window_seconds and the block time. Override this to set an absolute block count instead of a time-based window.
YAML:

```yaml
node:
  readiness_max_blocks_behind: 15
```

- Command-line Flag: `--evnode.node.readiness_max_blocks_behind <uint64>`
- Example: `--evnode.node.readiness_max_blocks_behind 30`
- Default: `0` (calculated from `readiness_window_seconds`)
- Constant: `FlagReadinessMaxBlocksBehind`
Pruning Configuration (pruning)
Description: Controls automatic pruning of stored block data and metadata from the local store. Pruning helps manage disk space by periodically removing old blocks and their associated state, while keeping a recent window of history for validation and queries.
Pruning Modes:
- `disabled` (default): Archive mode - keeps all blocks and metadata indefinitely
- `metadata`: Prunes only state metadata (execution state snapshots), keeps all blocks
- `all`: Prunes both blocks (headers, data, signatures) and metadata
How Pruning Works:
When pruning is enabled, the pruner runs at the configured interval and removes data beyond the retention window (pruning_keep_recent). The system uses intelligent batching to avoid overwhelming the node:
- Batch sizes are automatically calculated based on your `pruning_interval` and `block_time`
- Catch-up mode: When first enabling pruning on an existing node, smaller batches (2× blocks per interval) are used to gradually catch up without impacting performance
- Normal mode: Once caught up, larger batches (4× blocks per interval) are used for efficient maintenance
- Progress tracking: Pruning progress is saved after each batch, so restarts don't lose progress
Batch Size Examples:
With default settings (15 minute interval, 1 second blocks):
- Catch-up: ~1,800 blocks per run
- Normal: ~3,600 blocks per run
With high-throughput chain (15 minute interval, 100ms blocks):
- Catch-up: ~18,000 blocks per run
- Normal: ~36,000 blocks per run
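The numbers above follow directly from the interval-to-block-time ratio; a small sketch of the arithmetic, using the 2× and 4× multipliers from the batching description:

```python
# Reproduce the batch-size examples from the pruning description.
def batch_sizes(interval_ms: int, block_time_ms: int) -> tuple[int, int]:
    blocks_per_interval = interval_ms // block_time_ms
    catch_up = 2 * blocks_per_interval   # first runs after enabling pruning
    normal = 4 * blocks_per_interval     # steady-state maintenance
    return catch_up, normal

print(batch_sizes(15 * 60 * 1000, 1000))  # default: 15m interval, 1s blocks
print(batch_sizes(15 * 60 * 1000, 100))   # high-throughput: 100ms blocks
```
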
YAML:

```yaml
pruning:
  pruning_mode: "all"
  pruning_keep_recent: 100000
  pruning_interval: "15m"
```

Command-line Flags:

- `--evnode.pruning.pruning_mode <string>`
  - Description: Pruning mode: 'disabled' (keep all), 'metadata' (prune state only), or 'all' (prune blocks and state)
  - Example: `--evnode.pruning.pruning_mode all`
  - Default: `"disabled"`
- `--evnode.pruning.pruning_keep_recent <uint64>`
  - Description: Number of most recent blocks/metadata to retain when pruning is enabled. Must be > 0 when pruning is enabled.
  - Example: `--evnode.pruning.pruning_keep_recent 100000`
  - Default: `0`
- `--evnode.pruning.pruning_interval <duration>`
  - Description: How often to run the pruning process. Must be >= `block_time` when pruning is enabled. Larger intervals allow larger batch sizes.
  - Example: `--evnode.pruning.pruning_interval 15m`
  - Default: `0` (disabled)
Constants: `FlagPruningMode`, `FlagPruningKeepRecent`, `FlagPruningInterval`
Important Notes:
- When DA is enabled (DA address is configured), pruning only removes blocks that have been confirmed on the DA layer (for mode `all`) to ensure data safety
- When DA is not enabled (no DA address configured), pruning proceeds based solely on store height, allowing nodes without DA to manage disk space
- The first pruning run after enabling may take several cycles to catch up, processing data in smaller batches
- Pruning cannot be undone - ensure your retention window is sufficient for your use case
- For production deployments, consider keeping at least 100,000 recent blocks
- The pruning interval should be balanced with your disk space growth rate
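Following the notes above, a production-leaning configuration sketch would combine a generous retention window with the default interval (illustrative values only):

```yaml
# Illustrative pruning settings for a production node.
pruning:
  pruning_mode: "all"           # prune both blocks and state metadata
  pruning_keep_recent: 100000   # retention suggested in the notes above
  pruning_interval: "15m"       # must be >= node.block_time
```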
Data Availability Configuration (da)
Parameters for connecting and interacting with the Data Availability (DA) layer, which Evolve uses to publish block data.
YAML Section:

```yaml
da:
  # ... DA configurations ...
```

DA Service Address
Description: The network address (host:port) of the Data Availability layer service. Evolve connects to this endpoint to submit and retrieve block data.
YAML:

```yaml
da:
  address: "localhost:26659"
```

- Command-line Flag: `--evnode.da.address <string>`
- Example: `--evnode.da.address 192.168.1.100:26659`
- Default: `"http://localhost:7980"`
- Constant: `FlagDAAddress`
DA Authentication Token
Description: The authentication token required to interact with the DA layer service, if the service mandates authentication.
YAML:

```yaml
da:
  auth_token: "YOUR_DA_AUTH_TOKEN"
```

- Command-line Flag: `--evnode.da.auth_token <string>`
- Example: `--evnode.da.auth_token mysecrettoken`
- Default: `""` (empty)
- Constant: `FlagDAAuthToken`
DA Submit Options
Description: Additional options passed to the DA layer when submitting data. The format and meaning of these options depend on the specific DA implementation being used. For example, with Celestia, this can include custom gas settings or other submission parameters in JSON format.
Note: If you configure multiple signing addresses (see DA Signing Addresses), the selected signing address will be automatically merged into these options as a JSON field signer_address (matching Celestia's TxConfig schema). If the base options are already valid JSON, the signing address is added to the existing object; otherwise, a new JSON object is created.
YAML:

```yaml
da:
  submit_options: '{"key":"value"}' # Example, format depends on DA layer
```

- Command-line Flag: `--evnode.da.submit_options <string>`
- Example: `--evnode.da.submit_options '{"custom_param":true}'`
- Default: `""` (empty)
- Constant: `FlagDASubmitOptions`
DA Signing Addresses
Description: A comma-separated list of signing addresses to use for DA blob submissions. When multiple addresses are provided, they will be used in round-robin fashion to prevent sequence mismatches that can occur with high-throughput Cosmos SDK-based DA layers. This is particularly useful for Celestia when submitting many transactions concurrently.
Each submission will select the next address in the list, and that address will be automatically added to the submit_options as signer_address. This ensures that the DA layer (e.g., celestia-node) uses the specified account for signing that particular blob submission.
Setup Requirements:
- All addresses must be loaded into the DA node's keyring and have sufficient funds for transaction fees
- For Celestia, see the guide on setting up multiple accounts in the DA node documentation
YAML:

```yaml
da:
  signing_addresses:
    - "celestia1abc123..."
    - "celestia1def456..."
    - "celestia1ghi789..."
```

- Command-line Flag: `--evnode.da.signing_addresses <string>`
- Example: `--evnode.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi...`
- Default: `[]` (empty, uses default DA node behavior)
- Constant: `FlagDASigningAddresses`
Behavior:
- If no signing addresses are configured, submissions use the DA layer's default signing behavior
- If one address is configured, all submissions use that address
- If multiple addresses are configured, they are used in round-robin order to distribute the load and prevent nonce/sequence conflicts
- The address selection is thread-safe for concurrent submissions
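The round-robin, thread-safe selection described above can be sketched as follows. This is a hypothetical illustration, not Evolve's actual implementation, and the addresses are the placeholder values used elsewhere in this document:

```python
import itertools
import threading

# Hypothetical sketch of round-robin signer selection; Evolve's real
# implementation (in Go) may differ in detail.
class SignerPool:
    def __init__(self, addresses):
        self._cycle = itertools.cycle(addresses)
        self._lock = threading.Lock()

    def next_address(self) -> str:
        # The lock makes selection safe under concurrent submissions.
        with self._lock:
            return next(self._cycle)

pool = SignerPool(["celestia1abc...", "celestia1def...", "celestia1ghi..."])
picks = [pool.next_address() for _ in range(4)]
print(picks)  # wraps back to the first address on the fourth pick
```

Each pick would then be merged into `submit_options` as the `signer_address` field, as described in the DA Submit Options note.
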
DA Max Submit Attempts
Description: The maximum number of attempts to submit data to the DA layer before giving up. Higher values provide more resilience against transient DA failures but can delay error reporting.
YAML:

```yaml
da:
  max_submit_attempts: 30
```

- Command-line Flag: `--evnode.da.max_submit_attempts <int>`
- Example: `--evnode.da.max_submit_attempts 10`
- Default: `30`
- Constant: `FlagDAMaxSubmitAttempts`
DA Namespace
Description: The namespace ID used when submitting blobs (block data) to the DA layer. This helps segregate data from different chains or applications on a shared DA layer.
Note: If only `namespace` is provided, it is used for both headers and data; otherwise, `data_namespace` is used for data. Separating the two namespaces allows light clients to sync faster.
YAML:

```yaml
da:
  namespace: "MY_UNIQUE_NAMESPACE_ID"
```

- Command-line Flag: `--evnode.da.namespace <string>`
- Example: `--evnode.da.namespace 0x1234567890abcdef`
- Default: randomly generated at startup
- Constant: `FlagDANamespace`
DA Data Namespace
Description: The namespace ID specifically for submitting transaction data to the DA layer. Transaction data is submitted separately from headers, enabling nodes to sync only the data they need. The namespace value is encoded by the node to ensure proper formatting and compatibility with the DA layer.
YAML:

```yaml
da:
  data_namespace: "DATA_NAMESPACE_ID"
```

- Command-line Flag: `--evnode.da.data_namespace <string>`
- Example: `--evnode.da.data_namespace my_data_namespace`
- Default: `""` (falls back to `namespace` if not set)
- Constant: `FlagDADataNamespace`
DA Forced Inclusion Namespace
Description: The namespace ID used for forced inclusion transactions on the DA layer. When set, the based sequencer will fetch transactions from this namespace. Required when running in based sequencer mode.
YAML:

```yaml
da:
  forced_inclusion_namespace: "FORCED_INCLUSION_NAMESPACE_ID"
```

- Command-line Flag: `--evnode.da.forced_inclusion_namespace <string>`
- Example: `--evnode.da.forced_inclusion_namespace 0xabcdef1234567890`
- Default: `""` (empty)
- Constant: `FlagDAForcedInclusionNamespace`
DA Block Time
Description: The average block time of the Data Availability chain (specified as a duration string, e.g., "15s", "1m"). This value influences:
- The frequency of DA layer syncing.
- The maximum backoff time for retrying DA submissions.
- Calculation of transaction expiration when multiplied by `mempool_ttl`.
YAML:

```yaml
da:
  block_time: "6s"
```

- Command-line Flag: `--evnode.da.block_time <duration>`
- Example: `--evnode.da.block_time 12s`
- Default: `"6s"`
- Constant: `FlagDABlockTime`
DA Mempool TTL
Description: The number of DA blocks after which a transaction submitted to the DA layer is considered expired and potentially dropped from the DA layer's mempool. This also controls the retry backoff timing for DA submissions.
YAML:

```yaml
da:
  mempool_ttl: 20
```

- Command-line Flag: `--evnode.da.mempool_ttl <uint64>`
- Example: `--evnode.da.mempool_ttl 30`
- Default: `0`
- Constant: `FlagDAMempoolTTL`
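As noted under DA Block Time, the expiration window is the DA block time multiplied by `mempool_ttl`. A sketch of that arithmetic with the example values above (the exact expiry semantics in Evolve are an assumption here):

```python
# Assumed relationship: expiry window = da.block_time * da.mempool_ttl.
da_block_time_s = 6   # da.block_time = "6s"
mempool_ttl = 20      # da.mempool_ttl = 20
expiry_s = da_block_time_s * mempool_ttl
print(expiry_s)  # seconds before a pending DA submission is considered expired
```
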
DA Request Timeout
Description: Per-request timeout applied to DA GetIDs and Get RPC calls while retrieving blobs. Increase this value if your DA endpoint has high latency to avoid premature failures; decrease it to make the syncer fail fast and free resources sooner when the DA node becomes unresponsive.
YAML:

```yaml
da:
  request_timeout: "30s"
```

- Command-line Flag: `--evnode.da.request_timeout <duration>`
- Example: `--evnode.da.request_timeout 45s`
- Default: `"1m"`
- Constant: `FlagDARequestTimeout`
DA Batching Strategy
Description: Controls how blocks are batched before submission to the DA layer. Different strategies offer trade-offs between latency, cost efficiency, and throughput. All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit.
Available strategies:
- `immediate`: Submits as soon as any items are available. Best for low-latency requirements where cost is not a concern.
- `size`: Waits until the batch reaches a size threshold (fraction of max blob size). Best for maximizing blob utilization and minimizing costs when latency is flexible.
- `time`: Waits for a time interval before submitting. Provides predictable submission timing aligned with DA block times.
- `adaptive`: Balances size and time constraints: submits when either the size threshold is reached or the max delay expires. Recommended for most production deployments as it optimizes both cost and latency.
YAML:

```yaml
da:
  batching_strategy: "time"
```

- Command-line Flag: `--evnode.da.batching_strategy <string>`
- Example: `--evnode.da.batching_strategy adaptive`
- Default: `"time"`
- Constant: `FlagDABatchingStrategy`
DA Batch Size Threshold
Description: The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the size and adaptive strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
Higher values maximize blob utilization and reduce costs but may increase latency. Lower values reduce latency but may result in less efficient blob usage.
YAML:

```yaml
da:
  batch_size_threshold: 0.8
```

- Command-line Flag: `--evnode.da.batch_size_threshold <float64>`
- Example: `--evnode.da.batch_size_threshold 0.9`
- Default: `0.8` (80% of max blob size)
- Constant: `FlagDABatchSizeThreshold`
DA Batch Max Delay
Description: The maximum time to wait before submitting a batch regardless of size. Applies to the time and adaptive strategies. Lower values reduce latency but may increase costs due to smaller batches. This value is typically aligned with the DA chain's block time to ensure submissions land in consecutive blocks.
When set to 0, defaults to the DA BlockTime value.
YAML:

```yaml
da:
  batch_max_delay: "6s"
```

- Command-line Flag: `--evnode.da.batch_max_delay <duration>`
- Example: `--evnode.da.batch_max_delay 12s`
- Default: `0` (uses DA BlockTime)
- Constant: `FlagDABatchMaxDelay`
DA Batch Min Items
Description: The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
YAML:

```yaml
da:
  batch_min_items: 1
```

- Command-line Flag: `--evnode.da.batch_min_items <uint64>`
- Example: `--evnode.da.batch_min_items 5`
- Default: `1`
- Constant: `FlagDABatchMinItems`
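Taken together, the batching options combine naturally under the `adaptive` strategy. As an illustrative sketch (values are examples, not recommendations), a setup targeting full blobs while capping latency at roughly two DA blocks:

```yaml
da:
  batching_strategy: "adaptive"
  batch_size_threshold: 0.8   # submit at 80% of max blob size...
  batch_max_delay: "12s"      # ...or after 12s, whichever comes first
  batch_min_items: 1
```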
P2P Configuration (p2p)
Settings for peer-to-peer networking, enabling nodes to discover each other, exchange blocks, and share transactions.
YAML Section:

```yaml
p2p:
  # ... P2P configurations ...
```

P2P Listen Address
Description: The network address (host:port) on which the Evolve node will listen for incoming P2P connections from other nodes.
YAML:

```yaml
p2p:
  listen_address: "0.0.0.0:7676"
```

- Command-line Flag: `--evnode.p2p.listen_address <string>`
- Example: `--evnode.p2p.listen_address /ip4/127.0.0.1/tcp/26656`
- Default: `"/ip4/0.0.0.0/tcp/7676"`
- Constant: `FlagP2PListenAddress`
P2P Peers
Description: A comma-separated list of peer addresses (e.g., multiaddresses) that the node will attempt to connect to for bootstrapping its P2P connections. These are often referred to as seed nodes.
For DA-only sync mode: Leave this field empty (default) to disable P2P networking entirely. When no peers are configured, the node will sync exclusively from the Data Availability layer without participating in P2P gossip, peer discovery, or block sharing. This is useful for nodes that only need to follow the canonical chain data from DA.
YAML:

```yaml
p2p:
  peers: "/ip4/some_peer_ip/tcp/7676/p2p/PEER_ID1,/ip4/another_peer_ip/tcp/7676/p2p/PEER_ID2"
  # For DA-only sync, leave peers empty:
  # peers: ""
```

- Command-line Flag: `--evnode.p2p.peers <string>`
- Example: `--evnode.p2p.peers /dns4/seed.example.com/tcp/26656/p2p/12D3KooW...`
- Default: `""` (empty - enables DA-only sync mode)
- Constant: `FlagP2PPeers`
P2P Blocked Peers
Description: A comma-separated list of peer IDs that the node should block from connecting. This can be used to prevent connections from known malicious or problematic peers.
YAML:

```yaml
p2p:
  blocked_peers: "PEER_ID_TO_BLOCK1,PEER_ID_TO_BLOCK2"
```

- Command-line Flag: `--evnode.p2p.blocked_peers <string>`
- Example: `--evnode.p2p.blocked_peers 12D3KooW...,12D3KooX...`
- Default: `""` (empty)
- Constant: `FlagP2PBlockedPeers`
P2P Allowed Peers
Description: A comma-separated list of peer IDs that the node should exclusively allow connections from. If this list is non-empty, only peers in this list will be able to connect.
YAML:

```yaml
p2p:
  allowed_peers: "PEER_ID_TO_ALLOW1,PEER_ID_TO_ALLOW2"
```

- Command-line Flag: `--evnode.p2p.allowed_peers <string>`
- Example: `--evnode.p2p.allowed_peers 12D3KooY...,12D3KooZ...`
- Default: `""` (empty, allow all unless blocked)
- Constant: `FlagP2PAllowedPeers`
RPC Configuration (rpc)
Settings for the Remote Procedure Call (RPC) server, which allows clients and applications to interact with the Evolve node.
YAML Section:

```yaml
rpc:
  # ... RPC configurations ...
```

RPC Server Address
Description: The network address (host:port) to which the RPC server will bind and listen for incoming requests.
YAML:

```yaml
rpc:
  address: "127.0.0.1:7331"
```

- Command-line Flag: `--evnode.rpc.address <string>`
- Example: `--evnode.rpc.address 0.0.0.0:26657`
- Default: `"127.0.0.1:7331"`
- Constant: `FlagRPCAddress`
Enable DA Visualization
Description: If true, enables the Data Availability (DA) visualization endpoints that provide real-time monitoring of blob submissions to the DA layer. This includes a web-based dashboard and REST API endpoints for tracking submission statistics, monitoring DA health, and analyzing blob details. Only aggregator nodes submit data to the DA layer, so this feature is most useful when running in aggregator mode.
YAML:

```yaml
rpc:
  enable_da_visualization: true
```

- Command-line Flag: `--evnode.rpc.enable_da_visualization` (boolean, presence enables it)
- Example: `--evnode.rpc.enable_da_visualization`
- Default: `false`
- Constant: `FlagRPCEnableDAVisualization`
See the DA Visualizer Guide for detailed information on using this feature.
Health Endpoints
/health/live
Returns 200 OK if the process is alive and can access the store.
```shell
curl http://localhost:7331/health/live
```

/health/ready
Returns 200 OK if the node can serve correct data. Checks:
- P2P is listening (if enabled)
- Has synced blocks
- Not too far behind network
- Non-aggregators: has peers
- Aggregators: producing blocks at expected rate
```shell
curl http://localhost:7331/health/ready
```

Configure these via `readiness_window_seconds` and `readiness_max_blocks_behind` in the node configuration.
Instrumentation Configuration (instrumentation)
Settings for enabling and configuring metrics and profiling endpoints, useful for monitoring node performance and debugging.
YAML Section:

```yaml
instrumentation:
  # ... instrumentation configurations ...
```

Enable Prometheus Metrics
Description: If true, enables the Prometheus metrics endpoint, allowing Prometheus to scrape operational data from the Evolve node.
YAML:

```yaml
instrumentation:
  prometheus: true
```

- Command-line Flag: `--evnode.instrumentation.prometheus` (boolean, presence enables it)
- Example: `--evnode.instrumentation.prometheus`
- Default: `false`
- Constant: `FlagPrometheus`
Prometheus Listen Address
Description: The network address (host:port) where the Prometheus metrics server will listen for scraping requests.
See Metrics for more details on what metrics are exposed.
YAML:

```yaml
instrumentation:
  prometheus_listen_addr: ":2112"
```

- Command-line Flag: `--evnode.instrumentation.prometheus_listen_addr <string>`
- Example: `--evnode.instrumentation.prometheus_listen_addr 0.0.0.0:9090`
- Default: `":26660"`
- Constant: `FlagPrometheusListenAddr`
Maximum Open Connections
Description: The maximum number of simultaneous connections allowed for the metrics server (e.g., Prometheus endpoint).
YAML:

```yaml
instrumentation:
  max_open_connections: 100
```

- Command-line Flag: `--evnode.instrumentation.max_open_connections <int>`
- Example: `--evnode.instrumentation.max_open_connections 50`
- Default: `3`
- Constant: `FlagMaxOpenConnections`
Enable Pprof Profiling
Description: If true, enables the pprof HTTP endpoint, which provides runtime profiling data for debugging performance issues. Accessing these endpoints can help diagnose CPU and memory usage.
YAML:

```yaml
instrumentation:
  pprof: true
```

- Command-line Flag: `--evnode.instrumentation.pprof` (boolean, presence enables it)
- Example: `--evnode.instrumentation.pprof`
- Default: `false`
- Constant: `FlagPprof`
Pprof Listen Address
Description: The network address (host:port) where the pprof HTTP server will listen for profiling requests.
YAML:

```yaml
instrumentation:
  pprof_listen_addr: "localhost:6060"
```

- Command-line Flag: `--evnode.instrumentation.pprof_listen_addr <string>`
- Example: `--evnode.instrumentation.pprof_listen_addr 0.0.0.0:6061`
- Default: `":6060"`
- Constant: `FlagPprofListenAddr`
Enable Tracing
Description: If true, enables OpenTelemetry tracing. Traces are exported via OTLP to the configured endpoint.
YAML:

```yaml
instrumentation:
  tracing: true
```

- Command-line Flag: `--evnode.instrumentation.tracing` (boolean, presence enables it)
- Example: `--evnode.instrumentation.tracing`
- Default: `false`
- Constant: `FlagTracing`
Tracing Endpoint
Description: The OTLP endpoint (host:port) to which traces are exported. Must be set when tracing is enabled.
YAML:

```yaml
instrumentation:
  tracing_endpoint: "localhost:4318"
```

- Command-line Flag: `--evnode.instrumentation.tracing_endpoint <string>`
- Example: `--evnode.instrumentation.tracing_endpoint otel-collector:4318`
- Default: `"localhost:4318"`
- Constant: `FlagTracingEndpoint`
Tracing Service Name
Description: The service.name resource attribute attached to all traces exported by this node. Use this to identify the node in your tracing backend.
YAML:

```yaml
instrumentation:
  tracing_service_name: "ev-node"
```

- Command-line Flag: `--evnode.instrumentation.tracing_service_name <string>`
- Example: `--evnode.instrumentation.tracing_service_name my-rollup-node`
- Default: `"ev-node"`
- Constant: `FlagTracingServiceName`
Tracing Sample Rate
Description: The TraceID ratio-based sampling rate for traces, between 0.0 and 1.0. A value of 1.0 samples all traces; 0.1 samples 10%.
YAML:

```yaml
instrumentation:
  tracing_sample_rate: 0.1
```

- Command-line Flag: `--evnode.instrumentation.tracing_sample_rate <float64>`
- Example: `--evnode.instrumentation.tracing_sample_rate 0.5`
- Default: `0.1`
- Constant: `FlagTracingSampleRate`
Logging Configuration (log)
Settings that control the verbosity and format of log output from the Evolve node. These are typically set via global flags.
YAML Section:

```yaml
log:
  # ... logging configurations ...
```

Log Level
Description: Sets the minimum severity level for log messages to be displayed. Common levels include debug, info, warn, error.
YAML:

```yaml
log:
  level: "info"
```

- Command-line Flag: `--log.level <string>` (Note: some applications might use a different flag name like `--log_level`)
- Example: `--log.level debug`
- Default: `"info"`
- Constant: `FlagLogLevel` (value: `"evolve.log.level"`, but often overridden by global app flags)
Log Format
Description: Sets the format for log output. Common formats include text (human-readable) and json (structured, machine-readable).
YAML:

```yaml
log:
  format: "text"
```

- Command-line Flag: `--log.format <string>` (Note: some applications might use a different flag name like `--log_format`)
- Example: `--log.format json`
- Default: `"text"`
- Constant: `FlagLogFormat` (value: `"evolve.log.format"`, but often overridden by global app flags)
Log Trace (Stack Traces)
Description: If true, enables the inclusion of stack traces in error logs. This can be very helpful for debugging issues by showing the call stack at the point of an error.
YAML:

```yaml
log:
  trace: false
```

- Command-line Flag: `--log.trace` (boolean, presence enables it; Note: some applications might use a different flag name like `--log_trace`)
- Example: `--log.trace`
- Default: `false`
- Constant: `FlagLogTrace` (value: `"evolve.log.trace"`, but often overridden by global app flags)
Signer Configuration (signer)
Settings related to the signing mechanism used by the node, particularly for aggregators that need to sign blocks.
YAML Section:

```yaml
signer:
  # ... signer configurations ...
```

Signer Type
Description: Specifies the type of remote signer to use. Common options might include file (for key files) or grpc (for connecting to a remote signing service).
YAML:

```yaml
signer:
  signer_type: "file"
```

- Command-line Flag: `--evnode.signer.signer_type <string>`
- Example: `--evnode.signer.signer_type grpc`
- Default: `"file"`
- Constant: `FlagSignerType`
Signer Path
Description: The path to the signer file (if signer_type is file) or the address of the remote signer service (if signer_type is grpc or similar).
YAML:

```yaml
signer:
  signer_path: "/path/to/priv_validator_key.json" # For file signer
  # signer_path: "localhost:9000" # For gRPC signer
```

- Command-line Flag: `--evnode.signer.signer_path <string>`
- Example: `--evnode.signer.signer_path ./config`
- Default: `"config"`
- Constant: `FlagSignerPath`
Signer Passphrase File
Description: Path to a file containing the passphrase for the signer key. Required when using a file signer in aggregator mode. Reading the passphrase from a file avoids exposing it in shell history.
YAML: This is not stored in the YAML file. Provide it via flag or environment variable.
- Command-line Flag: `--evnode.signer.passphrase_file <string>`
- Example: `--evnode.signer.passphrase_file /run/secrets/signer_passphrase`
- Default: `""` (empty)
- Constant: `FlagSignerPassphraseFile`
Raft Configuration (raft)
Settings for Raft-based consensus used for leader election and state replication in multi-aggregator deployments. All fields are ignored when raft.enable is false.
Note: Raft consensus and catchup_timeout are mutually exclusive. Enabling both will produce a validation error.
YAML Section:

```yaml
raft:
  # ... raft configurations ...
```

Enable Raft
Description: If true, enables Raft consensus for leader election and state replication across multiple aggregator nodes.
YAML:

```yaml
raft:
  enable: true
```

- Command-line Flag: `--evnode.raft.enable` (boolean, presence enables it)
- Default: `false`
- Constant: `FlagRaftEnable`
Raft Node ID
Description: Unique identifier for this node within the Raft cluster. Required when Raft is enabled.
YAML:

```yaml
raft:
  node_id: "node1"
```

- Command-line Flag: `--evnode.raft.node_id <string>`
- Default: `""` (required when enabled)
- Constant: `FlagRaftNodeID`
Raft Address
Description: Network address (host:port) for Raft inter-node communication. Required when Raft is enabled.
YAML:

```yaml
raft:
  raft_addr: "0.0.0.0:7000"
```

- Command-line Flag: `--evnode.raft.raft_addr <string>`
- Default: `""` (required when enabled)
- Constant: `FlagRaftAddr`
Raft Directory
Description: Directory for storing Raft logs and snapshots. Required when Raft is enabled.
YAML:

```yaml
raft:
  raft_dir: "/home/user/.evnode/raft"
```

- Command-line Flag: `--evnode.raft.raft_dir <string>`
- Default: `"<home>/raft"`
- Constant: `FlagRaftDir`
Raft Bootstrap
Description: If true, bootstraps a new Raft cluster. Only set this on the very first node when initializing a new cluster.
YAML:

```yaml
raft:
  bootstrap: true
```

- Command-line Flag: `--evnode.raft.bootstrap` (boolean, presence enables it)
- Default: `false`
- Constant: `FlagRaftBootstrap`
Raft Peers
Description: Comma-separated list of peer Raft addresses in nodeID@host:port format.
YAML:

```yaml
raft:
  peers: "node2@192.168.1.2:7000,node3@192.168.1.3:7000"
```

- Command-line Flag: `--evnode.raft.peers <string>`
- Default: `""` (empty)
- Constant: `FlagRaftPeers`
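For illustration, the first node bootstrapping a hypothetical three-node cluster might combine the options above as follows (node IDs and addresses are placeholders reused from the examples in this section):

```yaml
# Illustrative raft settings for the first node of a new three-node cluster.
raft:
  enable: true
  node_id: "node1"
  raft_addr: "0.0.0.0:7000"
  bootstrap: true   # set only on the very first node of a new cluster
  peers: "node2@192.168.1.2:7000,node3@192.168.1.3:7000"
```

The other nodes would use their own `node_id` and `raft_addr`, list their peers, and leave `bootstrap` at `false`.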
Raft Snap Count
Description: Number of log entries between Raft snapshots. Lower values reduce recovery time but increase snapshot I/O overhead.
YAML:

```yaml
raft:
  snap_count: 10000
```

- Command-line Flag: `--evnode.raft.snap_count <uint64>`
- Default: `0`
- Constant: `FlagRaftSnapCount`
Raft Send Timeout
Description: Maximum duration to wait for a message to be sent to a peer before considering it failed.
YAML:

```yaml
raft:
  send_timeout: "200ms"
```

- Command-line Flag: `--evnode.raft.send_timeout <duration>`
- Default: `"200ms"`
- Constant: `FlagRaftSendTimeout`
Raft Heartbeat Timeout
Description: Time between leader heartbeats sent to followers.
YAML:

```yaml
raft:
  heartbeat_timeout: "350ms"
```

- Command-line Flag: `--evnode.raft.heartbeat_timeout <duration>`
- Default: `"350ms"`
- Constant: `FlagRaftHeartbeatTimeout`
Raft Leader Lease Timeout
Description: Duration of the leader lease, which allows a leader to serve reads locally without round-tripping to followers.
YAML:

```yaml
raft:
  leader_lease_timeout: "175ms"
```

- Command-line Flag: `--evnode.raft.leader_lease_timeout <duration>`
- Default: `"175ms"`
- Constant: `FlagRaftLeaderLeaseTimeout`
This reference should help you configure your Evolve node effectively. Always refer to the specific version of Evolve you are using, as options and defaults may change over time.
