Cardano Wallet
Cardano Wallet helps you manage your Ada. You can use it to send and receive payments on the Cardano blockchain.
This project provides an HTTP Application Programming Interface (API) and command-line interface (CLI) for working with your wallet.
It can be used as a component of a frontend such as Daedalus, which provides a friendly user interface for wallets. Most users who would like to use Cardano should start with Daedalus.
User Manual
Information about using the wallet.
This includes:
- Tutorials on how to perform specific tasks.
- General recommendations.
- Links to further documentation.
When to use cardano-wallet
Cardano-wallet was originally designed to provide the wallet logic for the graphical frontend Daedalus. Cardano-wallet does not offer a graphical interface, but it provides an HTTP API and a command line interface (CLI).
Cardano-wallet is a full-node wallet, which means that it depends on a `cardano-node` to provide blockchain data. In turn, this means that cardano-wallet enjoys the security properties of the Ouroboros consensus protocol. Other wallets such as Eternl, Yoroi or Lace are light wallets, which means that they have to trust a third party to provide blockchain data, but they use fewer computing resources in return.
Cardano-wallet supports:
- All use cases:
  - Managing a balance of ADA and tokens
  - Submitting transactions to the Cardano network via a local `cardano-node`
- Personal use:
  - Staking
  - Compatibility with legacy wallets
  - Basic privacy through payment address creation
- Business use:
  - Minting and burning tokens
  - Multi-party signatures
We also accommodate:
- Cryptocurrency exchanges
Please reach out to us if you feel that we could do a better job of covering your use case. Preferably, tell us more about what you are trying to achieve and the problems you want to solve. Then we can help you find out whether cardano-wallet is the right tool for you, and we can better accommodate your use case in our development roadmap.
Scalability
Computing resources
A single cardano-wallet process supports multiple wallets, each with separate (sets of) signing keys. These wallets run fairly independently, so the computing resources used by cardano-wallet will be the sum of the resources used by each wallet.
Each wallet uses computing resources that grow with the number of addresses and UTxO in the wallet. For precise numbers, see our Hardware Recommendations.
If computing resources on a single machine are not sufficient, you can run multiple cardano-wallet processes with different wallets on separate machines.
Transaction throughput
If you want to submit frequent transactions to the Cardano blockchain, the wallet is typically not the bottleneck; rather, the number of transactions per second that the Cardano blockchain can accept is the limiting factor. That said, transactions per second are not directly relevant to your end users; see Performance engineering: Lies, damned lies and (TPS) benchmarks for a thorough discussion of performance on Cardano.
In general, note that the more your transactions depend on each other, the less they can be processed in parallel, reducing throughput and increasing latency.
On Cardano mainnet, on average one block is produced every 20 slots, and one slot lasts 1 second (parameters as of July 2023). Each block may contain a maximum number of transactions. Using these quantities, you can estimate an upper bound on the number of your transactions per second that the blockchain can accommodate.
If you need more frequent transactions than the estimate above allows, you need to consider a scaling solution such as Hydra or a sidechain.
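The estimate above can be sketched as shell arithmetic. The figures are assumptions for illustration: one block every 20 seconds, and a hypothetical average of 250 transactions per block (real capacity depends on transaction sizes and current protocol parameters):

```shell
# Upper bound on transactions per hour, assuming (hypothetically):
#   - 1 block every 20 seconds
#   - ~250 transactions fit in one block
echo $(( 3600 / 20 * 250 ))   # transactions per hour
```

Divide by 3600 to get the corresponding transactions-per-second figure.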
How to
Start wallet server
Overview
The easiest and most common way of managing your funds on the Cardano blockchain is through a wallet. This guide shows how to start `cardano-wallet` together with `cardano-node`.
Full node mode
Here we are going to start `cardano-wallet` in full node mode, which means that we also need `cardano-node` running on the same machine. We can get binaries of `cardano-wallet` and a compatible version of `cardano-node` from the cardano-wallet release page. The `cardano-wallet` archives published for each release include, besides `cardano-wallet` itself, all the relevant tools like `cardano-node`, `cardano-cli`, `cardano-addresses` and `bech32`.
Alternatively, one can use the handy docker-compose setup to start the wallet and the node on different networks:
> NETWORK=mainnet docker-compose up
> NETWORK=preprod docker-compose up
> NETWORK=preview docker-compose up
Pre-requisites
- Install cardano-wallet from cardano wallet release page.
- Install cardano-node from cardano wallet release page.
- Download up-to-date configuration files from Cardano Book.
Start cardano-wallet in full node mode
Configuration files for all Cardano networks can be found in Cardano Book.
- Start node:
> cardano-node run \
--config config.json \
--topology topology.json \
--database-path ./db \
--socket-path /path/to/node.socket
- Start wallet:
When starting a wallet instance that targets a testing environment such as `preview` or `preprod`, we need to provide a `byron-genesis.json` file to the wallet:
> cardano-wallet serve --port 8090 \
--node-socket /path/to/node.socket \
--testnet byron-genesis.json \
--database ./wallet-db \
--token-metadata-server https://metadata.cardano-testnet.iohkdev.io
In case of `mainnet` we simply replace `--testnet byron-genesis.json` with the option `--mainnet`.
> cardano-wallet serve --port 8090 \
--node-socket /path/to/node.socket \
--mainnet \
--database ./wallet-db \
--token-metadata-server https://tokens.cardano.org
We use different URLs for mainnet and test networks with the `--token-metadata-server` option. These URLs point to Cardano Token Registry servers. See assets for more information.
That's it! We can basically start managing our wallets from this point onwards. See how-to-create-a-wallet and how-to-manage-wallets.
However, in order to be able to make transactions, we still need to wait until `cardano-node` is fully synced with the Cardano blockchain. In case of `mainnet` it may take several hours; in case of a testnet, a bit less.
We can monitor this process using cardano-wallet's `GET /network/information` endpoint. Once the endpoint returns a `sync_progress` status of `ready`, we'll know we are good to go:
> curl -X GET http://localhost:8090/v2/network/information | jq .sync_progress
{
"status": "ready"
}
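The same `jq` filter can be wrapped in a simple polling loop (a sketch, assuming the server started above is listening on localhost:8090). The canned response below just demonstrates the filter without needing a running server:

```shell
# Demonstrate the filter on a canned (not real) response:
echo '{"sync_progress":{"status":"syncing","progress":{"quantity":82.5,"unit":"percent"}}}' \
  | jq -r .sync_progress.status
# Sketch of a polling loop against a real server:
# while [ "$(curl -s http://localhost:8090/v2/network/information | jq -r .sync_progress.status)" != "ready" ]; do
#   sleep 10
# done
```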
How to create a wallet
Pre-requisites
Overview
The easiest and most common way of managing your funds on the Cardano blockchain is through a hierarchical deterministic wallet. One can create a wallet using one of the following endpoints of the http-api:
- POST /wallets - create a Shelley wallet
- POST /byron-wallets - create a Byron wallet
- POST /shared-wallets - create a Shared wallet
Shelley wallets
Shelley wallets are sequential wallets recommended for any new applications. In particular, they support the delegation feature, which is not supported by Byron-era wallets.
An example request for creating a new Shelley wallet may look as follows:
> curl -X POST http://localhost:8090/v2/wallets \
-d '{"mnemonic_sentence":["slab","praise","suffer","rabbit","during","dream","arch","harvest","culture","book","owner","loud","wool","salon","table","animal","vivid","arrow","dirt","divide","humble","tornado","solution","jungle"],
"passphrase":"Secure Passphrase",
"name":"My Test Wallet",
"address_pool_gap":20}' \
-H "Content-Type: application/json"
Note also that you can have many wallets operated by a single `cardano-wallet` server.
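For instance, all wallets known to the server can be listed with GET /wallets and counted with `jq`. The canned response below (reusing wallet ids from this guide) just illustrates the filter without a running server:

```shell
# Against a real server:
# curl -X GET http://localhost:8090/v2/wallets | jq length
# Canned two-wallet sample:
echo '[{"id":"73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d"},{"id":"1b0aa24994b4181e79116c131510f2abf6cdaa4f"}]' \
  | jq length
```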
Byron wallets
There are several Byron wallet types available:
- random
- icarus
- trezor
- ledger
The basic difference between them is that for a `random` wallet the user needs to create addresses manually, whereas for sequential wallets like `icarus`, `trezor` and `ledger`, addresses are created automatically by the wallet.
Please note that `random` wallets are considered deprecated and should not be used by new applications.
See more on hierarchical deterministic wallet and byron-address-format.
Shared wallets
Shared wallets are modern sequential "Shelley-type" wallets. The main idea is that funds within shared wallets can be managed by more than one owner. When creating such a wallet, it's necessary to provide a `payment_script_template` that lists all future co-signers and their extended account public keys (for spending operations), as well as a template script primitive that establishes the rules for sharing custody of the wallet's spending operations between the co-signers. Similarly, one can provide a `delegation_script_template` for sharing custody of delegation operations.
An example request for creating a new Shared wallet may look as follows:
> curl -X POST http://localhost:8090/v2/shared-wallets \
-d '{"mnemonic_sentence":["possible","lizard","zebra","hill","pluck","tourist","page","ticket","amount","fall","purpose","often","chest","fantasy","funny","sense","pig","goat","pet","minor","creek","vacant","swarm","fun"],
"passphrase":"Secure Passphrase",
"name":"My Test Shared Wallet",
"account_index":"0H",
"payment_script_template":{"cosigners":{"cosigner#0":"self"},"template":{"all":["cosigner#0",{"active_from":120}]}},
"delegation_script_template":{"cosigners":{"cosigner#0":"self","cosigner#1":"1423856bc91c49e928f6f30f4e8d665d53eb4ab6028bd0ac971809d514c92db11423856bc91c49e928f6f30f4e8d665d53eb4ab6028bd0ac971809d514c92db2"},"template":{"all":["cosigner#0","cosigner#1",{"active_from":120},{"active_until":300}]}}}' \
-H "Content-Type: application/json"
See a more elaborate example on shared wallets.
How to manage wallets
Pre-requisites
Overview
Once you have created a wallet you can manage it with `cardano-wallet` endpoints. There are several operations available. Refer to http-api for the extensive list of all operations for the different wallet types supported by `cardano-wallet`.
How to create addresses
Pre-requisites
Overview
Once you have a wallet you can manage your funds. In order to receive a transaction you need to provide an address associated with your wallet to the sender.
Sequential wallets (Icarus, Shelley & Shared)
Since Icarus, wallets use sequential derivation, which must satisfy very specific rules: a wallet is not allowed to use addresses beyond a certain limit before previously generated addresses have been used. This means that, at a given point in time, a wallet has both a minimum and a maximum number of possible unused addresses. By default, the maximum number of consecutive unused addresses is set to `20`.
Therefore, address management is entirely done by the server and users aren't allowed to fiddle with them. The list of available addresses can be fetched from the server at any time via:
- GET /byron-wallets/{walletId}/addresses - Icarus wallet addresses
- GET /wallets/{walletId}/addresses - Shelley wallet addresses
- GET /shared-wallets/{walletId}/addresses - Shared wallet addresses
This list automatically expands when new addresses become available, so that there are always `address_pool_gap` consecutive unused addresses available (where `address_pool_gap` can be configured when a wallet is first created or restored).
Random wallets (Legacy Byron)
Address creation is only allowed for wallets using random derivation. These are the legacy wallets from cardano-sl.
For `random` wallets, the user needs to invoke the following wallet endpoint to create new addresses:
POST /byron-wallets/{walletId}/addresses
In order to list existing addresses, another endpoint can be used:
GET /byron-wallets/{walletId}/addresses
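A sketch of both calls (hypothetical walletId placeholder; the passphrase payload mirrors the wallet-creation examples above). The `jq` line just sanity-checks the JSON body locally:

```shell
# Body for creating a new address in a `random` wallet
payload='{"passphrase":"Secure Passphrase"}'
echo "$payload" | jq -r .passphrase   # sanity-check the JSON before sending
# Create a new address (real call, against a running server):
# curl -X POST http://localhost:8090/v2/byron-wallets/<walletId>/addresses \
#   -d "$payload" -H "Content-Type: application/json"
# List existing addresses:
# curl -X GET http://localhost:8090/v2/byron-wallets/<walletId>/addresses
```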
How to make a transaction
Pre-requisites
- how to start a server
- how to create a wallet
- In order to be able to send transactions, our wallet must have funds. In case of the `preview` and `preprod` testnets, we can request tADA from the faucet.
Old transaction workflow
Assuming you have already created a wallet, you can send a transaction by using the following endpoint:
- POST /wallets/{walletId}/transactions - transaction from a Shelley wallet
- POST /byron-wallets/{walletId}/transactions - transaction from a Byron wallet
Behind the scenes, the wallet engine will select the necessary inputs from the wallet, generate a change address within the wallet, and sign and submit the transaction. A transaction can have multiple outputs, possibly to the same address. Note that in Byron, addresses are necessarily base58-encoded (as an enforced convention).
New transaction workflow
The new transaction workflow decouples the creation of a transaction into separate steps:
- Construct:
POST /wallets/{walletId}/transactions-construct
- Sign:
POST /wallets/{walletId}/transactions-sign
- Submit:
POST /wallets/{walletId}/transactions-submit
Behind the scenes, in the construct step, the wallet engine will select the necessary inputs belonging to the wallet, generate a change address of the wallet, and calculate the `fee` for this particular transaction.
Sign and submit are now invoked as separate steps. This allows presenting the actual transaction `fee` to the end user. In the old workflow, where all those steps were done in one go, showing a precise fee was not possible, and it had to be estimated using a separate wallet endpoint. Because of the random nature of the coin-selection algorithm, such an estimation might not always have matched the actual transaction fee.
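For example, surfacing the fee from a construct response to the end user is a `jq` one-liner. The quantity in the canned fragment below is made up for illustration:

```shell
# Extract the fee (in lovelace) from a construct response
echo '{"fee":{"quantity":186181,"unit":"lovelace"}}' | jq -r .fee.quantity
```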
Here is a very basic example sending out 10₳ and 1 asset from the wallet:
- Construct.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"payments":
[{"address":"addr_test1qrv60y8vwu8cke6j83tgkfjttmtv0ytfvnhggp6f4gl5kf0l0dw5r75vk42mv3ykq8vyjeaanvpytg79xqzymqy5acmq5k85dg",
"amount":{"quantity":10000000,"unit":"lovelace"},
"assets":[{"policy_id":"b518eee977e1c8e3ce020e745be63b8b14498c565f5b59653e104ec7",
"asset_name":"4163757374696330",
"quantity":1}]}]}' \
-H "Content-Type: application/json"
- Sign.
The response from construct will give us information like the `fee` or `coin selection` details. It also returns a CBOR-encoded `transaction` represented in base64 encoding. This is what we need to feed into sign's payload.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-sign \
-d '{"passphrase":"Secure Passphrase",
"transaction":"hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBBQ2AAYKCWDkA0Tt2d1mVEvi5ZUp1k6RAHcWWqmNX0+gr0ea9nv97XUH6jLVVtkSWAdhJZ72bAkWjxTAETYCU7jaCGgCYloChWBy1GO7pd+HI484CDnRb5juLFEmMVl9bWWU+EE7HoUhBY3VzdGljMAGCWDkAILE1lnWHTaOk26BI/mHKGcjdgw9DcIsWT4W0YxcKHXESwr9eLTleQaAYVejg2GktTDXVp7ygo4CCGrYB1PChWBy1GO7pd+HI484CDnRb5juLFEmMVl9bWWU+EE7Hp0hBY3VzdGljMQFIQWN1c3RpYzIBSEFjdXN0aWM0AURCYXNzBUVEcnVtcwZJRHJ1bXNPbmx5AUhFbGVjdHJpYwYCGgAC2VUDGgMDcO8OgKD19g=="}' \
-H "Content-Type: application/json"
- Submit.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-submit \
-d '{"transaction":"hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBBQ2AAYKCWDkA0Tt2d1mVEvi5ZUp1k6RAHcWWqmNX0+gr0ea9nv97XUH6jLVVtkSWAdhJZ72bAkWjxTAETYCU7jaCGgCYloChWBy1GO7pd+HI484CDnRb5juLFEmMVl9bWWU+EE7HoUhBY3VzdGljMAGCWDkAILE1lnWHTaOk26BI/mHKGcjdgw9DcIsWT4W0YxcKHXESwr9eLTleQaAYVejg2GktTDXVp7ygo4CCGrYB1PChWBy1GO7pd+HI484CDnRb5juLFEmMVl9bWWU+EE7Hp0hBY3VzdGljMQFIQWN1c3RpYzIBSEFjdXN0aWM0AURCYXNzBUVEcnVtcwZJRHJ1bXNPbmx5AUhFbGVjdHJpYwYCGgAC2VUDGgMDcO8OgKEAgYJYIASQMgPsYJvlhj+L/ttWXivY8xL/Pzun5qalDwy+pVTpWECQQJAildc3KiO1u86KTH+qSg45K7/wckT4KPE21a819POFIf15NVDY9tsGAkT9uCBRyJ13m0h01ZsA9TWVso8J9fY="}' \
-H "Content-Type: application/json"
{
"id": "35dd9db1f61822ac82f4690ea4fe12426bf6e534aff8e13563fce55a4d502772"
}
We can monitor the status of the submitted transaction using the GET /wallets/{walletId}/transactions/{transactionId} endpoint.
> curl -X GET http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions/35dd9db1f61822ac82f4690ea4fe12426bf6e534aff8e13563fce55a4d502772 | jq
{
"inserted_at": {
"height": {
"quantity": 2975821,
"unit": "block"
},
"time": "2021-10-08T11:25:32Z",
"epoch_number": 161,
"absolute_slot_number": 39323116,
"slot_number": 140716
},
"status": "in_ledger",
  ...
}
Transaction history
Note that all transactions made from and to any wallet are available in the transaction history. We can always display details of a particular transaction as well as list all transactions that are known to a wallet.
For instance, transactions can be tracked via:
- GET /wallets/{walletId}/transactions - transactions from a Shelley wallet
- GET /byron-wallets/{walletId}/transactions - transactions from a Byron wallet
This returns a list of all transactions for this particular wallet; optional range filters can be provided. A transaction will go through a succession of states, starting as “Pending”. If a transaction stays pending for too long (because it was rejected by a mempool, or lost due to chain switches), users may decide to forget it using:
- DELETE /wallets/{walletId}/transactions/{transactionId} - forget a transaction from a Shelley wallet
- DELETE /byron-wallets/{walletId}/transactions/{transactionId} - forget a transaction from a Byron wallet
For more information about the transaction lifecycle, have a look at transaction lifecycle.
Signed and serialized transactions
Alternatively, `cardano-wallet` allows clients to submit already signed and serialized transactions as a raw bytes blob. This can be done by submitting such serialized data as `application/octet-stream`. In this scenario, the server engine will verify that the transaction is structurally well-formed and forward it to the node instance associated with it. If the transaction belongs to a known wallet, it will eventually show up in the wallet.
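A sketch, assuming the raw-submission endpoint is POST /v2/proxy/transactions: the sign step above returns the transaction in base64, while an `application/octet-stream` body carries the decoded raw bytes. `hKYA` below is a stand-in string, not a real transaction:

```shell
# Decode the base64 string to raw CBOR bytes
printf 'hKYA' | base64 -d > tx.signed.cbor
wc -c < tx.signed.cbor   # 4 base64 characters decode to 3 bytes
# Submit the raw bytes (against a running server):
# curl -X POST http://localhost:8090/v2/proxy/transactions \
#   -H "Content-Type: application/octet-stream" --data-binary @tx.signed.cbor
```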
Assets
Pre-requisites
- how to start a server
- how to create a wallet
- In order to be able to send transactions, our wallet must have funds. In case of the `preview` and `preprod` testnets, we can request tADA from the faucet.
Overview
The Cardano blockchain features a unique ability to create (mint), interact with (send and receive) and destroy (burn) custom tokens (so-called 'assets') in a native way. Native, in this case, means that besides sending and receiving the official currency ada, you can also interact with custom assets out of the box, without the need for smart contracts.
Assets on wallet balance
You can easily check what assets you have in your wallet balance. Just look for `assets` in the response of GET /wallets/{walletId}.
> curl -X GET http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d | jq .assets
{
"total": [
{
"asset_name": "45524358",
"quantity": 1777134804,
"policy_id": "36e93afe19e46227069520040012411e17839f219b549b7f5ac83b68"
},
{
"asset_name": "4d494c4b5348414b4530",
"quantity": 1,
"policy_id": "dd043a63daf194065ca8c8f041337d8e75a08d8f6c469ddc1743d2f3"
}
],
"available": [
{
"asset_name": "45524358",
"quantity": 1777134804,
"policy_id": "36e93afe19e46227069520040012411e17839f219b549b7f5ac83b68"
},
{
"asset_name": "4d494c4b5348414b4530",
"quantity": 1,
"policy_id": "dd043a63daf194065ca8c8f041337d8e75a08d8f6c469ddc1743d2f3"
}
]
}
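Note that `asset_name` values are hex-encoded. When the issuer chose a readable name, it can be decoded with `xxd` (shown here for the second asset in the balance above):

```shell
# Decode the hex-encoded asset_name from the balance above
echo -n 4d494c4b5348414b4530 | xxd -r -p; echo
```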
Listing assets
You can also list assets that were ever associated with the wallet:
- GET /wallets/{walletId}/assets - list all
- GET /wallets/{walletId}/assets/{policyId} - get particular asset details by its `policy_id`
- GET /wallets/{walletId}/assets/{policyId}/{assetName} - get particular asset details by `policy_id` and `asset_name`
Assets off-chain metadata
Issuers of native assets may submit off-chain metadata relating to those assets to the Cardano Token Registry. There are separate instances of the token registry for `mainnet` and for each test network (e.g. `preview` or `preprod`). Metadata submitted to the registry will then be served via the corresponding server.
| | Mainnet | Testnets |
|---|---|---|
| Repository | https://github.com/cardano-foundation/cardano-token-registry | https://github.com/input-output-hk/metadata-registry-testnet |
| Server | https://tokens.cardano.org | https://metadata.cardano-testnet.iohkdev.io |
For example, on `preview` or `preprod` that would be:
> cardano-wallet serve --port 8090 \
--node-socket /path/to/node.socket \
--testnet byron-genesis.json \
--database ./wallet-db \
--token-metadata-server https://metadata.world.dev.cardano.org/
Then, if you list assets associated with your wallet you will get their available metadata.
For instance:
> curl -X GET http://localhost:8090/v2/wallets/1f82e83772b7579fc0854bd13db6a9cce21ccd95/assets/919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149/4861707079436f696e
{
"fingerprint": "asset1v96rhc76ke22mx8sulm4v3qhdcmtym7f4m66z2",
"asset_name": "4861707079436f696e",
"policy_id": "919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149",
"metadata": {
"url": "https://happy.io",
"name": "HappyCoin",
"decimals": 6,
"ticker": "HAPP",
"description": "Coin with asset name - and everyone is happy!!!"
}
}
Minting and burning assets
Minting and burning assets is also available using `cardano-cli`. See more at https://developers.cardano.org/docs/native-tokens/minting.
Minting and burning of assets is available in `cardano-wallet` in the new transaction workflow. One can use the wallet's key to mint any tokens that are guarded by a native policy script. In practice it is just as simple as constructing a transaction that has a `mint_burn` field, then signing and submitting it to the network.
As an example of how to mint and burn assets, we will try to mint (and then burn) an NFT with CIP-25 metadata using `cardano-wallet`.
Please note that you would mint and burn other assets pretty much the same way. In our example we will additionally be adding on-chain metadata to our minting transaction, as we want our NFT to be CIP-25 compliant. However, this step is not required for minting. For instance, you could mint some assets and add off-chain metadata for them (CIP-26) in the Cardano Token Registry.
Minting an NFT
Let's see how we can mint an NFT with CIP-25 metadata using `cardano-wallet`.
Policy key
Before we attempt any minting/burning transaction, we need to make sure our wallet is set up with a policy key. We can check this using `GET /wallets/{walletId}/policy-key`:
> curl -X GET http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/policy-key
"policy_vk12d0gdel9u6px8wf3uv4z6m4h447n9qsad24gztaku8dzzdqfajzqfm3rr0"
Looks good.
Otherwise we get a `missing_policy_public_key` error:
> curl -X GET http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/policy-key
{
"code": "missing_policy_public_key",
"message": "It seems the wallet lacks a policy public key. Therefore it's not possible to create a minting/burning transaction or get a policy id. Please first POST to endpoint /wallets/{walletId}/policy-key to set a policy key."
}
In such a case we just need to make a `POST` request to `POST /wallets/{walletId}/policy-key`, as suggested in the error message.
> curl -X POST http://localhost:8091/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/policy-key \
-d '{"passphrase":"Secure Passphrase"}' \
-H "Content-Type: application/json"
"policy_vk12d0gdel9u6px8wf3uv4z6m4h447n9qsad24gztaku8dzzdqfajzqfm3rr0"
Once we have finished, we can proceed to minting NFTs from our wallet!
CIP-25 metadata
We will be attaching CIP-25 metadata to our minting transaction, so there are a few things that need to be sorted out first. The most basic metadata can look as follows:
{
"721": {
"<POLICY_ID>": {
"<ASSET_NAME>": {
"name": "My amazing NFT",
"image": "ipfs://<IPFS_ID>"
}
}
}
}
As we can see, we need to figure out `<POLICY_ID>`, `<ASSET_NAME>` and `<IPFS_ID>`.
Policy ID
The policy id is basically a hash of the native script that guards the minting operation. In case of Shelley wallets we can only sign with one key, the wallet policy key, but because we can embed it into a native script, we can have a practically unlimited number of policy ids from one wallet. These are examples of native script templates, and each of them will produce a different policy id.
cosigner#0
{ "all": [ "cosigner#0" ] }
{ "any": [ "cosigner#0" ] }
{ "some": {"at_least": 1, "from": [ "cosigner#0" ]} }
{ "all":
[ "cosigner#0",
{ "active_from": 120 }
]
}
{ "all":
[ "cosigner#0",
{ "active_until": 1200000 }
]
}
`cosigner#0` stands for our wallet's policy key. In case of a Shelley wallet we have only one. In the future, in Shared wallets, we'll be able to construct a minting/burning script with many policy keys shared between different users, identified as `cosigner#1`, `cosigner#2`...
Let's create the most basic policy id using the `POST /wallets/{walletId}/policy-id` endpoint:
> curl -X POST http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/policy-id \
-d '{"policy_script_template":"cosigner#0"}' \
-H "Content-Type: application/json" | jq
{
"policy_id": "6d5052088183db1ef06439a9f501b52721c2645532a50254a69d5390"
}
Asset name
The asset name acts as a sub-identifier within a given policy. Although we call it "asset name", the value needn't be text, and it could even be empty. In case of CIP-25, however, we can use plain text. Let our asset name be... `AmazingNFT`.
IPFS id
Let's assume that our NFT will point to a simple image, which we have already uploaded to IPFS. It is available at: https://ipfs.io/ipfs/QmRhTTbUrPYEw3mJGGhQqQST9k86v1DPBiTTWJGKDJsVFw. As we can see the IPFS id of the image is QmRhTTbUrPYEw3mJGGhQqQST9k86v1DPBiTTWJGKDJsVFw
.
Now we can put together the complete CIP-25 metadata JSON which we will use in the minting transaction:
{
"721": {
"6d5052088183db1ef06439a9f501b52721c2645532a50254a69d5390": {
"AmazingNFT": {
"name": "My amazing NFT",
"image": "ipfs://QmRhTTbUrPYEw3mJGGhQqQST9k86v1DPBiTTWJGKDJsVFw"
}
}
}
}
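Before using the metadata in a transaction, it can help to sanity-check its shape with `jq` (a sketch: the `721` label and the `name`/`image` fields are the parts CIP-25 expects; the file name `metadata.json` is just a local choice):

```shell
# Write the metadata to a file and verify the CIP-25 essentials are present
cat > metadata.json <<'EOF'
{"721":{"6d5052088183db1ef06439a9f501b52721c2645532a50254a69d5390":{"AmazingNFT":{"name":"My amazing NFT","image":"ipfs://QmRhTTbUrPYEw3mJGGhQqQST9k86v1DPBiTTWJGKDJsVFw"}}}}
EOF
# Prints "true" only if every asset entry has both "name" and "image"
jq '."721" | .[] | .AmazingNFT | has("name") and has("image")' metadata.json
```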
Minting transaction
We have already:
- verified that our wallet is equipped with a policy key
- created the CIP-25 metadata JSON.
We are now ready to mint!
We will construct a transaction that mints 1 `AmazingNFT` with the policy id derived from the simple native script template `cosigner#0`, and post the related CIP-25 metadata to the blockchain.
Note that the wallet expects the asset name to be a hex-encoded string, so let's hex-encode our `AmazingNFT` first:
> echo -n "AmazingNFT" | xxd -p
416d617a696e674e4654
Let's now `POST /wallets/{walletId}/transactions-construct`:
> curl -X POST http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/transactions-construct \
-d '{
"metadata":{
"721":{
"6d5052088183db1ef06439a9f501b52721c2645532a50254a69d5390":{
"AmazingNFT":{
"name":"My amazing NFT",
"image":"ipfs://QmRhTTbUrPYEw3mJGGhQqQST9k86v1DPBiTTWJGKDJsVFw"
}
}
}
},
"mint_burn":[
{
"operation":{
"mint":{
"quantity":1
}
},
"policy_script_template":"cosigner#0",
"asset_name":"416d617a696e674e4654"
}
]
}' \
-H "Content-Type: application/json"
That's it! We should now receive the CBOR-encoded `transaction`, `fee` and `coin_selection` details in the response. We can now sign and submit such a transaction just like in how-to-make-a-transaction.
After the transaction has been submitted, your freshly-minted NFT should appear in your wallet balance!
Burning an NFT
Let's now burn our NFT. We already have it in our wallet balance, and our wallet's policy key is "guarding" that asset, so there shouldn't be any problem.
We can easily construct a burning transaction with `POST /wallets/{walletId}/transactions-construct`:
> curl -X POST http://localhost:8090/v2/wallets/73d38c71e4b8b5d71769622ab4f5bfdedbb7c39d/transactions-construct \
-d '{
"mint_burn":[
{
"operation":{
"burn":{
"quantity":1
}
},
"policy_script_template":"cosigner#0",
"asset_name":"416d617a696e674e4654"
}
]
}' \
-H "Content-Type: application/json"
That's it. We can now sign and submit such a transaction just like in how-to-make-a-transaction, and as a result our NFT will just disappear.
Sending assets in a transaction
Once you have some assets in your wallet balance, you can send them in a transaction. For example, let's send 1.5₳ and 15 `HappyCoin` to three different addresses in a single transaction. First we need to construct it as follows:
> curl -X POST http://localhost:8090/v2/wallets/2269611a3c10b219b0d38d74b004c298b76d16a9/transactions-construct \
-d '{
"payments":[
{
"address":"addr_test1qqd2rhj0956q9xv8cevczvrvwg405agz7fzz0m2n87xhjvhxskq78v86w3zv9zc588rrp43sl2cusftxqkv3hzc0xs2sze9fu4",
"amount":{
"quantity":1500000,
"unit":"lovelace"
},
"assets":[
{
"policy_id":"919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149",
"asset_name":"4861707079436f696e",
"quantity":15
}
]
},
{
"address":"addr_test1qqf90safefvsafmacrtu899vwg6nde3m8afeadtk8cp7qw8xskq78v86w3zv9zc588rrp43sl2cusftxqkv3hzc0xs2sekar6k",
"amount":{
"quantity":1500000,
"unit":"lovelace"
},
"assets":[
{
"policy_id":"919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149",
"asset_name":"4861707079436f696e",
"quantity":15
}
]
},
{
"address":"addr_test1qpc038ku3u2js7hykte8xl7wctgl6avy20cp4k0z4ys2cy0xskq78v86w3zv9zc588rrp43sl2cusftxqkv3hzc0xs2sls284n",
"amount":{
"quantity":1500000,
"unit":"lovelace"
},
"assets":[
{
"policy_id":"919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149",
"asset_name":"4861707079436f696e",
"quantity":15
}
]
}
]
}' \
-H "Content-Type: application/json"
We should receive the CBOR-encoded `transaction`, `fee` and `coin_selection` details in the response. We can now sign and submit such a transaction just like in how-to-make-a-transaction.
How to delegate
Pre-requisites
- how to start a server
- how to create a wallet
- In order to be able to send transactions, our wallet must have funds. In case of the `preview` and `preprod` testnets, we can request tADA from the faucet.
Overview
Delegation is the process by which ada holders delegate the stake associated with their ada to a stake pool. Cardano-wallet allows listing all stake pools that are operating on the blockchain, and then "joining" or "quitting" them via special delegation transactions.
Delegation is supported only for Shelley wallets. Shared wallets will support it too in the near future.
Listing stake pools
Before joining any stake pool, we can first list all available stake pools that are operating on our blockchain. Stake pools are ordered by `non_myopic_member_rewards`, which ranks higher, and hence favors, the pools potentially producing the best rewards in the future. The ordering can be influenced by the `?stake` query parameter, which says how much stake we want to delegate.
> curl -X GET http://localhost:8090/v2/stake-pools?stake=1000000000
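The response is a JSON list, so a `jq` one-liner can project just the pool id and its predicted rewards. The canned single-pool sample below (reusing the pool id from this guide) is illustrative, and the exact field layout is an assumption:

```shell
# Project pool id and predicted member rewards from a canned stake-pool list
echo '[{"id":"pool1mgjlw24rg8sp4vrzctqxtf2nn29rjhtkq2kdzvf4tcjd5pl547k","metrics":{"non_myopic_member_rewards":{"quantity":123456789,"unit":"lovelace"}}}]' \
  | jq -r '.[] | "\(.id) \(.metrics.non_myopic_member_rewards.quantity)"'
```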
Joining stake pool
Once we select a stake pool we can "join" it. This operation will "virtually" add our wallet balance to the stake of this particular stake pool. Joining a pool for the first time will incur a `fee` and a `deposit` required for registering our stake key on the blockchain. This deposit will be returned to us if we quit delegation altogether.
The amount of the deposit is controlled by a Cardano network parameter. It is currently 2₳ on `mainnet` and the testnets. The deposit is taken only once, when joining a stake pool for the first time; rejoining another stake pool does not incur another deposit.
We can join a stake pool using the old transaction workflow, or the new transaction workflow, where we construct a delegation transaction and then sign and submit it to the network:
- Construct:
POST /wallets/{walletId}/transactions-construct
- Sign:
POST /wallets/{walletId}/transactions-sign
- Submit:
POST /wallets/{walletId}/transactions-submit
An example construct delegation request may look as follows:
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"delegations":
[{"join":{"pool":"pool1mgjlw24rg8sp4vrzctqxtf2nn29rjhtkq2kdzvf4tcjd5pl547k",
"stake_key_index":"0H"}}]}' \
-H "Content-Type: application/json"
Refer to how-to-make-a-transaction for details on signing and submitting it to the network.
Information about `deposit_taken` and `deposit_returned` is available in the wallet transaction history. Refer to how-to-make-a-transaction.
Joining another stake pool
Joining another stake pool doesn't differ from the previous process at all. The only difference is that, behind the scenes, the `deposit` is not taken from the wallet, as it was already taken when joining for the first time.
Withdrawing rewards
Our wallet accumulates rewards from delegating to a stake pool on a special rewards account associated with it. We can see how many rewards we have in the wallet balance at any time. Just look up `balance` while getting wallet details:
> curl -X GET http://localhost:8090/v2/wallets/1ceb45b37a94c7022837b5ca14045f11a5927c65 | jq .balance
{
"total": {
"quantity": 14883796944,
"unit": "lovelace"
},
"available": {
"quantity": 14876107482,
"unit": "lovelace"
},
"reward": {
"quantity": 7689462, <----------- accumulated rewards
"unit": "lovelace"
}
}
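Note that, in the response above, `total` is simply the sum of `available` and `reward`. A quick check of the figures from the example:

```javascript
// Wallet balance identity: total = available + reward (quantities in lovelace).
const available = 14876107482;
const reward = 7689462;
console.log(available + reward); // → 14883796944, the "total" quantity above
```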
We can withdraw those rewards when making any transaction. In both the old and the new transaction workflow, it is as simple as adding `{ "withdrawal": "self" }` to the transaction payload.
In particular, in the new transaction workflow, we can construct a transaction that does nothing but withdraw our rewards:
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"withdrawal":"self"}' \
-H "Content-Type: application/json"
or withdraw rewards while doing any other transaction:
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"payments":[{"address":"addr_test1qrtez7vn0d8xp495ggypmu2kyt7tt6qyva2spm0f5a3ewn0v474mcs4q8e9g55yknx3729kyg5dl69x5596ee9tvnynq7ffety","amount":{"quantity":1000000,"unit":"lovelace"}}],
"withdrawal":"self"}' \
-H "Content-Type: application/json"
After we sign and submit such a transaction, the rewards are added to our available balance (or simply spent, if the transaction amount exceeds our available balance and the rewards are needed to cover the difference).
Information about `withdrawals` is also available in the wallet transaction history. Refer to how-to-make-a-transaction for details.
Quitting stake pool
At any point, we can decide to quit delegation altogether. Quitting a stake pool de-registers the stake key associated with our wallet, and our deposit is returned. After that, we no longer receive rewards.
Quitting can be done in the old transaction workflow (via the `DELETE /stake-pools/*/wallets/{walletId}` endpoint), or in the new transaction workflow, for instance:
Just quitting:
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"delegations":[{"quit":{"stake_key_index":"0H"}}]}' \
-H "Content-Type: application/json"
Quitting and withdrawing rewards in the same transaction. Note that quitting is possible only when all rewards are withdrawn:
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-construct \
-d '{"withdrawal":"self",
"delegations":[{"quit":{"stake_key_index":"0H"}}]}' \
-H "Content-Type: application/json"
After constructing a transaction that undelegates from a stake pool, the response contains a CBOR-encoded `transaction`, a `fee`, and a `coin_selection`.
We can now sign and submit such a transaction just like in how-to-make-a-transaction.
All rewards are withdrawn only if we also add `{"withdrawal":"self"}` to the payload; this can be done in a single transaction, as shown above.
Handle Metadata
Pre-requisites
- how to start a server
- how to create a wallet
- In order to send transactions, our wallet must have funds. On the `preview` and `preprod` testnets, we can request tADA from the faucet.
Transaction Metadata
Since the Shelley era, the Cardano blockchain allows user-defined data to be associated with transactions.
The metadata hash is part of the transaction body, and is therefore covered by all transaction signatures.
The cardano-wallet API server uses a JSON representation of transaction metadata, isomorphic to the binary encoding on chain.
Please note that metadata provided in a transaction will be stored on the blockchain forever. Make sure not to include any sensitive data, in particular personally identifiable information (PII).
Metadata structure on chain
The top level is a map from metadata keys to metadata values.
Metadata keys are unsigned integers in the range 0 to 2⁶⁴ - 1.
Metadata values are one of three simple types or two compound types.
Simple types:
- Integers in the range -(2⁶⁴ - 1) to 2⁶⁴ - 1
- Strings (UTF-8 encoded)
- Bytestrings
Compound types:
- Lists of metadata values
- Mappings from metadata values to metadata values
Note that lists and maps need not necessarily contain the same type of metadata value in each element.
Limits
- Strings may be at most 64 bytes long when UTF-8 encoded.
- Unencoded bytestrings may be at most 64 bytes long (i.e. at most 128 hex digits).
- There are no limits to the number of metadata values, apart from the protocol limit on transaction size.
The string length limitation is explained in the Delegation Design Spec, section E.3:
The size of strings in the structured value is limited to mitigate the problem of unpleasant or illegal content being posted to the blockchain. It does not prevent this problem entirely, but it means that it is not as simple as posting large binary blobs.
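These limits can be checked client-side before constructing a transaction. Below is a minimal sketch in Node.js; the helper names are our own illustration, not part of the cardano-wallet API:

```javascript
// Hypothetical helpers to pre-validate metadata values against the 64-byte limits.
function validMetaString(s) {
  // The limit applies to the UTF-8 encoding, not the character count.
  return Buffer.byteLength(s, 'utf8') <= 64;
}

function validMetaBytes(hex) {
  // 64 raw bytes correspond to at most 128 hex digits.
  return /^([0-9a-fA-F]{2})*$/.test(hex) && hex.length <= 128;
}

console.log(validMetaString('some text'));    // → true  (9 bytes)
console.log(validMetaString('€'.repeat(32))); // → false (32 chars, but 96 UTF-8 bytes)
console.log(validMetaBytes('48656c6c6f2c2043617264616e6f21')); // → true (15 bytes)
```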
JSON representation in cardano-wallet
The top level is a JSON object mapping metadata keys as decimal number strings to JSON objects for metadata values.
Every metadata value is tagged with its type, using a JSON object, like this:
- Integers: `{ "int": NUMBER }`
- Strings: `{ "string": STRING }`
- Bytestrings: `{ "bytes": HEX-STRING }` (the value must be base16-encoded, i.e. a hex string)
- Lists of metadata values: `{ "list": [ METADATA-VALUE, ... ] }`
- Mappings from metadata values to metadata values: `{ "map": [{ "k": METADATA-VALUE, "v": METADATA-VALUE }, ... ] }`
Examples
This is an example of transaction metadata containing four values:
{
"64": {"string": "some text"},
"32": {"int": 42},
"16": {
"map": [
{
"k": {"string": "numbers"},
"v": {"list": [{"int": 1}, {"int": 2}, {"int": 4}, {"int": 8}]}
},
{
"k": {"string": "alphabet"},
"v": {
"map": [
{"k": {"string": "A"}, "v": {"int": 65}},
{"k": {"string": "B"}, "v": {"int": 66}},
{"k": {"string": "C"}, "v": {"int": 67}}
]
}
}
]
},
"8": { "bytes": "48656c6c6f2c2043617264616e6f21" }
}
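As a quick sanity check, the `bytes` value under key `"8"` above decodes to readable text; for example, with Node.js:

```javascript
// Decode the base16-encoded bytestring from the metadata example above.
const hex = '48656c6c6f2c2043617264616e6f21';
const decoded = Buffer.from(hex, 'hex').toString('utf8');
console.log(decoded); // → Hello, Cardano!
```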
Sample code: Converting from JavaScript objects
Use a function like this to translate arbitrary JavaScript values into metadata JSON format. If your application requires a more precise mapping, it can be modified to suit. Note that this code does not validate strings for length.
#!/usr/bin/env node
function txMetadataValueFromJS(jsVal) {
if (Array.isArray(jsVal)) {
// compound type - List
// (note that sparse arrays are not representable in JSON)
return { list: jsVal.map(txMetadataValueFromJS) };
} else if (typeof(jsVal) === 'object') {
if (jsVal !== null ) {
// compound type - Map with String keys
return { map: Object.keys(jsVal).map(key => {
return { k: { string: key }, v: txMetadataValueFromJS(jsVal[key]) };
}) };
} else {
// null: convert to simple type - String
return { string: 'null' };
}
} else if (Number.isInteger(jsVal)) {
// simple type - Int
return { int: jsVal };
} else if (typeof(jsVal) === 'string') {
// simple type - String
return { string: jsVal };
} else {
// anything else: convert to simple type - String.
// e.g. undefined, true, false, NaN, Infinity.
// Some of these can't be represented in JSON anyway.
// Floating point numbers: note there can be loss of precision when
// representing floats as decimal numbers
return { string: '' + jsVal };
}
}
// Get JSON objects from stdin, one per line.
const jsVals = require('fs')
.readFileSync(0, { encoding: 'utf8' })
.toString()
.split(/\r?\n/)
.filter(line => !!line)
.map(JSON.parse);
// Convert to transaction metadata JSON form
const txMetadataValues = jsVals.map(txMetadataValueFromJS);
const txMetadata = txMetadataValues
.reduce((ob, val, i) => { ob['' + i] = val; return ob; }, {});
// Print JSON to stdout
console.log(JSON.stringify(txMetadata));
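For a quick feel of the mapping, here is a condensed re-statement of the same conversion (illustrative only, not a replacement for the script above) applied to a small object:

```javascript
// Condensed version of txMetadataValueFromJS, for illustration only.
const toMeta = v =>
  Array.isArray(v) ? { list: v.map(toMeta) }
  : v === null ? { string: 'null' }
  : typeof v === 'object' ? { map: Object.keys(v).map(k =>
      ({ k: { string: k }, v: toMeta(v[k]) })) }
  : Number.isInteger(v) ? { int: v }
  : typeof v === 'string' ? { string: v }
  : { string: String(v) };

console.log(JSON.stringify(toMeta({ numbers: [1, 2], note: 'hi' })));
// → {"map":[{"k":{"string":"numbers"},"v":{"list":[{"int":1},{"int":2}]}},
//           {"k":{"string":"note"},"v":{"string":"hi"}}]}
```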
CLI
Metadata can be provided when creating transactions through the CLI.
The JSON is provided directly as a command-line argument.
- On Linux/macOS, the JSON metadata can be put inside single quotes:
--metadata '{ "0":{ "string":"cardano" } }'
- On Windows, it can be put inside double quotes, with the double quotes inside the JSON metadata escaped:
--metadata "{ \"0\":{ \"string\":\"cardano\" } }"
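The Windows escaping can also be generated programmatically; a small sketch (the `winQuote` helper is our own illustration, not part of the CLI):

```javascript
// Produce a Windows-style double-quoted --metadata argument from an object.
const winQuote = obj =>
  '"' + JSON.stringify(obj).replace(/"/g, '\\"') + '"';

console.log(winQuote({ '0': { string: 'cardano' } }));
// → "{\"0\":{\"string\":\"cardano\"}}"
```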
Usage: cardano-wallet transaction create [--port INT] WALLET_ID
--payment PAYMENT [--metadata JSON]
Create and submit a new transaction.
Available options:
-h,--help Show this help text
--port INT port used for serving the wallet API. (default: 8090)
--payment PAYMENT address to send to and amount to send separated by @,
e.g. '<amount>@<address>'
--metadata JSON Application-specific transaction metadata as a JSON
object. The value must match the schema defined in
the cardano-wallet OpenAPI specification.
References
For a detailed explanation of the metadata design, and information about the transaction format, consult the following specifications.
- Delegation Design Spec, Appendix E.
- Cardano Shelley Ledger Spec, Figure 10.
The full JSON schema is specified in the OpenAPI 3.0 `swagger.yaml`.
- cardano-wallet OpenAPI Documentation, Shelley / Transactions / Create.
- cardano-wallet OpenAPI Specification, scroll to `TransactionMetadataValue`.
Shared wallets
Pre-requisites
- how to start a server
- In order to send transactions, our wallet must have funds. On the `preview` and `preprod` testnets, we can request tADA from the faucet.
Overview
This guide shows you how to create a shared wallet and make a shared transaction, providing witnesses from all required co-signers referenced in the `payment_script_template` and `delegation_script_template` fields. In this example, we'll create two shared wallets using the same template for both payment and delegation operations. The template will require signatures from all co-owners in order to make a transaction. In our case, the wallet will have two co-owners, `cosigner#0` and `cosigner#1`:
"template":
{ "all":
[ "cosigner#0", "cosigner#1"]
}
The script templates use Cardano simple scripts, so one can build more sophisticated templates to guard the spending or delegation operations of a shared wallet. Below are some examples of possible templates. You can also explore the `POST /shared-wallets` swagger specification for more details.
Template examples
- required signature from any cosigner from the list
"template":
{ "any":
[ "cosigner#0", "cosigner#1", "cosigner#2"]
}
- required signature from at least 2 cosigners from the list
"template":
{ "some":
{ "at_least": 2,
"from": [ "cosigner#0", "cosigner#1", "cosigner#2"]
}
}
- required signature from at least 1 cosigner from the list, with additional nested constraints (either `cosigner#0`, or `cosigner#1`, or both `cosigner#2` and `cosigner#3` together with any of `cosigner#4` and `cosigner#5`)
"template":{
"some":{
"at_least":1,
"from":[
"cosigner#0",
"cosigner#1",
{
"all":[
"cosigner#2",
"cosigner#3",
{
"any":[
"cosigner#4",
"cosigner#5"
]
}
]
}
]
}
}
- required signature from any cosigner, with additional time-locking constraints (either `cosigner#0`, or `cosigner#1` but only until slot 18447928, or `cosigner#2` but only from slot 18447928)
"template":{
"any":[
"cosigner#0",
{
"all":[
"cosigner#1",
{
"active_until":18447928
}
]
},
{
"all":[
"cosigner#2",
{
"active_from":18447928
}
]
}
]
}
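To make the semantics of these templates concrete, here is a small illustrative evaluator (our own sketch, not part of the API; real on-chain validation compares time locks against the transaction's validity interval, which we approximate here with a single `slot` argument):

```javascript
// Illustrative evaluator for simple-script templates (not part of cardano-wallet).
// `signers` is a Set of cosigner names; `slot` stands in for the current slot.
function satisfied(tpl, signers, slot) {
  if (typeof tpl === 'string') return signers.has(tpl);
  if ('active_until' in tpl) return slot < tpl.active_until;
  if ('active_from' in tpl) return slot >= tpl.active_from;
  if ('all' in tpl) return tpl.all.every(t => satisfied(t, signers, slot));
  if ('any' in tpl) return tpl.any.some(t => satisfied(t, signers, slot));
  if ('some' in tpl)
    return tpl.some.from.filter(t => satisfied(t, signers, slot)).length
           >= tpl.some.at_least;
  throw new Error('unknown template clause');
}

// The time-locked template from the last example above:
const tpl = {
  any: [
    'cosigner#0',
    { all: ['cosigner#1', { active_until: 18447928 }] },
    { all: ['cosigner#2', { active_from: 18447928 }] }
  ]
};

console.log(satisfied(tpl, new Set(['cosigner#1']), 18000000)); // → true  (before slot 18447928)
console.log(satisfied(tpl, new Set(['cosigner#1']), 19000000)); // → false (time lock expired)
console.log(satisfied(tpl, new Set(['cosigner#2']), 19000000)); // → true  (active from 18447928)
```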
Creating wallets
First, let's create two `incomplete` shared wallets using the `POST /shared-wallets` endpoint. The wallets are marked as "incomplete" because they do not yet have public keys from all cosigners.
`Cosigner#0` wallet
> curl -X POST http://localhost:8090/v2/shared-wallets \
-d '{
"mnemonic_sentence":[
"beach",
"want",
"fly",
"guess",
"cabbage",
"hybrid",
"profit",
"leaf",
"term",
"air",
"join",
"feel",
"sting",
"nurse",
"anchor",
"come",
"entire",
"oil",
"kidney",
"situate",
"fun",
"deal",
"palm",
"chimney"
],
"passphrase":"Secure Passphrase",
"name":"`Cosigner#0` wallet",
"account_index":"0H",
"payment_script_template":{
"cosigners":{
"cosigner#0":"self"
},
"template":{
"all":[
"cosigner#0",
"cosigner#1"
]
}
},
"delegation_script_template":{
"cosigners":{
"cosigner#0":"self"
},
"template":{
"all":[
"cosigner#0",
"cosigner#1"
]
}
}
}' \
-H "Content-Type: application/json"
`Cosigner#1` wallet
> curl -X POST http://localhost:8090/v2/shared-wallets \
-d '{
"mnemonic_sentence":[
"slight",
"tool",
"pear",
"write",
"body",
"fruit",
"crucial",
"tomorrow",
"hunt",
"alley",
"object",
"tool",
"voyage",
"loud",
"loop",
"client",
"access",
"vocal",
"unable",
"brand",
"patch",
"remain",
"object",
"boat"
],
"passphrase":"Secure Passphrase",
"name":"`Cosigner#1` wallet",
"account_index":"0H",
"payment_script_template":{
"cosigners":{
"cosigner#1":"self"
},
"template":{
"all":[
"cosigner#0",
"cosigner#1"
]
}
},
"delegation_script_template":{
"cosigners":{
"cosigner#1":"self"
},
"template":{
"all":[
"cosigner#0",
"cosigner#1"
]
}
}
}' \
-H "Content-Type: application/json"
Notice that `payment_script_template` and `delegation_script_template` contain the same `template` in both wallets:
"template":{
"all":[
"cosigner#0",
"cosigner#1"
]
}
However, the `"cosigners"` part differs slightly: "`Cosigner#0` wallet" has `"cosigner#0":"self"`, while "`Cosigner#1` wallet" has `"cosigner#1":"self"`. Each wallet has only partial information about the cosigners: "`Cosigner#0` wallet" only knows about `cosigner#0`, and "`Cosigner#1` wallet" only knows about `cosigner#1`.
Now we can look up the wallets we've just created using the `GET /shared-wallets/{walletId}` endpoint. From the response, we can tell that the wallet's status is "incomplete". For instance:
> curl -X GET http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8 | jq
{
"account_index": "0H",
"address_pool_gap": 20,
"delegation_script_template": {
"cosigners": {
"cosigner#1": "acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"
},
"template": {
"all": [
"cosigner#0",
"cosigner#1"
]
}
},
"id": "5e46668c320bb4568dd25551e0c33b0539668aa8",
"name": "`Cosigner#1` wallet",
"payment_script_template": {
"cosigners": {
"cosigner#1": "acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"
},
"template": {
"all": [
"cosigner#0",
"cosigner#1"
]
}
},
"state": {
"status": "incomplete"
}
}
Patching wallets
In order to spend from the wallets, we need to patch their payment and delegation templates with the missing cosigners.
We already know that "`Cosigner#0` wallet" is missing the `cosigner#1` key, and "`Cosigner#1` wallet" is missing the `cosigner#0` key.
First, let's get the `cosigner#1` key from the "`Cosigner#1` wallet". We can use the `GET /shared-wallets/{walletId}/keys` endpoint for this:
> curl -X GET http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/keys?format=extended
"acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"
Now we can patch the payment and delegation templates of the "`Cosigner#0` wallet" with the `cosigner#1` key, using the `PATCH /shared-wallets/{walletId}/payment-script-template` and `PATCH /shared-wallets/{walletId}/delegation-script-template` endpoints, respectively.
> curl -X PATCH http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/payment-script-template \
-d '{"cosigner#1":"acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"}' \
-H "Content-Type: application/json"
> curl -X PATCH http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/delegation-script-template \
-d '{"cosigner#1":"acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"}' \
-H "Content-Type: application/json"
We'll repeat the same steps for the "`Cosigner#1` wallet", i.e. first get the `cosigner#0` key from the "`Cosigner#0` wallet", and then patch the payment and delegation templates of the "`Cosigner#1` wallet" with it.
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/keys?format=extended
"acct_shared_xvk1lk5eg5unj4fg48vamr9pc40euump5qs084a7cdgvs9u6yn82nh9lr3s62asfv45r4s0fqa239j3dc5e4f7z24heug43zpg6qdy5kawq63hhzl"
> curl -X PATCH http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/payment-script-template \
-d '{"cosigner#0":"acct_shared_xvk1lk5eg5unj4fg48vamr9pc40euump5qs084a7cdgvs9u6yn82nh9lr3s62asfv45r4s0fqa239j3dc5e4f7z24heug43zpg6qdy5kawq63hhzl"}' \
-H "Content-Type: application/json"
> curl -X PATCH http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/delegation-script-template \
-d '{"cosigner#0":"acct_shared_xvk1lk5eg5unj4fg48vamr9pc40euump5qs084a7cdgvs9u6yn82nh9lr3s62asfv45r4s0fqa239j3dc5e4f7z24heug43zpg6qdy5kawq63hhzl"}' \
-H "Content-Type: application/json"
Now, if we look up the wallet again with `GET /shared-wallets/{walletId}`, we should notice that the wallet has started restoring, and after a while it is ready to use:
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f | jq .state
{
"status": "ready"
}
Furthermore, the wallets now have complete information about cosigners for delegation and payment templates:
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f | jq .payment_script_template
{
"cosigners": {
"cosigner#0": "acct_shared_xvk1lk5eg5unj4fg48vamr9pc40euump5qs084a7cdgvs9u6yn82nh9lr3s62asfv45r4s0fqa239j3dc5e4f7z24heug43zpg6qdy5kawq63hhzl",
"cosigner#1": "acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"
},
"template": {
"all": [
"cosigner#0",
"cosigner#1"
]
}
}
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f | jq .delegation_script_template
{
"cosigners": {
"cosigner#0": "acct_shared_xvk1lk5eg5unj4fg48vamr9pc40euump5qs084a7cdgvs9u6yn82nh9lr3s62asfv45r4s0fqa239j3dc5e4f7z24heug43zpg6qdy5kawq63hhzl",
"cosigner#1": "acct_shared_xvk1mqrjad6aklkpwvhhktrzeef5yunla9pjs0jt9csp3gjcynxvumfjpk99hqxkyknn30ya6l5yjgeegs5ltmmsy70gm500sacvllvwt6qjztknp"
},
"template": {
"all": [
"cosigner#0",
"cosigner#1"
]
}
}
Spending transactions
Our shared wallets are now fully-operational. In particular, we can access the addresses of the wallets via the GET /shared-wallets/{walletId}/addresses
endpoint. We can get the address and fund it from the faucet so that we can spend from our wallet later on.
After we have funded the "Cosigner#0
wallet", we can see that the balance has changed on the "Cosigner#1
wallet" as well. Both wallets have the same balance:
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f | jq .balance
{
"available": {
"quantity": 10000000000,
"unit": "lovelace"
},
"reward": {
"quantity": 0,
"unit": "lovelace"
},
"total": {
"quantity": 10000000000,
"unit": "lovelace"
}
}
> curl -X GET http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8 | jq .balance
{
"available": {
"quantity": 10000000000,
"unit": "lovelace"
},
"reward": {
"quantity": 0,
"unit": "lovelace"
},
"total": {
"quantity": 10000000000,
"unit": "lovelace"
}
}
Of course, this is expected: both co-owners have full knowledge of the balance; however, neither can spend from it on their own. As required by the payment template, the wallet needs signatures from both co-owners before funds can be spent.
Let's make a simple transaction. The owner of the "`Cosigner#0` wallet" will construct and sign a transaction on their end, and then hand a CBOR blob of this transaction to the owner of the "`Cosigner#1` wallet", who can sign it on their own and submit it to the network.
We will be using the following shared wallet transaction endpoints:
POST /shared-wallets/{walletId}/transactions-construct
POST /shared-wallets/{walletId}/transactions-sign
POST /shared-wallets/{walletId}/transactions-submit
At any point during the process of constructing and signing a transaction, both "`Cosigner#0`" and "`Cosigner#1`" may decode the transaction from its CBOR representation using the `POST /shared-wallets/{walletId}/transactions-decode` endpoint. Decoding makes it possible to see all details of the transaction, such as its `inputs`, `outputs`, `fees`, `deposits`, `metadata`, or `witness_count`.
Cosigner#0
Constructing
"`Cosigner#0`" constructs a transaction sending 10₳ to an external address using the `POST /shared-wallets/{walletId}/transactions-construct` endpoint. In response, they receive a CBOR representation of the unsigned transaction.
> curl -X POST http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/transactions-construct \
-d '{
"payments":[
{
"address":"addr_test1qq86pgrf7yyzp3gysxgqwt4ahegzslygvzh77eq2qwg66pedkztnw78s6gkt3eux35sllasu0x6grejewlrzaus8kekq7cp9ck",
"amount":{
"quantity":10000000,
"unit":"lovelace"
}
}
]
}' \
-H "Content-Type: application/json" | jq .transaction
"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHtNCAChAYGCAYKCAFgciTss5ZpPDifzTTlpVVbNSaZGERzre1z05XYVWYIAWBzHmpkAwWCRnUTfcTy0aZK2LLLM36Y/4gZMlJhm9fY="
Signing
"`Cosigner#0`" signs the transaction with the `POST /shared-wallets/{walletId}/transactions-sign` endpoint:
> curl -X POST http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/transactions-sign \
-d '{
"passphrase":"Secure Passphrase",
"transaction":"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHzCCAChAYGCAYKCAFgciTss5ZpPDifzTTlpVVbNSaZGERzre1z05XYVWYIAWBzHmpkAwWCRnUTfcTy0aZK2LLLM36Y/4gZMlJhm9fY="
}' \
-H "Content-Type: application/json" | jq .transaction
"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHzCCACiAIGCWCBOK9nJ9IxJt2gddyZ2fUHC4nre84+EbPQdL60OP0m4ZlhA11gVIBWSlDZl8NQyzV4v9U8AuX8n2UqJK9+Wt1sqnM7jAeYtbuyAN5weTaVV+NDVlpKVg3piowyC1eqZBeWWAQGBggGCggBYHIk7LOWaTw4n8005aVVWzUmmRhEc63tc9OV2FVmCAFgcx5qZAMFgkZ1E33E8tGmStiyyzN+mP+IGTJSYZvX2"
and in response, receives a CBOR representation of the partially-signed transaction.
Cosigner#1
Now "`Cosigner#0`" can hand over the CBOR of the partially-signed transaction to "`Cosigner#1`", who can then sign it on their own.
Signing
"`Cosigner#1`" signs the transaction with `POST /shared-wallets/{walletId}/transactions-sign` and, in response, gets a CBOR representation of the fully-signed transaction.
> curl -X POST http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/transactions-sign \
-d '{
"passphrase":"Secure Passphrase",
"transaction":"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHzCCACiAIGCWCBOK9nJ9IxJt2gddyZ2fUHC4nre84+EbPQdL60OP0m4ZlhA11gVIBWSlDZl8NQyzV4v9U8AuX8n2UqJK9+Wt1sqnM7jAeYtbuyAN5weTaVV+NDVlpKVg3piowyC1eqZBeWWAQGBggGCggBYHIk7LOWaTw4n8005aVVWzUmmRhEc63tc9OV2FVmCAFgcx5qZAMFgkZ1E33E8tGmStiyyzN+mP+IGTJSYZvX2"
}' \
-H "Content-Type: application/json" | jq .transaction
"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHzCCACiAIKCWCBOK9nJ9IxJt2gddyZ2fUHC4nre84+EbPQdL60OP0m4ZlhA11gVIBWSlDZl8NQyzV4v9U8AuX8n2UqJK9+Wt1sqnM7jAeYtbuyAN5weTaVV+NDVlpKVg3piowyC1eqZBeWWAYJYIKRn9pUgwcx7GBoIBVJd+9Gh8I/YOwFFK7KyXUXSfrWnWEAqOkD9ljkgYwqwPpuxV+4iJw0hf4PPXA7o9XVxZ4wqT6vSnu1hvsyM4ezuCP/ahTMQ7RzZHTLa1BDx6YawGHQHAYGCAYKCAFgciTss5ZpPDifzTTlpVVbNSaZGERzre1z05XYVWYIAWBzHmpkAwWCRnUTfcTy0aZK2LLLM36Y/4gZMlJhm9fY="
Submission
The transaction is now fully signed by both co-owners, "`Cosigner#0`" and "`Cosigner#1`". At this point, either of them can submit it to the network from their respective wallet via the `POST /shared-wallets/{walletId}/transactions-submit` endpoint.
> curl -X POST http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/transactions-submit \
-d '{
"transaction":"hKUAgYJYIP8Fwso5i5ovF8bJJuqZ4xOdvXC3mXJCN2O2vDIxQQOYAAGCogBYOQAPoKBp8QggxQSBkAcuvb5QKHyIYK/vZAoDka0HLbCXN3jw0iy454aNIf/2HHm0geZZd8Yu8ge2bAEaAJiWgKIAWDkwZ/CRYzhreLBgWdOZFReuuejo5RIhvNwLOk51lWQGg+0Ii6yF8VB/6jOUJdwEhwqE3Od9sHl14eQBGwAAAAJTcJsvAhoAArJRAxoBDHzCCACiAIKCWCBOK9nJ9IxJt2gddyZ2fUHC4nre84+EbPQdL60OP0m4ZlhA11gVIBWSlDZl8NQyzV4v9U8AuX8n2UqJK9+Wt1sqnM7jAeYtbuyAN5weTaVV+NDVlpKVg3piowyC1eqZBeWWAYJYIKRn9pUgwcx7GBoIBVJd+9Gh8I/YOwFFK7KyXUXSfrWnWEAqOkD9ljkgYwqwPpuxV+4iJw0hf4PPXA7o9XVxZ4wqT6vSnu1hvsyM4ezuCP/ahTMQ7RzZHTLa1BDx6YawGHQHAYGCAYKCAFgciTss5ZpPDifzTTlpVVbNSaZGERzre1z05XYVWYIAWBzHmpkAwWCRnUTfcTy0aZK2LLLM36Y/4gZMlJhm9fY="
}' \
-H "Content-Type: application/json" | jq
{
"id": "d443719bbd3e4301aa34791823b2b7821757e843509d29918006e5ca26ca368c"
}
The transaction has been submitted successfully, and the `POST /shared-wallets/{walletId}/transactions-submit` endpoint returns a transaction identifier in its response.
We can use this identifier to look up the transaction from either shared wallet, using the `GET /shared-wallets/{walletId}/transactions` or `GET /shared-wallets/{walletId}/transactions/{transactionId}` endpoints.
"`Cosigner#0` wallet":
> curl -X GET http://localhost:8090/v2/shared-wallets/2a0ebd0cceab2161765badf2e389b26e0961de2f/transactions/d443719bbd3e4301aa34791823b2b7821757e843509d29918006e5ca26ca368c | jq
{
"amount": {
"quantity": 10176721,
"unit": "lovelace"
},
...
"direction": "outgoing",
...
"fee": {
"quantity": 176721,
"unit": "lovelace"
},
...
"`Cosigner#1` wallet":
> curl -X GET http://localhost:8090/v2/shared-wallets/5e46668c320bb4568dd25551e0c33b0539668aa8/transactions/d443719bbd3e4301aa34791823b2b7821757e843509d29918006e5ca26ca368c | jq
{
"amount": {
"quantity": 10176721,
"unit": "lovelace"
},
...
"direction": "outgoing",
...
"fee": {
"quantity": 176721,
"unit": "lovelace"
},
...
Delegation
Delegation of shared wallets will be supported soon.
Installation
There are a number of ways to obtain cardano-wallet.
Daedalus installer
The `cardano-wallet` backend is included in the Daedalus installation. This is convenient if you already have Daedalus installed, but note that the bundled version of `cardano-wallet` may not be the latest.
Pre-built binaries from GitHub release page
Release builds of `cardano-wallet` for Linux, macOS, and Windows are available at:
https://github.com/cardano-foundation/cardano-wallet/releases
These release bundles include the recommended version of `cardano-node`, according to the release matrix.
Direct download URLs
The release packages can be downloaded directly from the GitHub servers using a command-line tool like `wget` or `cURL`. For example, one can download and unpack a pre-packaged Linux binary for `cardano-wallet@v2020-04-07` with:
> curl -L https://github.com/cardano-foundation/cardano-wallet/releases/download/v2020-04-07/cardano-wallet-v2020-04-07-linux64.tar.gz | tar xvz
Docker
See Docker.
Nix/NixOS
See NixOS.
Compile from source
See Building.
If you feel brave enough and want to compile everything from source, please refer to Building, or the equivalent documentation in each source repository (instructions often appear in `README.md`).
As a prerequisite, you will need to install and configure Nix or cabal, depending on your tool of choice.
Summary
Repository | Releases | Linux | MacOS | Windows |
---|---|---|---|---|
cardano-node | releases | ✔️ | ✔️ | ✔️ |
cardano-db-sync | releases | ✔️ | ✔️ | ❌ |
cardano-submit-api | releases | ✔️ | ✔️ | ❌ |
cardano-graphql | releases | ✔️ | ✔️ | ❌ |
cardano-rosetta | releases | ✔️ | ✔️ | ❌ |
cardano-wallet | releases | ✔️ | ✔️ | ✔️ |
Running with Docker
Docker images are continuously built and deployed to Docker Hub under specific tags. Using Docker provides the fastest and easiest user experience for setting up the Cardano stack. You should prefer this solution over building from source unless you have really good reasons not to. The following images are available for each component of the Adrestia architecture:
Repository | Tags | Documentation |
---|---|---|
inputoutput/cardano-node | master , MAJ.MIN.PATCH , latest | link |
cardanofoundation/cardano-wallet | byron , YYYY.MM.DD-byron , latest | Docker |
Tag Naming Scheme
Tag | Contents |
---|---|
latest | Points to the latest stable image for the corresponding component. This is also the tag to which Docker defaults when pulling without an explicit tag. It typically points to the latest known release, which happens at the end of an iteration cycle. Depending on the project / component, the iteration cycle may vary from 1 to 2 weeks. |
MAJ.MIN.PATCH or YYYY.MM.DD | Must match actual releases of the corresponding component. Refer to each component's release notes to know which release tags are available. |
master | Points to the very tip of the development branch. This is therefore not recommended for production but can be useful to try out features before they are officially released. |
Examples
For example, in order to use `cardano-node@1.10.0`, one can simply run:
> docker pull inputoutput/cardano-node:1.10.0
Similarly, one can pull `cardano-wallet@v2021-08-11` with:
> docker pull cardanofoundation/cardano-wallet:2021.8.11
About version compatibility
For version compatibility between components, please refer to the compatibility matrix on each component's main page (e.g. cardano-wallet).
Downloading the Docker image
To get the latest release of `cardano-wallet`, run:
docker pull cardanofoundation/cardano-wallet:latest
Running the Docker container for cardano-wallet
To run basic CLI commands, use:
> docker run --rm cardanofoundation/cardano-wallet:latest --help
See cli for full documentation of the CLI.
Building the Docker image locally
Ensure that you have Nix installed and the IOHK binary cache enabled (instructions).
Then run this command from the `cardano-wallet` git repo:
> docker load -i $(nix build --json .#dockerImage | jq -r '.[0].outputs.out')
If you have no local changes, the build should be downloaded from the IOHK binary cache then loaded into your local Docker image storage.
Inspecting the contents of the Docker image
The default entrypoint of the image is `/bin/start-cardano-wallet-shelley`. If you need to run a shell inside the Docker image, use the bash shell as the entrypoint:
> docker run --rm -it --entrypoint bash cardanofoundation/cardano-wallet:latest
Docker compose
One can also use docker-compose to quickly spin up `cardano-wallet` together with a supported block producer. This is useful for a quick start, or as a baseline for development.
cardano-wallet/docker-compose.yml is an example `docker-compose.yml` combining the latest versions of `cardano-wallet` and `cardano-node`.
To give it a spin, simply launch:
wget https://raw.githubusercontent.com/cardano-foundation/cardano-wallet/master/docker-compose.yml
NETWORK=mainnet docker-compose up
There is also an example configuration for cardano-graphql.
Running cardano-wallet with NixOS
Running without installation
The following command will download and run a given release of `cardano-wallet` using Nix:
> nix run github:cardano-foundation/cardano-wallet/v2022-01-18 -- version
v2022-01-18 (git revision: ce772ff33623e2a522dcdc15b1d360815ac1336a)
It's also possible to run the very latest version from the master branch on GitHub:
> nix run github:cardano-foundation/cardano-wallet -- --help
...
Wrapper script with pre-configured network settings
To run a wallet on mainnet:
> CARDANO_NODE_SOCKET_PATH=../cardano-node/node.socket
> nix run github:cardano-foundation/cardano-wallet#mainnet/wallet
Installing into user profile
> nix profile install github:cardano-foundation/cardano-wallet/v2022-01-18
> cardano-wallet version
v2022-01-18 (git revision: ce772ff33623e2a522dcdc15b1d360815ac1336a)
NixOS Module
A NixOS service definition for the cardano-wallet server is available by importing the flake's nixosModule attribute into your NixOS configuration, or by importing nix/nixos.
The cardano-wallet server can then be activated and configured:
{
description = "Flake example with cardano-wallet NixOS module";
  inputs.cardano-wallet.url = "github:cardano-foundation/cardano-wallet";
outputs = { self, cardano-wallet }@inputs: {
nixosModules.example = { config, ...}: {
imports = [
inputs.cardano-wallet.nixosModule
];
      services.cardano-wallet = {
enable = true;
walletMode = "mainnet";
nodeSocket = config.services.cardano-node.socketPath;
poolMetadataFetching = {
enable = true;
smashUrl = "https://smash.cardano-mainnet.iohk.io";
};
tokenMetadataServer = "https://tokens.cardano.org";
};
};
};
}
See nix/nixos/cardano-wallet-service.nix
for the other configuration options (such as the genesisFile
option to run cardano-wallet on a testnet) and complete documentation.
Command-Line Interface
The CLI is a proxy to the wallet server, which is required for most commands. Commands are turned into corresponding API calls and submitted to a running server. A few commands do not require an active server and can be run "offline" (e.g. recovery-phrase generate).
The wallet command-line interface (abbrev. CLI) provides a convenient way of using the cardano-wallet HTTP Application Programming Interface (abbrev. API). The wallet API is an HTTP service used to manage wallets, make payments, and update wallet data such as addresses and keys. The wallet CLI can start the wallet server, or run a number of commands against a running server, and supports most functions of the API itself.
The intended audience of the CLI are users who run a wallet API server outside of Daedalus and work with the cardano-wallet API directly: these include application developers, stake pool operators, and exchanges.
The wallet CLI converts commands into the corresponding API calls, submits them to a running server, and displays the server's responses in a readable way in the terminal.
Pre-Requisites
How to Run
You can explore the wallet CLI with:
> cardano-wallet --help
Then, you can use a list of commands for various purposes, such as:
- list, create, update, or delete wallets
- create, submit, or forget transactions, and estimate transaction fees
- list, create, and import wallet addresses
- view network information and parameters
- manage private and public keys
Each sub-command will also provide some additional help when passed --help
. For example:
> cardano-wallet transaction --help
Commands
> cardano-wallet --help
The CLI commands for wallet
, transaction
and address
only output valid JSON on stdout
. So you may redirect the output to a file with >
or pipe it into utility software like jq
!
serve
Serve the API that listens for commands/actions. Before launching, the user should start cardano-node.
Usage: cardano-wallet serve [--listen-address HOST] --node-socket FILE
[--sync-tolerance DURATION]
[--random-port | --port INT]
[--ui-random-port | --ui-port INT]
[--ui-deposit-random-port | --ui-deposit-port INT]
[--tls-ca-cert FILE --tls-sv-cert FILE
--tls-sv-key FILE] (--mainnet | --testnet FILE)
[--database DIR] [--shutdown-handler]
[--pool-metadata-fetching ( none | direct | SMASH-URL )]
[--token-metadata-server URL]
[--trace-NAME SEVERITY]
Serve API that listens for commands/actions.
Available options:
-h,--help Show this help text
--help-tracing Show help for tracing options
--listen-address HOST Specification of which host to bind the API server
to. Can be an IPv[46] address, hostname, or '*'.
(default: 127.0.0.1)
--node-socket FILE Path to the node's domain socket file (POSIX) or pipe
name (Windows). Note: Maximum length for POSIX socket
files is approx. 100 bytes. Note: Windows named pipes
are of the form \\.\pipe\cardano-node
--sync-tolerance DURATION
time duration within which we consider being synced
with the network. Expressed in seconds with a
trailing 's'. (default: 300s)
--random-port serve wallet API on any available port (conflicts
with --port)
--port INT port used for serving the wallet API. (default: 8090)
--ui-random-port serve the personal wallet UI on any available port
(conflicts with --ui-port)
--ui-port INT port used for serving the personal wallet UI.
--ui-deposit-random-port serve the deposit wallet UI on any available port
(conflicts with --ui-deposit-port)
--ui-deposit-port INT port used for serving the deposit wallet UI.
--tls-ca-cert FILE A x.509 Certificate Authority (CA) certificate.
--tls-sv-cert FILE A x.509 Server (SV) certificate.
--tls-sv-key FILE The RSA Server key which signed the x.509 server
certificate.
--mainnet Use Cardano mainnet protocol
--testnet FILE Path to the byron genesis data in JSON format.
--database DIR use this directory for storing wallets. Run in-memory
otherwise.
--shutdown-handler Enable the clean shutdown handler (exits when stdin
is closed)
--pool-metadata-fetching ( none | direct | SMASH-URL )
Sets the stake pool metadata fetching strategy.
Provide a URL to specify a SMASH metadata proxy
server, use "direct" to fetch directly from the
registered pool URLs, or "none" to completely disable
stake pool metadata. The initial setting is "none"
and changes by either this option or the API will
persist across restarts.
--token-metadata-server URL
Sets the URL of the token metadata server. If unset,
metadata will not be fetched. By using this option,
you are fully trusting the operator of the metadata
server to provide authentic token metadata.
--log-level SEVERITY Global minimum severity for a message to be logged.
Individual tracers severities still need to be
configured independently. Defaults to "DEBUG".
--trace-NAME SEVERITY Individual component severity for 'NAME'. See
--help-tracing for details and available tracers.
Minimal Arguments
More information on starting the wallet server can be found in Start a server.
We also recommend passing a --database option pointing to a directory on the file system; without this option, the wallet maintains its state in memory, which vanishes once the process stops.
Runtime flags
By default, the wallet runs on a single core, which is sufficient for most 'normal users'. Applications running larger wallets, such as exchanges, should configure the server to use multiple cores, since some blocking database operations may otherwise have a visible negative effect on overall server behavior. This can be achieved by providing specific runtime flags to the serve command, delimited by +RTS <flags> -RTS. To configure how many cores are available to the server, use the -N flag. For example, to configure 2 cores, do:
> cardano-wallet serve ... +RTS -N2 -RTS
Using +RTS -N4 -RTS tells the server to use 4 cores. Note that there is little performance benefit between 2 and 4 cores for a server running a single wallet, but there are visible performance improvements from 1 to 2.
Domain socket/named pipe
On POSIX systems (i.e. Linux and macOS), a UNIX domain socket is used for communication between the cardano-wallet and cardano-node processes.
On these systems, the --node-socket
argument must be a path to a socket file. Note that there is a limitation on socket path lengths of about 100 characters or so.
On Windows systems, a Named Pipe is used instead.
Windows Named Pipes do not have filenames. So on Windows systems, the --node-socket
argument must be a pipe name. Pipe names are a string of the form \\.\pipe\name
. For example, \\.\pipe\cardano-wallet
.
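The OS-dependent choice above can be sketched in Python, as a launcher script might do it. The `/tmp` path and the default pipe name here are illustrative assumptions, not values mandated by cardano-wallet:

```python
import platform

def node_socket_arg(name: str = "cardano-node") -> str:
    """Pick a --node-socket value appropriate for the current OS."""
    if platform.system() == "Windows":
        # Windows named pipes have no filename; they are strings of
        # the form \\.\pipe\<name>.
        return "\\\\.\\pipe\\" + name
    # POSIX: a filesystem path to the node's UNIX domain socket.
    # Keep it short -- socket paths are limited to roughly 100 bytes.
    return f"/tmp/{name}.socket"
```

The same value can then be passed verbatim as the `--node-socket` argument on either platform.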
Examples
Mainnet
> cardano-wallet serve \
--mainnet \
--node-socket CARDANO_NODE_SOCKET_PATH_OR_PIPE \
--database ./wallets-mainnet
Testnet
Note that for testnets, a byron genesis file is required (see pre-requisites), even though the network is in the shelley era. This is because the chain is synced from the beginning of the first era.
> cardano-wallet serve \
--testnet testnet-byron-genesis.json \
--node-socket CARDANO_NODE_SOCKET_PATH_OR_PIPE \
--database ./wallets-testnet
Metadata
For the wallet to show stake pool metadata, you need to set --pool-metadata-fetching ( none | direct | SMASH-URL )
. And for the wallet to show token metadata, you need to set --token-metadata-server URL
.
Logging options for serve
serve
accepts extra command-line arguments for logging (also called "tracing"). Use --help-tracing
to show the
options, the tracer names, and the possible log levels.
> cardano-wallet serve --help-tracing
Additional tracing options:
--log-level SEVERITY Global minimum severity for a message to be logged.
Defaults to "DEBUG" unless otherwise configured.
--trace-NAME=off Disable logging on the given tracer.
--trace-NAME=SEVERITY Set the minimum logging severity for the given
tracer. Defaults to "INFO".
The possible log levels (lowest to highest) are:
debug info notice warning error critical alert emergency
The possible tracers are:
application About start-up logic and the server's surroundings.
api-server About the HTTP API requests and responses.
wallet-engine About background wallet workers events and core wallet engine.
wallet-db About database operations of each wallet.
pools-engine About the background worker monitoring stake pools and stake pools engine.
pools-db About database operations on stake pools.
ntp-client About ntp-client.
network About network communication with the node.
example
Use these options to enable debug-level tracing for certain components of the wallet. For example, to log all database queries for the wallet databases, use:
> cardano-wallet serve --trace-wallet-db=debug ...
recovery-phrase generate
> cardano-wallet recovery-phrase generate [--size=INT]
Generates an English recovery phrase.
> cardano-wallet recovery-phrase generate
These words will be used to create a wallet later. You may also ask for a specific number of words using the --size
option:
> cardano-wallet recovery-phrase generate --size 21
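The accepted sizes correspond to BIP-39 mnemonic lengths: each word encodes 11 bits, of which 1/33 is checksum, so the entropy carried by a phrase is a fixed function of its length. A small sketch of that relationship (the mapping is a property of BIP-39, assumed here to match the CLI's accepted sizes):

```python
VALID_SIZES = (9, 12, 15, 18, 21, 24)  # word counts accepted by --size

def entropy_bits(word_count: int) -> int:
    """Entropy bits encoded by a BIP-39 recovery phrase of a given length.

    Each word carries 11 bits; 1 bit per 32 bits of entropy is checksum,
    so entropy = words * 11 * 32 / 33 (which simplifies to words * 32 / 3).
    """
    if word_count not in VALID_SIZES:
        raise ValueError(f"size must be one of {VALID_SIZES}")
    return word_count * 32 // 3
```

For example, the default 15-word phrase carries 160 bits of entropy, and a 24-word phrase carries 256 bits.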
wallet list
> cardano-wallet wallet list [--port=INT]
Lists all your wallets:
> cardano-wallet wallet list
wallet create from recovery-phrase
> cardano-wallet wallet create from-recovery-phrase from-genesis [--port INT] WALLET_NAME [--address-pool-gap INT]
Create a new wallet using a sequential address scheme. This is an interactive command that will prompt you for recovery-phrase words and password.
> cardano-wallet wallet create from-recovery-phrase from-genesis "My Wallet"
Please enter a 15–24 word recovery-phrase sentence: <enter generated recovery-phrase here>
(Enter a blank line if you do not wish to use a second factor.)
Please enter a 9–12 word recovery-phrase second factor: <skip or enter new recovery-phrase here>
Please enter a passphrase: ****************
Enter the passphrase a second time: ****************
After this, your new wallet will be created.
wallet create from recovery-phrase from-checkpoint
> cardano-wallet wallet create from-recovery-phrase from-checkpoint [--port INT] WALLET_NAME [--address-pool-gap INT] --block-header-hash BLOCKHEADERHASH --slot-no SLOTNO
This command will create a wallet whose synchronization starts from the block specified by BLOCKHEADERHASH
and SLOTNO
. This will create a partial view of the wallet if the wallet has transactions before the checkpoint!
wallet create from recovery-phrase from-tip
> cardano-wallet wallet create from-recovery-phrase from-tip [--port INT] WALLET_NAME [--address-pool-gap INT]
This command will create a wallet whose synchronization starts from the current node tip. Useful for creating an empty wallet for testing.
wallet get
> cardano-wallet wallet get [--port=INT] WALLET_ID
Fetches the wallet with specified wallet id:
> cardano-wallet wallet get 2512a00e9653fe49a44a5886202e24d77eeb998f
wallet utxo
> cardano-wallet wallet utxo [--port=INT] WALLET_ID
Visualize a wallet's UTxO distribution in the form of a histogram with a logarithmic scale. The distribution's data is returned by the API in JSON format, e.g.:
{
"distribution": {
"10": 1,
"100": 0,
"1000": 8,
"10000": 14,
"100000": 32,
"1000000": 3,
"10000000": 0,
"100000000": 12,
"1000000000": 0,
"10000000000": 0,
"100000000000": 0,
"1000000000000": 0,
"10000000000000": 0,
"100000000000000": 0,
"1000000000000000": 0,
"10000000000000000": 0,
"45000000000000000": 0
}
}
which could be plotted as:
│
100 ─
│
│ ┌───┐
10 ─ ┌───┐ │ │ ┌───┐
│ ┌───┐ │ │ │ │ │ │
│ │ │ │ │ │ │ ┌───┐ │ │
1 ─ ┌───┐ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ ╷ │ │ ╷ │ │ ╷ ╷ │ │ ╷
└─┘ └─│───────│─┘ └─│─┘ └─│─┘ └─│─┘ └─│───────│─┘ └──────│────────────
10μ₳ 100μ₳ 1000μ₳ 0.1₳ 1₳ 10₳ 100₳ 1000₳
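The bucketing behind this histogram can be sketched in Python. The bucket bounds are powers of ten in lovelace, capped at the maximum supply (45e15 lovelace), following the JSON sample above; the exact edge rules are an assumption here:

```python
from collections import OrderedDict

def utxo_distribution(values):
    """Bucket UTxO lovelace values into a log10-scaled histogram.

    Each bucket's key is its upper bound; a UTxO is counted in the first
    bucket whose bound is >= its value, mirroring the JSON sample above.
    """
    bounds = [10 ** i for i in range(1, 17)] + [45_000_000_000_000_000]
    dist = OrderedDict((str(b), 0) for b in bounds)
    for v in values:
        for b in bounds:
            if v <= b:
                dist[str(b)] += 1
                break
    return dist
```

For instance, a wallet with UTxOs of 5, 42 and 1,000,000 lovelace would report one entry each in the "10", "100" and "1000000" buckets.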
wallet utxo-snapshot
> cardano-wallet wallet utxo-snapshot [--port INT] WALLET_ID
Gets a snapshot of the wallet's entire UTxO set, in JSON format.
Each entry in the list contains the following fields:
| Field | Description |
|---|---|
| ada | the actual ada quantity of this UTxO entry |
| ada_minimum | the minimum ada quantity permitted by the protocol |
| assets | quantities of all other assets included in this UTxO entry |
{
"entries": [
{
"ada_minimum": {
"quantity": 1666665,
"unit": "lovelace"
},
"ada": {
"quantity": 15582575,
"unit": "lovelace"
},
"assets": [
{
"asset_name": "",
"quantity": 1503,
"policy_id": "789ef8ae89617f34c07f7f6a12e4d65146f958c0bc15a97b4ff169f1"
},
{
"asset_name": "4861707079436f696e",
"quantity": 4958,
"policy_id": "919e8a1922aaa764b1d66407c6f62244e77081215f385b60a6209149"
}
]
},
...
]
}
⚠ This endpoint was intended to be used for debugging purposes. The output format is subject to change at any time.
wallet update name
> cardano-wallet wallet update name [--port=INT] WALLET_ID STRING
Updates the name of the wallet with the given wallet id:
> cardano-wallet wallet update name 2512a00e9653fe49a44a5886202e24d77eeb998f NewName
wallet update passphrase
> cardano-wallet wallet update passphrase [--port=INT] WALLET_ID
Interactive prompt to update the wallet's master passphrase (the old passphrase is required).
> cardano-wallet wallet update passphrase 2512a00e9653fe49a44a5886202e24d77eeb998f
Please enter your current passphrase: **********
Please enter a new passphrase: **********
Enter the passphrase a second time: **********
wallet delete
> cardano-wallet wallet delete [--port=INT] WALLET_ID
Deletes wallet with specified wallet id:
> cardano-wallet wallet delete 2512a00e9653fe49a44a5886202e24d77eeb998f
transaction create
> cardano-wallet transaction create [--port=INT] WALLET_ID [--metadata=JSON] [--ttl=SECONDS] --payment=PAYMENT...
Creates and submits a new transaction:
> cardano-wallet transaction create 2512a00e9653fe49a44a5886202e24d77eeb998f \
--payment 22@Ae2tdPwUPEZ...nRtbfw6EHRv1D \
--payment 5@Ae2tdPwUPEZ7...pVwEPhKwseVvf \
--metadata '{ "0":{ "string":"cardano" } }'
This creates a transaction that sends 22 lovelace to Ae2tdPwUPEZ...nRtbfw6EHRv1D
and 5 lovelace to Ae2tdPwUPEZ7...pVwEPhKwseVvf
from wallet with id 2512a00e9653fe49a44a5886202e24d77eeb998f.
For more information about the --metadata
option, see TxMetadata.
transaction fees
> cardano-wallet transaction fees [--port=INT] WALLET_ID [--metadata=JSON] [--ttl=SECONDS] --payment=PAYMENT...
Estimates fee for a given transaction:
> cardano-wallet transaction fees 2512a00e9653fe49a44a5886202e24d77eeb998f \
--payment 22@Ae2tdPwUPEZ...nRtbfw6EHRv1D \
--payment 5@Ae2tdPwUPEZ7...pVwEPhKwseVvf \
--metadata '{ "0":{ "string":"cardano" } }'
This estimates fees for a transaction that sends 22 lovelace to Ae2tdPwUPEZ...nRtbfw6EHRv1D
and 5 lovelace to Ae2tdPwUPEZ7...pVwEPhKwseVvf
from wallet with id 2512a00e9653fe49a44a5886202e24d77eeb998f.
transaction list
> cardano-wallet transaction list [--port INT] WALLET_ID [--start TIME] [--end TIME] [--order ORDER] [--simple-metadata] [--max_count MAX_COUNT]
List the transactions associated with a wallet.
> cardano-wallet transaction list 2512a00e9653fe49a44a5886202e24d77eeb998f \
--start 2018-09-25T10:15:00Z \
--end 2019-11-21T10:15:00Z \
--order ascending \
--max_count 10
This lists at most 10 transactions between 2018-09-25T10:15:00Z and 2019-11-21T10:15:00Z in ascending order.
transaction submit
> cardano-wallet transaction submit [--port INT] BINARY_BLOB
Submit a transaction prepared and signed outside of the wallet:
> cardano-wallet transaction submit 00bf02010200000...d21942304
Sends the transaction identified by a hex-encoded BINARY_BLOB of the externally-signed transaction.
transaction forget
> cardano-wallet transaction forget [--port INT] WALLET_ID TRANSACTION_ID
Forget a pending transaction for a given wallet:
> cardano-wallet transaction forget 2512a00e9653fe49a44a5886202e24d77eeb998f 3e6ec12da4414aa0781ff8afa9717ae53ee8cb4aa55d622f65bc62619a4f7b12
transaction get
> cardano-wallet transaction get [--port INT] WALLET_ID TRANSACTION_ID
Get a transaction with the specified id:
> cardano-wallet transaction get 2512a00e9653fe49a44a5886202e24d77eeb998f 3e6ec12da4414aa0781ff8afa9717ae53ee8cb4aa55d622f65bc62619a4f7b12
address list
> cardano-wallet address list [--port=INT] WALLET_ID [--state=STRING]
List all known addresses (used or not) and their corresponding status.
> cardano-wallet address list 2512a00e9653fe49a44a5886202e24d77eeb998f
address create
> cardano-wallet address create [--port INT] [--address-index INDEX] WALLET_ID
Create a new address for a random wallet.
> cardano-wallet address create 03f4c150aa4626e28d02be95f31d3c79df344877
Please enter your passphrase: *****************
Ok.
{
"state": "unused",
"id": "2w1sdSJu3GVgr1ic6aP3CEwZo9GAhLzigdBvCGY4JzEDRbWV4HUNpZdHf2n5fV41dGjPpisDX77BztujAJ1Xs38zS8aXvN7Qxoe"
}
address import
> cardano-wallet address import [--port INT] WALLET_ID ADDRESS
Import an address belonging to a random wallet.
network information
> cardano-wallet network information [--port=INT]
View network information and syncing progress between the node and the blockchain.
> cardano-wallet network information
network parameters
> cardano-wallet network parameters [--port=INT] EPOCH_NUMBER
View network parameters. EPOCH_NUMBER can be latest or a valid epoch number not later than the current one, i.e., 0, 1, ...
> cardano-wallet network parameters latest
network clock
> cardano-wallet network clock
View NTP offset for cardano-wallet server in microseconds.
> cardano-wallet network clock
Ok.
{
"status": "available",
"offset": {
"quantity": -30882,
"unit": "microsecond"
}
}
At this stage, the command is not supported on Windows. When invoked on Windows, it returns status: unavailable in the response message.
key from-recovery-phrase
Extract the root extended private key from a recovery phrase. New recovery phrases can be generated using recovery-phrase generate
.
Usage: cardano-wallet key from-recovery-phrase ([--base16] | [--base58] | [--bech32]) STYLE
Convert a recovery phrase to an extended private key
Available options:
-h,--help Show this help text
STYLE Byron | Icarus | Jormungandr | Shelley
The recovery phrase is read from stdin.
Example:
> cardano-wallet recovery-phrase generate | cardano-wallet key from-recovery-phrase Icarus
xprv12qaxje8hr7fc0t99q94jfnnfexvma22m0urhxgenafqmvw4qda0c8v9rtmk3fpxy9f2g004xj76v4jpd69a40un7sszdnw58qv527zlegvapwaee47uu724q4us4eurh52m027kk0602judjjw58gffvcqzkv2hs
key child
Derive a child key from the root private key. The parent key is read from standard input.
Usage: cardano-wallet key child ([--base16] | [--base58] | [--bech32]) [--legacy] DERIVATION-PATH
Derive child keys from a parent public/private key
Available options:
-h,--help Show this help text
DERIVATION-PATH Slash-separated derivation path.
Hardened indexes are marked with a 'H' (e.g. 1852H/1815H/0H/0).
The parent key is read from stdin.
Example:
> cardano-wallet recovery-phrase generate | cardano-wallet key from-recovery-phrase Icarus > root.xprv
cat root.xprv | cardano-wallet key child 44H/1815H/0H/0
xprv13parrg5g83utetrwsp563w7hps2chu8mwcwqcrzehql67w9k73fq8utx6m8kgjlhle8claexrtcu068jgwl9zj5jyce6wn2k340ahpmglnq6x8zkt7plaqjgads0nvmj5ahav35m0ss8q95djl0dcee59vvwkaya
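The DERIVATION-PATH syntax above ('H'-suffixed segments are hardened) can be illustrated with a small parser. The hardened offset 0x80000000 is the standard BIP-32 convention; the function name is hypothetical:

```python
HARDENED = 0x80000000  # BIP-32 offset for hardened child indexes

def parse_derivation_path(path: str):
    """Parse a slash-separated path such as '1852H/1815H/0H/0' into
    numeric child indexes, applying the hardened offset for segments
    marked with a trailing 'H'."""
    indexes = []
    for segment in path.split("/"):
        if segment.endswith("H"):
            indexes.append(int(segment[:-1]) + HARDENED)
        else:
            indexes.append(int(segment))
    return indexes
```

So `1852H/1815H/0H/0` becomes three hardened indexes followed by one non-hardened index 0.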
key public
Extract the public key of an extended private key. Keys can be obtained using key from-recovery-phrase
and key child
.
Usage: cardano-wallet key public ([--base16] | [--base58] | [--bech32])
Get the public counterpart of a private key
Available options:
-h,--help Show this help text
The private key is read from stdin.
Example:
> cardano-wallet recovery-phrase generate | cardano-wallet key from-recovery-phrase Icarus > root.xprv
cat root.xprv | cardano-wallet key public
xpub1le8gm0m5cesjzzjqlza4476yncp0yk2jve7cce8ejk9cxjjdama24hudzqkrxy4daxwmlfq6ynczj338r7f5kzs43xs2fkmktekd4fgnc8q98
key inspect
Usage: cardano-wallet key inspect
Show information about a key
Available options:
-h,--help Show this help text
The parent key is read from stdin.
stake-pool list
Usage: cardano-wallet stake-pool list [--port INT] [--stake STAKE]
List all known stake pools.
Available options:
-h,--help Show this help text
--port INT port used for serving the wallet API. (default: 8090)
--stake STAKE The stake you intend to delegate, which affects the
rewards and the ranking of pools.
version
> cardano-wallet version
Show the software version.
Bash Shell Command Completion
:gift_heart: For bash/zsh auto-completion, put the following script in your /etc/bash_completion.d
:
# /etc/bash_completion.d/cardano-wallet.sh
_cardano-wallet()
{
local CMDLINE
local IFS=$'\n'
CMDLINE=(--bash-completion-index $COMP_CWORD)
for arg in ${COMP_WORDS[@]}; do
CMDLINE=(${CMDLINE[@]} --bash-completion-word $arg)
done
COMPREPLY=( $(cardano-wallet "${CMDLINE[@]}") )
}
complete -o filenames -F _cardano-wallet cardano-wallet
HTTP API Documentation
Hardware recommendations
As cardano-wallet runs side by side with cardano-node, the hardware requirements for cardano-wallet largely depend on those of cardano-node. Current hardware requirements for cardano-node are published on cardano-node's release page. In most cases, hardware that meets cardano-node's requirements is also sufficient to run cardano-wallet.
Here are some general hardware recommendations for running cardano-wallet:
- Processor: A multicore processor with a clock speed of at least 1.6 GHz or faster is recommended, but a faster processor is better for optimal performance.
- Memory: A minimum of 4 GB of RAM is recommended, although more is better.
- Storage: A minimum of 5 GB of free storage for wallet data is recommended.
- Network: A stable internet connection with at least 5 Mbps upload and download speeds is recommended for syncing the blockchain data and performing transactions.
Again, these are general recommendations; the actual hardware requirements may vary depending on factors such as the number and size of the wallets being managed and the specific usage patterns of the software. In particular, the above recommendations are sufficient to handle a fairly large wallet with 15k addresses and 15k transactions in its history. Smaller wallets require fewer resources; larger ones require more.
Wallet security considerations
The cardano-wallet HTTP service is designed to be used by trusted users only. Any other use is neither supported nor planned.
In order to ensure that only trusted users may access the HTTP service, cardano-wallet uses TLS client certificate authentication. For example, this is how the Daedalus wallet frontend ensures that only this frontend can access the cardano-wallet API. In other words, trust is established through a TLS client certificate. Such certificates need to be placed in the disk storage used by the cardano-wallet process before the HTTP service is started.
It’s worth mentioning that a trusted user can attack the wallet through the HTTP service in many ways: they can view sensitive information, delete a wallet’s store, and so on. Thus, an attacker who manages to become a trusted user can do considerable damage.
It’s also worth mentioning that a trusted user with access to the HTTP API is not able to spend the wallet’s funds without gaining access to additional information such as the passphrase or the wallet secret key. TLS prevents eavesdropping on the passphrase, and the wallet secret key is encrypted with the passphrase.
Integrations
A non-exhaustive list of applications and libraries which use the cardano-wallet
backend.
Elixir
- ricn/cardanoex - an Elixir library for accessing the cardano-wallet REST API.
Go
- echovl/cardano-go - a Go library for creating Go applications that interact with the Cardano blockchain, as well as a CLI to manage Cardano wallets.
- godano/cardano-wallet-client - a Go library for accessing the cardano-wallet REST API.
Ruby
- piotr-iohk/cardano-wallet-rb - a Ruby gem for accessing the cardano-wallet REST API.
- piotr-iohk/ikar - a helper web app to interact with cardano-wallet.
Scala/Java
- input-output-hk/psg-cardano-wallet-api - a Scala and Java client for the Cardano Wallet API.
- uniVocity/envlp-cardano-wallet - a Java client for accessing the cardano-wallet REST API.
- bloxbean/cardano-client-lib - a client library for Cardano in Java, currently integrated through Blockfrost, but planning to integrate with cardano-wallet.
TypeScript/JavaScript
- input-output-hk/daedalus - a graphical Cardano wallet user interface using Electron and the React Polymorph framework.
- IntersectMBO/cardano-launcher - a library for configuring, starting and stopping cardano-wallet and cardano-node.
- tango-crypto/cardano-wallet-js - a TypeScript/JavaScript module for accessing the cardano-wallet REST API.
Unknown
- Medusa AdaWallet - a web-based Cardano wallet.
EKG and Prometheus Metrics
It is possible to enable EKG and Prometheus monitoring on the cardano-wallet server by setting environment variables that configure ports and host names for those services:
CARDANO_WALLET_EKG_PORT
CARDANO_WALLET_PROMETHEUS_PORT
CARDANO_WALLET_EKG_HOST
CARDANO_WALLET_PROMETHEUS_HOST
Monitoring is disabled by default. It is enabled by setting CARDANO_WALLET_EKG_PORT and/or CARDANO_WALLET_PROMETHEUS_PORT.
Enabling monitoring
To enable monitoring, simply set the environment variables when invoking the cardano-wallet serve command, as follows:
> CARDANO_WALLET_EKG_PORT=8070 \
CARDANO_WALLET_PROMETHEUS_PORT=8080 \
cardano-wallet serve --port 8090 \
--node-socket /path_to/cardano-node.socket \
--mainnet \
--database ./wallet-db
To also see EKG GC and memory statistics, start the wallet with:
> cardano-wallet +RTS -T -RTS <other-args>
Following the example above, metrics are available on localhost at the corresponding ports:
- EKG: http://localhost:8070
> curl -H "Accept: application/json" http://localhost:8070/ | jq
{
"iohk-monitoring version": {
"type": "l",
"val": "0.1.10.1"
},
"ekg": {
"server_timestamp_ms": {
"type": "c",
"val": 1606997751752
}
},
"rts": {
"gc": {
"gc_cpu_ms": {
"type": "c",
"val": 0
},
Prometheus metrics
> curl http://localhost:8080/metrics
cardano_wallet_metrics_Stat_rtpriority_int 0
cardano_wallet_metrics_Stat_itrealvalue_int 0
rts_gc_par_max_bytes_copied 0
cardano_wallet_metrics_IO_syscr_int 3722
cardano_wallet_metrics_Stat_minflt_int 6731
cardano_wallet_metrics_Stat_cminflt_int 0
Binding monitoring
By default, both EKG and Prometheus monitoring are bound to localhost. You can bind them to a different hostname or an external IP using:
CARDANO_WALLET_EKG_HOST
CARDANO_WALLET_PROMETHEUS_HOST
For instance:
> CARDANO_WALLET_EKG_PORT=8070 \
CARDANO_WALLET_PROMETHEUS_PORT=8080 \
CARDANO_WALLET_EKG_HOST=0.0.0.0 \
CARDANO_WALLET_PROMETHEUS_HOST=0.0.0.0 \
cardano-wallet serve --port 8090 \
--node-socket /path_to/cardano-node.socket \
--mainnet \
--database ./wallet-db
Plutus Application Backend
Pre-requisites
- Install the Plutus Application Backend.
- In order to balance Plutus transactions, we need funds in the wallet. On a testnet, we can request tADA from the faucet.
Overview
This guide shows how to invoke Plutus contracts with cardano-wallet.
Workflow
Once you have created a smart contract with the PAB, you can execute it via cardano-wallet.
Three endpoints need to be invoked to follow the workflow:
- POST /wallets/{walletId}/transactions-balance - for balancing transaction from PAB.
- POST /wallets/{walletId}/transactions-sign - for signing transaction.
- POST /wallets/{walletId}/transactions-submit - for submitting transaction to the network.
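The three steps above can be sketched as request-payload builders. The endpoint paths and JSON field names come from the workflow described here; the helper function names are hypothetical:

```python
import json

def endpoint(wallet_id: str, step: str) -> str:
    """Path for one workflow step: 'balance', 'sign' or 'submit'."""
    return f"/v2/wallets/{wallet_id}/transactions-{step}"

def balance_payload(unbalanced_tx: str, redeemers=(), inputs=()) -> dict:
    """Body for POST .../transactions-balance (tx as provided by the PAB)."""
    return {"transaction": unbalanced_tx,
            "redeemers": list(redeemers),
            "inputs": list(inputs)}

def sign_payload(passphrase: str, balanced_tx: str) -> dict:
    """Body for POST .../transactions-sign (tx returned by the balance step)."""
    return {"passphrase": passphrase, "transaction": balanced_tx}

def submit_payload(signed_tx: str) -> dict:
    """Body for POST .../transactions-submit (tx returned by the sign step)."""
    return {"transaction": signed_tx}
```

Each dict can be serialized with `json.dumps` and sent with `Content-Type: application/json`, as the curl examples below do; the `transaction` value returned by each step feeds the next.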
Balance transaction
The Plutus Application Backend provides the payload of an unbalanced transaction. This transaction needs to be balanced with the wallet's inputs so that it can be submitted to the network. The response from this endpoint returns the fee and coin_selection of the balanced transaction, as well as the CBOR-encoded transaction represented in base64 encoding. We will need the returned transaction value to pass on to the POST /wallets/{walletId}/transactions-sign endpoint.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-balance \
-d '{"transaction":"84a500800d80018183581d704d72cf569a339a18a7d9302313983f56e0d96cd45bdcb1d6512dca6a1a001e84805820923918e403bf43c34b4ef6b48eb2ee04babed17320d8d1b9ff9ad086e86f44ec02000e80a10481d87980f5f6","redeemers":[],"inputs":[]}' \
-H "Content-Type: application/json" | jq .transaction
hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBAw2AAYKDWB1wTXLPVpozmhin2TAjE5g/VuDZbNRb3LHWUS3KahoAHoSAWCCSORjkA79Dw0tO9rSOsu4Eur7RcyDY0bn/mtCG6G9E7IJYOQDsKgV69YfvMZdbfIT11OqtWL9bv7n++Jx0f+TDFwodcRLCv14tOV5BoBhV6ODYaS1MNdWnvKCjgBq2bfNOAhoAApgxDoALWCAvUOolRvjOAgykW/zyq+sC/xivIoNGb4iK5IkYSz0tOaEEgdh5gPX2
Sign transaction
Once the transaction is balanced, we need to sign it using our wallet's secure passphrase, passing in the previously returned CBOR-encoded transaction. The sign endpoint again returns a CBOR-encoded transaction, which then needs to be passed to the POST /wallets/{walletId}/transactions-submit endpoint.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-sign \
-d '{"passphrase":"Secure Passphrase",
"transaction":"hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBAw2AAYKDWB1wTXLPVpozmhin2TAjE5g/VuDZbNRb3LHWUS3KahoAHoSAWCCSORjkA79Dw0tO9rSOsu4Eur7RcyDY0bn/mtCG6G9E7IJYOQDsKgV69YfvMZdbfIT11OqtWL9bv7n++Jx0f+TDFwodcRLCv14tOV5BoBhV6ODYaS1MNdWnvKCjgBq2bfNOAhoAApgxDoALWCAvUOolRvjOAgykW/zyq+sC/xivIoNGb4iK5IkYSz0tOaEEgdh5gPX2"}' \
-H "Content-Type: application/json" | jq .transaction
hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBAw2AAYKDWB1wTXLPVpozmhin2TAjE5g/VuDZbNRb3LHWUS3KahoAHoSAWCCSORjkA79Dw0tO9rSOsu4Eur7RcyDY0bn/mtCG6G9E7IJYOQDsKgV69YfvMZdbfIT11OqtWL9bv7n++Jx0f+TDFwodcRLCv14tOV5BoBhV6ODYaS1MNdWnvKCjgBq2bfNOAhoAApgxDoALWCAvUOolRvjOAgykW/zyq+sC/xivIoNGb4iK5IkYSz0tOaIAgYJYIAVUtVUzi4FodRYiBmO9mD5hQGo2YjjYDoCgw5gn+/w9WEAyVhMWNiK88QKW6HBXIVxQyu0E+9epkIQbCQwNjKur5ORLojHxIZtZDDfkT6caz0yxp92t4Y7rDwsDw4geMOkJBIHYeYD19g==
Submit transaction
We now have our balanced and signed CBOR-encoded transaction, represented in base64 encoding. We can submit it to the network with the POST /wallets/{walletId}/transactions-submit endpoint.
> curl -X POST http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions-submit \
-d '{"transaction":"hKYAgYJYIJs6ATvbNwo5xpOkjHfUzr9Cv4zuFLFicFwWwpPC4ltBAw2AAYKDWB1wTXLPVpozmhin2TAjE5g/VuDZbNRb3LHWUS3KahoAHoSAWCCSORjkA79Dw0tO9rSOsu4Eur7RcyDY0bn/mtCG6G9E7IJYOQDsKgV69YfvMZdbfIT11OqtWL9bv7n++Jx0f+TDFwodcRLCv14tOV5BoBhV6ODYaS1MNdWnvKCjgBq2bfNOAhoAApgxDoALWCAvUOolRvjOAgykW/zyq+sC/xivIoNGb4iK5IkYSz0tOaIAgYJYIAVUtVUzi4FodRYiBmO9mD5hQGo2YjjYDoCgw5gn+/w9WEAyVhMWNiK88QKW6HBXIVxQyu0E+9epkIQbCQwNjKur5ORLojHxIZtZDDfkT6caz0yxp92t4Y7rDwsDw4geMOkJBIHYeYD19g=="}' \
-H "Content-Type: application/json"
{"id":"c287cd5a752ff632e07747109193ed8f8b8e446211563951e7f8e470ed859782"}
We can monitor the status of the submitted transaction using the GET /wallets/{walletId}/transactions/{transactionId} endpoint.
> curl -X GET http://localhost:8090/v2/wallets/1b0aa24994b4181e79116c131510f2abf6cdaa4f/transactions/c287cd5a752ff632e07747109193ed8f8b8e446211563951e7f8e470ed859782 | jq
{
"status": "in_ledger",
...
"amount": {
"quantity": 2170033,
"unit": "lovelace"
},
"inserted_at": {
"height": {
"quantity": 3324947,
"unit": "block"
},
"epoch_number": 187,
"time": "2022-02-16T11:22:12Z",
"absolute_slot_number": 50641316,
"slot_number": 226916
},
...
FAQ
Why aren't my unused addresses imported when I restore a wallet?
This is by virtue of how the blockchain works. An unused address is, by definition, unused: it doesn't exist on the chain and only exists locally, in the context of the software that generated it. Different software may use different rules to generate addresses. For example, in the past, cardano-sl wallets used a method called random derivation, where addresses were created from a root seed and a random index stored within the address itself. Because these indexes were random, it was not possible to restore randomly generated addresses which hadn't yet been used on chain!
More recently, cardano-wallet
has been using sequential derivation, which follows a very similar principle with the major difference that indexes are derived in sequence, starting from 0. Under this method, wallets aren't allowed to pre-generate too many addresses in advance. As a consequence, it is now possible to restore a wallet across many machines while keeping a consistent state.
I’ve noticed that other blockchains create accounts for wallets?
There are two sides to this question. Either you are referring to accounts as in Ethereum accounts, or to the accounts of hierarchical deterministic wallets.
In the first scenario, assets in the form of accounts are only supported in the Shelley era of Cardano and only for a specific use-case: rewards. Rewards are indeed implicitly available on the blockchain, to mitigate the risk of flooding the network at the end of every epoch with reward payouts! Hence, each core node keeps track of the current value of each reward account in the ledger. Money can be withdrawn from this account and is then turned into a UTxO. Please note that funds can never be manually sent to a reward account: a reward account is created when registering a staking key, via a specific type of transaction.
In the second case, please refer to the hierarchical deterministic wallets section in the Key concepts. Cardano wallets typically follow an HD tree of derivation as described in this section.
It seems like I have to install and configure many APIs and libraries, what is the fastest and most simple way to do this at once?
🐳 docker is your friend here! Every component is packaged as a docker image. Releases are tagged, and the very edge is always accessible. See the various docker guides in the components' repositories, and also how to compose services using docker-compose.
Is there a reason why I would have to build from source?
If you intend to contribute to Cardano by making code changes to one of the core components, then yes. We recommend using cabal for the best developer experience.
If you only intend to use the services as-is, then using either the pre-compiled release artifacts for your platform or a pre-packaged docker image is preferable.
Where is the faucet and how do I get test ADA?
- https://testnets.cardano.org/en/testnets/cardano/tools/faucet/
Wallet Backend Specifications
Where do the various notations come from?
As is often the case in mathematics, notations are described within the context of the paper, with some a priori hypotheses. For the Wallet Backend specification, the notation is inspired by the Z notation, in a slightly more lightweight form.
What is dom from Lemma 2.1?
There are multiple occurrences in the spec of expressions like (dom u ∩ ins) ◃ u. The meaning of dom u isn't clearly defined anywhere, but it refers to the set of keys from the mapping defined by u: txin ↦ txout. Hence, dom u refers to all txin available in u.
In Haskell, this translates to:
newtype UTxO = UTxO (Map TxIn TxOut)
dom :: UTxO -> Set TxIn
dom (UTxO utxo) = Set.fromList $ Map.keys utxo
How do I interpret (Ix -> TxOut) in the definition of Tx in fig. 1?
In the current wallet implementation, it corresponds to NonEmpty TxOut.
Are we going to update the formal specification?
Some elements of the specification are written according to the current wallet implementation. Some parts could be simplified or removed, in particular the bits concerning some metadata that we won't be implementing until a need for them is made clear. A few bits are also missing from the specification (like the fact that answering isOurs is a stateful operation when dealing with BIP-44, or foreign transactions coming from ADA certificate redemption). In the long run, we do want the specification to be updated and proved.
Address Derivation à la BIP-44
Are we going to support the old Random derivation scheme forever?
Short answer: yes. That said, we don't necessarily have to support the full set of features for wallets using the old derivation scheme, in order to encourage users to migrate to the sequential scheme (a.k.a. BIP-44). Most probably, we will forever have to support the old derivation scheme for a few features, like tracking the wallet's UTxO and balance, and allowing funds to be migrated to a wallet using the sequential scheme.
Coin selection
How many outputs can a single transaction have?
It depends. To make a transaction, our implementation currently selects UTxOs from those available in the wallet, in order to cover the outputs requested in a transaction. For every requested output, the wallet actually creates two outputs:
- The actual output to a target address
- A change output to a change address of the source wallet
Also, in practice, we strive to make these two outputs roughly equal in size, so that one cannot easily tell the change output from the actual one by looking at the transaction, thereby providing some privacy for users.
Incidentally, we consider every requested output in a transaction as an independent problem. This means that a single UTxO can only cover one of the outputs (and will, in practice, tend to be twice as big, so that we can generate an equivalent corresponding change output). As a consequence, in order to make a transaction with three target outputs, one needs at least three UTxOs that are each big enough to cover their output independently.
Finally, it's important to note that the fee calculation runs after the coin selection, and the fee is divvied across all change outputs. So in practice, the UTxOs only need to cover the outputs independently, but are considered together when adjusting for fees.
A few examples to make this more concrete (in the scenario below, fees are ~180000
):
// Both UTxOs can separately cover fee and outputs
Available UTxO: [200000, 200000]
Payment Distribution: [14, 42]
Result: Ok
// 2 UTxOs, each cannot separately cover fee and outputs, but jointly can
Available UTxO: [100000, 100000]
Payment Distribution: [14, 42]
Result: Ok
// Single UTxOs, big enough to cover for total requested amount & fee, but multiple outputs
Available UTxO: [400000]
Payment Distribution: [14, 42]
Result: Error - UTxO not fragmented enough
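The rule illustrated by these examples can be sketched with a hypothetical helper (this is an illustration of the fragmentation rule above, not the actual cardano-wallet selection algorithm):

```python
def can_cover(utxos, outputs, fee):
    """Illustration: every requested output must be covered by its own
    UTxO, while the fee is considered jointly across change outputs."""
    # a single UTxO cannot cover two outputs: "UTxO not fragmented enough"
    if len(utxos) < len(outputs):
        return False
    pool = sorted(utxos, reverse=True)
    for amount in sorted(outputs, reverse=True):
        if pool[0] < amount:
            return False  # no remaining UTxO can cover this output alone
        pool.pop(0)       # this UTxO is now spoken for
    # fees run after selection and are divvied across all change outputs
    return sum(utxos) >= sum(outputs) + fee

can_cover([200000, 200000], [14, 42], 180000)  # Ok
can_cover([100000, 100000], [14, 42], 180000)  # Ok: fee covered jointly
can_cover([400000], [14, 42], 180000)          # Error: not fragmented enough
```

Matching the largest remaining UTxO to the largest remaining output is enough to decide feasibility here, because each output only needs one UTxO that independently covers it.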
What is the security issue with re-using addresses?
In practice, there's none.
Miscellaneous
How do I write a question in this FAQ?
Use the <details> and <summary> HTML tags. The <summary> tag is nested inside the <details> tag and holds the question. The answer goes below it, and can contain any arbitrary markdown or HTML supported / allowed by GitHub. This produces a nice, readable output.
e.g.
<details>
<summary>What is love?</summary>
Baby don't hurt me.
</details>
Design Documents
Information about the design of the wallet.
This includes:
- Documentation of internal design decisions.
- Concepts and terminology.
- Specifications of user-facing APIs.
- Prototypes.
Architecture Diagram
flowchart TB;
  subgraph local system
    Daedalus -- calls --> cardano-launcher;
    cardano-launcher -. spawns .-> cardano-wallet;
    cardano-launcher -. spawns .-> cardano-node;
    %% by HTTP REST API
    Daedalus -- queries --> cardano-wallet;
    %% Local socket/named pipe
    cardano-wallet -- queries --> cardano-node;
  end
  subgraph Internet
    %% HTTP API
    cardano-wallet -- queries --> SMASH;
    %% Network protocol
    cardano-node -- syncs --> blockchain;
  end
  class cardano-wallet adrestia;
  class Daedalus,cardano-launcher daedalus;
  class cardano-node,SMASH cardano;
  class blockchain other;
  click cardano-wallet call mermaidClick("wallet-backend");
  click cardano-launcher mermaidClick;
  click cardano-node call mermaidClick("node");
  click SMASH call mermaidClick("smash");
  click Daedalus call mermaidClick("daedalus");
This is how the software components fit together in the Daedalus scenario.
See also: Adrestia Architecture.
Node
The core cardano-node, which participates in the Cardano network, and maintains the state of the Cardano blockchain ledger.
Wallet Backend
cardano-wallet is an HTTP REST API recommended for third-party wallets and small exchanges that do not want to manage UTxOs for transactions themselves. Use it to send and receive payments from hierarchical deterministic wallets on the Cardano blockchain via HTTP REST or a command-line interface.
Cardano Launcher
cardano-launcher is a TypeScript package which handles the details of starting and stopping the Node and Wallet Backend.
Daedalus
Daedalus is a user-friendly desktop application to manage Cardano wallets.
SMASH
SMASH is a proxy for stake pool metadata.
Adrestia Architecture
Adrestia is a collection of products which makes it easier to integrate with Cardano.
It comes in different flavours: SDK or high-level APIs. Depending on the use-cases you have and the control that you seek, you may use any of the components below.
Services
Service applications for integrating with Cardano.
- cardano-wallet: HTTP REST API for managing UTxOs, and much more.
- cardano-graphql: HTTP GraphQL API for exploring the blockchain.
- cardano-rosetta: Rosetta implementation for Cardano.
- cardano-submit-api: HTTP API for submitting signed transactions.
Software Libraries
- cardano‑addresses: Address generation, derivation & mnemonic manipulation.
- cardano-coin-selection: Algorithms for coin selection and fee balancing.
- cardano-transactions: Utilities for constructing and signing transactions.
- bech32: Haskell implementation of the Bech32 address format (BIP 0173).
High-Level Dependency Diagram
flowchart TB
  cardano-submit-api --> cardano-node;
  cardano-wallet --> cardano-node;
  cardano-db-sync --> cardano-node;
  cardano-db-sync -- writes --> PostgreSQL[(PostgreSQL)];
  SMASH -- reads --> PostgreSQL;
  cardano-graphql -- reads --> PostgreSQL;
  cardano-rosetta -- reads --> PostgreSQL;
  cardano-explorer[cardano-explorer fab:fa-react] --> cardano-graphql;
  cardano-wallet --> SMASH;
  daedalus[Daedalus fab:fa-react] --> cardano-wallet;
  click cardano-submit-api mermaidClick;
  click cardano-wallet mermaidClick;
  click cardano-db-sync mermaidClick;
  click SMASH mermaidClick;
  click cardano-graphql mermaidClick;
  click cardano-rosetta mermaidClick;
  click cardano-explorer mermaidClick;
  click PostgreSQL href "https://postgresql.org";
  click daedalus href "https://github.com/input-output-hk/daedalus";
  %% Styles for these classes are in static/adrestia.css.
  class cardano-wallet,cardano-explorer,cardano-graphql,cardano-rosetta adrestia;
  class cardano-node,cardano-submit-api,cardano-db-sync,SMASH cardano;
  class daedalus daedalus;
  class PostgreSQL other;
Component Synopsis
cardano-node
The core cardano-node, which participates in the Cardano network, and maintains the state of the Cardano blockchain ledger.
Cardano Network Protocol
An implementation of the protocol is here, deployed as stake pool nodes and relay nodes to form the Cardano network.
cardano-wallet
cardano-wallet provides an HTTP REST API, recommended for third-party wallets and small exchanges that do not want to manage UTxOs for transactions themselves. Use it to send and receive payments from hierarchical deterministic wallets on the Cardano blockchain via HTTP REST or a command-line interface.
cardano‑launcher
This is a small Typescript package for NodeJS applications which manages the configuration and lifetime of cardano-wallet and cardano-node processes.
cardano-db-sync
This application stores blockchain data fetched from cardano-node in a PostgreSQL database to enable higher-level interfaces for blockchain exploration. It powers cardano-graphql.
cardano-graphql
A GraphQL API for Cardano, which also serves as the backend of Cardano Explorer.
cardano-submit-api
A small HTTP API for submitting transactions to a local cardano-node.
The transaction must be fully signed and CBOR-encoded. This could be done by cardano-cli, for example.
cardano-rosetta
Cardano-rosetta is an implementation of the Rosetta specification for Cardano. Rosetta is an open-source specification and set of tools that makes integrating with blockchains simpler, faster, and more reliable.
SMASH
The Stakepool Metadata Aggregation Server [for Hashes] is essentially a proxy for the metadata published by stake pool owners. It improves network performance by taking load off the various web servers which host the actual metadata.
Clients such as cardano-wallet
must verify the integrity of metadata served by a SMASH server by comparing the metadata's content hash with that in the stake pool registration certificate.
Component Relationships: Explorer Scenario
erDiagram
  CARDANO-NODE ||--|{ CARDANO-SUBMIT-API : connects-to
  CARDANO-NODE ||--|{ CARDANO-DB-SYNC : depends-on
  CARDANO-DB-SYNC ||--|| POSTGRESQL : dumps-into
  POSTGRESQL ||--|| SMASH : is-queried
  POSTGRESQL ||--|| CARDANO-GRAPHQL : is-queried
  POSTGRESQL ||--|| CARDANO-ROSETTA : is-queried
  CARDANO-GRAPHQL ||--|{ EXPLORER : depends-on
Component Relationships: Wallet Scenario
flowchart TB
  subgraph local system
    Daedalus -- calls --> cardano-launcher;
    cardano-launcher -. spawns .-> cardano-wallet;
    cardano-launcher -. spawns .-> cardano-node;
    %% by HTTP REST API
    Daedalus -- queries --> cardano-wallet;
    %% Local socket/named pipe
    cardano-wallet -- queries --> cardano-node;
  end
  subgraph Internet
    %% HTTP API
    cardano-wallet -- queries --> SMASH;
    %% Network protocol
    cardano-node -- syncs --> blockchain;
  end
  class cardano-wallet adrestia;
  class Daedalus,cardano-launcher daedalus;
  class cardano-node,SMASH cardano;
  class blockchain other;
  click cardano-wallet mermaidClick;
  click cardano-launcher mermaidClick;
  click cardano-node mermaidClick;
  click SMASH mermaidClick;
  click Daedalus href "https://github.com/input-output-hk/daedalus";
  click blockchain call mermaidClick("cardano-network-protocol");
Components
APIs
Name / Link | Description | Byron | Jörm | Shelley | Mary | Alonzo |
---|---|---|---|---|---|---|
cardano-wallet | JSON/REST API for managing UTxOs in HD wallets | ✔ | ✔ | ✔ | ✔ | ✔ |
cardano-graphql | GraphQL/HTTP API for browsing on-chain data | ✔ | ❌ | ✔ | ✔ | ✔ |
cardano-rosetta | Implementation of Rosetta spec for Cardano | ✔ | ✔ | 🚧 | ||
Deprecated | ✔ | ❌ | ✔ | ❌ | ❌ |
CLIs
Name / Link | Description | Byron | Jörm | Shelley | Mary | Alonzo |
---|---|---|---|---|---|---|
bech32 | Human-friendly Bech32 address encoding | N/A | ✔ | ✔ | ✔ | ✔ |
cardano-wallet | Command-line for interacting with cardano-wallet API | ✔ | ✔ | ✔ | ✔ | ✔ |
cardano‑addresses | Addresses and mnemonic manipulation & derivations | ✔ | ✔ | ✔ | ✔ | ✔ |
Deprecated | ✔ | ❌ | ❌ | ❌ | ❌ |
Haskell SDKs
Name / Link | Description | Byron | Jörm | Shelley | Mary | Alonzo |
---|---|---|---|---|---|---|
bech32 | Human-friendly Bech32 address encoding | ✔ | ✔ | ✔ | ✔ | |
cardano‑addresses | Addresses and mnemonic manipulation & derivations | ✔ | ✔ | ✔ | ✔ | ✔ |
Deprecated | ✔ | ✔ | ✔ | ❌ | ❌ | |
Deprecated | ✔ | ❌ | ❌ | ❌ | ❌ |
Rust SDKs (+WebAssembly support)
Name / Link | Description | Byron | Jörmungandr | Shelley |
---|---|---|---|---|
cardano-serialization-lib | Binary serialization of on-chain data types | N/A | N/A | ✔ |
react-native-haskell-shelley | React Native bindings for cardano-serialization-lib | N/A | N/A | 🚧 |
JavaScript SDKs
Name / Link | Description | Byron | Jörm | Shelley | Mary | Alonzo |
---|---|---|---|---|---|---|
cardano‑launcher | Typescript library for starting and stopping cardano-wallet and cardano-node | ❌ | ✔ | ✔ | ✔ | |
cardano‑addresses | Address validation and inspection | ✔ | ✔ | ✔ | ✔ | ✔ |
Formal Specifications
Name / Link | Description |
---|---|
utxo-wallet-specification | Formal specification for a UTxO wallet |
Internal
These tools are used internally by other tools and do not benefit from the same care in documentation as the other tools above.
Name / Link | Description |
---|---|
persistent | Fork of the persistent Haskell library maintained for cardano-wallet |
Other Resources
Cardano Node
- User Guide
- Engineering Design Specification for Delegation and Incentives
- A Formal Specification of the Cardano Ledger
- Networking Protocol(s)
- Repository: IntersectMBO/ouroboros-network, includes specifications
- Specification: The Shelley Networking Protocol (Version 1.3.0, 16th July 2021)
- Low-Level Specifications
Plutus
Emurgo
Bitcoin
- BIP 0032 - Hierarchical Deterministic Wallets
- BIP 0039 - Mnemonic Code for Generating Deterministic Keys
- BIP 0044 - Multi-Account Hierarchy for Deterministic Wallets
- BIP 0173 - Base32 Address Format
Concepts
This section describes the concepts and terminology used in the Cardano Wallet API.
Cardano Eras
A blockchain "hard fork" is a change to the block producing software such that any new blocks produced would be invalid according to the old unchanged software. Therefore, the unchanged software would not accept these blocks into its chain. If nodes running the new and modified software continue to produce blocks on top of their chain, a "fork" will result, starting at the parent block of the first incompatible block.
Cardano manages hard forks by gating the new block generation code behind a feature flag. The network agrees upon a slot at which all nodes will enable the new block generation code. After that slot, the network is said to be operating in a new "era."
Agreement on the slot number (epoch, really) at which to switch to the new era happens via an on-chain voting protocol. If the voting proposal carries, node operators must ensure that their software version is compatible with the new era before the hard fork occurs.
Terminology
One might ask why the eras listed in Cardano.Api don't all appear in the Cardano Roadmap.
The roadmap is actually divided into phases, under which one or more eras (hard forks) may occur.
The Hard Fork Combinator
The mechanism whereby the Cardano Node operates in different modes - depending on which slot it is up to - is called the "Hard Fork Combinator".
The following seminars (company internal use only) are a good introduction:
Date | Title | Description |
---|---|---|
2020/05/15 | Hardfork Combinator I: Reconciling The Eras | This session will explain the technology behind the fabled hardfork combinator for Cardano. As this is quite a bit of machinery, and central to Shelley, we will take the time to talk about it in two sessions, to give enough background that everyone can follow along. This first session will focus on the combinator itself, and how advanced Haskell features allow us to travel from one typed world to another. In a future session, which yet needs to be scheduled, he will present on the subtleties of treating time in the presence of a hardfork. |
2020/06/05 | Hard Fork Combinator II: Time for a Hard Fork | As any computer scientist, physicist, or cross-timezone remote worker can testify, time is a tricky and unforgiving beast. Cardano, as a slot-based Proof of Stake system, is inherently time-driven, so it is vital to get time right. Across the hardfork from Byron to Shelley, the way in which time is divided into slots will change, which turned out to be a massive complicating factor. In this presentation, Edsko will show us how the hard fork combinator handles time across the fork point. |
In a nutshell, pretty much all types in Cardano.Api are indexed by the Era. The node provides a TimeInterpreter object via the LocalStateQuery protocol, which allows cardano-wallet to calculate what time a certain slot occurs at. Calculating the time of a given slot number is more complicated than it sounds.
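To see why, here is a toy sketch (invented era table and helper, not the actual TimeInterpreter API): the slot length changes across eras, so converting a slot to a wall-clock time means walking the era history rather than multiplying by a constant.

```python
from datetime import datetime, timedelta, timezone

def slot_to_time(slot, eras, system_start):
    """eras: [(first_slot, slot_length_in_seconds), ...] in era order."""
    time = system_start
    for i, (first, length) in enumerate(eras):
        nxt = eras[i + 1][0] if i + 1 < len(eras) else None
        if nxt is None or slot < nxt:
            # the slot falls inside this era
            return time + timedelta(seconds=(slot - first) * length)
        # accumulate the full duration of this earlier era
        time += timedelta(seconds=(nxt - first) * length)

# Toy table: 20-second slots before a fork at slot 100, 1-second slots after.
eras = [(0, 20), (100, 1)]
start = datetime(2017, 9, 23, tzinfo=timezone.utc)
slot_to_time(150, eras, start)  # 100 slots of 20s, then 50 slots of 1s
```

A naive `slot * slot_length` formula gives the wrong answer for any slot after the first fork, which is exactly the subtlety the hard fork combinator seminars above discuss.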
Configuration
Both cardano-node
and cardano-wallet
have command-line parameters and/or configuration options for the hard-fork combinator.
Genesis
Nodes must be initialised with a genesis config. The genesis config defines the initial state of the chain. There is the Byron era genesis obviously, and some subsequent eras (Shelley and Alonzo, at the time of writing) also have their own genesis.
Only the hash of a genesis config is stored on-chain. For mainnet, the genesis hash is used as the previous block header hash for the Epoch Boundary Block of epoch 0.
When cardano-wallet
connects to its local cardano-node
, it must provide the Byron genesis config hash. Hashes for subsequent genesis configs are contained on-chain within the voting system update proposals.
For mainnet, we have hardcoded the genesis hash; it will never change. For testnets, users must supply a genesis config to the CLI, so that its hash can be used when connecting to the local node.
Epoch Boundary Blocks
Epoch Boundary Blocks (EBBs) only existed in the Byron era, prior to their abolition with the Shelley hard fork. Confusingly, they were also referred to as "genesis" blocks. The EBB is the previous block of the first block in a Byron epoch. EBBs were calculated locally on each full node of mainnet, and were never transmitted across the network between nodes.
Instafork™ Technology
Hard forking to Alonzo (the latest era, at the time of writing) by natural methods requires several epochs' worth of network operation time to take effect. For disposable local testnets, this would delay startup and make our integration tests slower.
Therefore, cardano-node
can be configured to hard fork itself at the start of any given epoch number, without an update proposal. It's also possible to hard fork directly into all the eras at epoch 0. This is what we use for the integration tests local-cluster
. The cardano-node
configuration for this is:
TestShelleyHardForkAtEpoch: 0
TestAllegraHardForkAtEpoch: 0
TestMaryHardForkAtEpoch: 0
TestAlonzoHardForkAtEpoch: 0
Historical note: "Cardano Mode"
In theory, it is possible to run cardano-node
in a single-era mode, where the Hard Fork Combinator is completely disabled. This is called running in Byron-only or Shelley-only mode. Then there is full "Cardano Mode", which enables the Hard Fork Combinator, thereby allowing the software to pass through eras.
For cardano-cli
, there are --byron-mode
, --shelley-mode
, and --cardano-mode
switches, to tell the network client which mode the local node is using.
Cardano Mode is the default. I'm not aware of too many people using single-era modes for anything much.
Specifications
Formal specifications (and the actual implementations!) for each Cardano era exist in the IntersectMBO/cardano-ledger repo. These documents are surprisingly readable, and are our go-to source for answers about how the node operates. See the README.md
for links to PDFs and/or build instructions.
Decentralized Update Proposals
The input-output-hk/decentralized-software-updates repo contains research materials for a future implementation of decentralized software updates.
Recovery Phrases
Recovery phrases (also known as mnemonics) provide a way for humans to easily read and write down arbitrarily large numbers which would otherwise be very difficult to remember.
They use a predefined dictionary of simple words (available in many different languages) which map uniquely back and forth to a binary code.
Seeds / Entropy
The recovery phrase is normally used to encode a wallet's "seed": a secret random number which is 28 decimal digits long, or more!
A seed is also simply called entropy, implying that it is a sequence of bytes generated using high-quality randomness methods.
All keys belonging to a wallet are derived from the wallet seed.
See also the definition of "entropy" from information theory.
Encoding
The process for encoding recovery phrases is described in BIP-0039 § Generating the mnemonic. Below is a reformulation of this specification.
The allowed size of entropy is 96–256 bits and must be a multiple of 32 bits (4 bytes).
A checksum is appended to the initial entropy by taking the first $|ent| / 32$ bits of the SHA-256 hash of it, where $|ent|$ designates the entropy size in bits.
Then, the concatenated result is split into groups of 11 bits, each encoding a number from 0 to 2047 serving as an index into a known dictionary (see below).
Sentence Length | Entropy Size | Checksum Size |
---|---|---|
9 words | 96 bits (12 bytes) | 3 bits |
12 words | 128 bits (16 bytes) | 4 bits |
15 words | 160 bits (20 bytes) | 5 bits |
18 words | 192 bits (24 bytes) | 6 bits |
21 words | 224 bits (28 bytes) | 7 bits |
24 words | 256 bits (32 bytes) | 8 bits |
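The steps above can be reproduced in a few lines. This is a sketch of the BIP-0039 encoding only (entropy to word indices; the dictionary lookup is omitted), and the entropy used matches the worked example later in this section:

```python
import hashlib

def entropy_to_indices(entropy: bytes) -> list[int]:
    """Append the SHA-256 checksum and split into 11-bit word indices."""
    ent_bits = len(entropy) * 8
    assert 96 <= ent_bits <= 256 and ent_bits % 32 == 0
    cs_bits = ent_bits // 32  # e.g. 5 checksum bits for 160 bits of entropy
    bits = bin(int.from_bytes(entropy, "big"))[2:].zfill(ent_bits)
    checksum = bin(int.from_bytes(hashlib.sha256(entropy).digest(), "big"))[2:].zfill(256)
    bits += checksum[:cs_bits]
    return [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]

# 160 bits of entropy -> 15 word indices ("write maid rib ...")
entropy_to_indices(bytes.fromhex("fe90c2e3aa7422206d4b9df846a2453c7cfc99f5"))
```

Each resulting index is between 0 and 2047 and selects a word from the BIP-0039 dictionary.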
Dictionaries
Cardano uses the same dictionaries as defined in BIP-0039.
Example
This is an English recovery phrase, ordered left-to-right, then top-to-bottom.
write maid rib
female drama awake
release inhale weapon
crush mule jump
sound erupt stereo
It is 15 words long, so $15\times11 = 165$ bits of information, which is split into 160 bits of entropy and a 5-bit checksum.
Using the dictionary, these words resolve to:
2036 1072 1479
679 529 129
1449 925 1986
424 1162 967
1662 615 1708
Which is:
Seed:
01111100111 11100100110 01111101011
01010100111 01000010001 00010000001
10110101001 01110011101 11111000010
00110101000 10010001010 01111000111
11001111110 01001100111 110101
Checksum: 01100
Seed (base16): fe90c2e3aa7422206d4b9df846a2453c7cfc99f5
Checksum (base16): 0c
Master Key Generation
Master key generation is the process by which the wallet turns a recovery phrase (entropy) into a secure cryptographic key. Child keys can be derived from a master key to produce a derivation tree structure, as outlined in hierarchical-deterministic-wallets.
In Cardano, the master key generation algorithm differs depending on which style of wallet one is considering. In each case, however, the generation is a function from an initial seed to an extended private key (XPrv) composed of:
- 64 bytes: an extended Ed25519 secret key, composed of:
  - 32 bytes: an Ed25519 curve scalar, of which a few bits have been tweaked (see below)
  - 32 bytes: an Ed25519 binary blob used as an IV for signing
- 32 bytes: a chain code allowing secure child key derivation
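As a sanity check, the 96-byte layout above can be expressed directly (illustrative helper only):

```python
def split_xprv(xprv: bytes):
    """Split a 96-byte XPrv into (scalar, IV, chain code), per the layout above."""
    assert len(xprv) == 96
    scalar, iv, chain_code = xprv[:32], xprv[32:64], xprv[64:]
    return scalar, iv, chain_code
```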
Additional resources
- SLIP 0010
- BIP 0032
- BIP 0039
- RFC 8032
- CIP 3 — "Wallet key generation"
- CIP 1852 — "HD (Hierarchy for Deterministic) Wallets for Cardano"
History
Over the years, Cardano has used different styles of HD wallets. We categorize these wallets in the following terms:
Wallet Style | Compatible Products |
---|---|
Byron | Daedalus, Yoroi |
Icarus | Yoroi, Trezor |
Ledger | Ledger |
Each wallet style is based on the Ed25519 elliptic curve, though each differs in subtle ways highlighted in the next sections.
Pseudo-code
Byron
function generateMasterKey(seed) {
return hashRepeatedly(seed, 1);
}
function hashRepeatedly(key, i) {
(iL, iR) = HMAC
( hash=SHA512
, key=key
, message="Root Seed Chain " + UTF8NFKD(i)
);
let prv = tweakBits(SHA512(iL));
if (prv[31] & 0b0010_0000) {
return hashRepeatedly(key, i+1);
}
return (prv + iR);
}
function tweakBits(data) {
// * clear the lowest 3 bits
// * clear the highest bit
// * set the highest 2nd bit
data[0] &= 0b1111_1000;
data[31] &= 0b0111_1111;
data[31] |= 0b0100_0000;
return data;
}
Icarus
The Icarus master key generation style supports setting an extra password, as an arbitrary byte array of any size. This password acts as a second factor applied to cryptographic key retrieval. When the seed comes from an encoded recovery phrase, the password can therefore be used to add extra protection in case the recovery phrase is exposed.
function generateMasterKey(seed, password) {
let data = PBKDF2
( kdf=HMAC-SHA512
, iter=4096
, salt=seed
, password=password
, outputLen=96
);
return tweakBits(data);
}
function tweakBits(data) {
// on the ed25519 scalar leftmost 32 bytes:
// * clear the lowest 3 bits
// * clear the highest bit
// * clear the 3rd highest bit
// * set the highest 2nd bit
data[0] &= 0b1111_1000;
data[31] &= 0b0001_1111;
data[31] |= 0b0100_0000;
return data;
}
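The Icarus pseudo-code above translates almost directly to Python's standard library. This is a sketch following the pseudo-code, not audited production code:

```python
import hashlib

def generate_master_key_icarus(seed: bytes, password: bytes = b"") -> bytes:
    """PBKDF2-HMAC-SHA512 over the seed, then tweakBits, as above."""
    data = bytearray(hashlib.pbkdf2_hmac("sha512", password, seed, 4096, dklen=96))
    # tweakBits, applied to the ed25519 scalar (leftmost 32 bytes):
    data[0] &= 0b1111_1000   # clear the lowest 3 bits
    data[31] &= 0b0001_1111  # clear the top 3 bits (2nd-highest re-set below)
    data[31] |= 0b0100_0000  # set the 2nd-highest bit
    return bytes(data)
```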
More info
For a detailed analysis of the cryptographic choices and the above requirements, have a look at: Wallet Cryptography and Encoding
Ledger
function generateMasterKey(seed, password) {
let data = PBKDF2
( kdf=HMAC-SHA512
, iter=2048
, salt="mnemonic" + UTF8NFKD(password)
, password=UTF8NFKD(spaceSeparated(toMnemonic(seed)))
, outputLen=64
);
let cc = HMAC
( hash=SHA256
, key="ed25519 seed"
, message=UTF8NFKD(1) + seed
);
let (iL, iR) = hashRepeatedly(data);
return (tweakBits(iL) + iR + cc);
}
function hashRepeatedly(message) {
let (iL, iR) = HMAC
( hash=SHA512
, key="ed25519 seed"
, message=message
);
if (iL[31] & 0b0010_0000) {
return hashRepeatedly(iL + iR);
}
return (iL, iR);
}
function tweakBits(data) {
// * clear the lowest 3 bits
// * clear the highest bit
// * set the highest 2nd bit
data[0] &= 0b1111_1000;
data[31] &= 0b0111_1111;
data[31] |= 0b0100_0000;
return data;
}
Notes about BIP-44
Abstract
This document gives a semi-technical overview of multi-account hierarchy for deterministic wallets, their trade-offs and limitations.
Overview
BIP-44 is a standard that defines a logical hierarchy for deterministic wallets. It is built on top of another standard called BIP-32, which describes how to create hierarchical deterministic (abbrev. HD) wallets. In a nutshell, an HD wallet is a set of public/private key pairs that all descend from a common root key pair. The process of generating a new key pair from a parent key pair is known as key derivation. BIP-32 offers an approach for defining such a structure by showing how to derive child keys from a parent key, an index, and an elliptic curve. Cardano differs from BIP-32 in its choice of elliptic curve (namely Curve25519), but the base principle remains the same.
On top of this derivation mechanism, BIP-44 defines a set of 5 standard levels (i.e. subsequent derivation indexes) with a precise semantic. In particular:
-
The first derivation index is called the purpose and is always set to
44'
to indicate that the derivation indexes must be interpreted according to BIP-44. -
The second derivation index is called the coin_type and is set depending on the underlying coin. There's a public list of registered coin types. Ada is registered as
1815'
. The idea of having this second level is to allow a wallet to use a single root key to manage assets of different kinds. -
The third derivation index is called the account and is used to separate the space in multiple user entities for enhanced organization.
-
The fourth derivation index is called the change, which is meant to distinguish addresses that need to be treated as change from those treated as deposit addresses.
-
The fifth and last index is called the address, which is meant to increase sequentially to generate new addresses.
Each derivation level can contain up to 2^31 (2,147,483,648)
different values, which in total makes for a lot of possible combinations. In practice, the first two levels are always set to the same constants, and the third and fourth indexes have a very limited range.
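For illustration, the five levels can be written as a derivation path. This is a hypothetical helper; note also that Cardano's Shelley-era wallets use purpose 1852' per CIP-1852, while this note describes 44':

```python
HARDENED = 2 ** 31  # indexes >= 2^31 are "hardened" (written with a tick: 44')

def bip44_path(account: int, change: int, address_index: int) -> list[int]:
    """m / 44' / 1815' / account' / change / address_index"""
    return [44 + HARDENED, 1815 + HARDENED, account + HARDENED,
            change, address_index]

bip44_path(0, 0, 0)  # first deposit address of the first account
```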
Address & Account discovery
Because it is not possible for software to know after the fact which indexes were used to obtain a particular key pair, one needs a strategy for discovering addresses and knowing whether a given address belongs to a wallet. A naive approach would be to generate upfront all possible addresses for a given sub-path and to perform lookups in this table. Not only would this be extremely inefficient (it would take ~4 months to generate the entire space for a single sub-path on decent hardware), it would also be very much ineffective (for each address on the blockchain, one would have to look it up in an index containing more than two billion entries).
Instead, BIP-44 specifies a few key points that each software implementing it should follow. In particular, BIP-44 introduces the concept of a gap limit, which is the maximum number of consecutive unused addresses the software must keep track of at any time.
Example
Let's see how it works with an example in which we consider only a single derivation level (the last one) and for which the gap limit is set to `3`.
-
A new empty wallet is allowed to generate up to 3 addresses, with indexes `0`, `1` and `2`:

┌───┬───┬───┐
│ 0 │ 1 │ 2 │
└───┴───┴───┘
-
The wallet scans the blockchain looking for addresses which would have been generated from these indexes. An address is found at index `i=2`.

          ↓
┌───┬───┬───┐      ┌───┬───┬───┬───┬───┬───┐
│ 0 │ 1 │ 2 │  ⇒  │ 0 │ 1 │ ✓ │ 3 │ 4 │ 5 │
└───┴───┴───┘      └───┴───┴───┴───┴───┴───┘

Having discovered an address at index `i=2`, the wallet needs to generate 3 new addresses at indexes `i=3`, `i=4` and `i=5`, so that it always keeps track of 3 consecutive unused addresses.
The wallet continues scanning the blockchain and finds an address at index
i=0
.↓ ┌───┬───┬───┬───┬───┬───┐ ┌───┬───┬───┬───┬───┬───┐ │ 0 │ 1 │ ✓ │ 3 │ 4 │ 5 │ ⇒ │ ✓ │ 1 │ ✓ │ 3 │ 4 │ 5 │ └───┴───┴───┴───┴───┴───┘ └───┴───┴───┴───┴───┴───┘
Because discovering the address at index
i=0
does not reduce the number of consecutive unused addresses, there's no need to generate new addresses, there are still 3 consecutive unused addresses available. -
The wallet continues scanning the blockchain and finds an address at index
i=3
↓ ┌───┬───┬───┬───┬───┬───┐ ┌───┬───┬───┬───┬───┬───┬───┐ │ ✓ │ 1 │ ✓ │ 3 │ 4 │ 5 │ ⇒ │ ✓ │ 1 │ ✓ │ ✓ │ 4 │ 5 │ 6 │ └───┴───┴───┴───┴───┴───┘ └───┴───┴───┴───┴───┴───┴───┘
Here, we need to generate only one new address in order to maintain a number of 3 consecutive unused addresses.
-
This goes on and on until no more addresses are discovered on-chain at indexes the wallet knows of.
What if there's an address with index i=7
? It is never discovered: the wallet only recognizes addresses it has generated, and it only generates up to 3 (the gap limit) consecutive unused indexes past the last used one. An address at `i=7` would only be found after addresses at intermediate indexes had been used, shifting the window far enough.
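The window-maintenance rule illustrated above can be sketched in a few lines of Python. This is an illustrative model only, not code from cardano-wallet; `new_pool`, `trailing_unused` and `discover` are hypothetical helper names.

```python
def new_pool(gap_limit=3):
    # index -> used? The initial pool is `gap_limit` unused addresses.
    return {i: False for i in range(gap_limit)}

def trailing_unused(pool):
    # Count the consecutive unused indexes at the end of the pool.
    n = 0
    for i in sorted(pool, reverse=True):
        if pool[i]:
            break
        n += 1
    return n

def discover(pool, index, gap_limit=3):
    # Only addresses the wallet has already generated can be recognized.
    assert index in pool, "addresses beyond the pool are never discovered"
    pool[index] = True
    # Extend the pool so it always ends with `gap_limit` unused indexes.
    while trailing_unused(pool) < gap_limit:
        pool[max(pool) + 1] = False

pool = new_pool()
discover(pool, 2)    # pool grows to indexes 0..5, as in the example
discover(pool, 0)    # no growth needed
discover(pool, 3)    # pool grows to indexes 0..6
print(sorted(pool))  # [0, 1, 2, 3, 4, 5, 6]
```

Running the three discoveries of the example reproduces the windows shown in the diagrams above.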
Limitations of BIP-44
The discovery algorithm above works only because wallet software enforces and satisfies two invariants, which are stated in BIP-44 as follows:
-
Software should prevent a creation of an account if a previous account does not have a transaction history (meaning none of its addresses have been used before).
-
Address gap limit is currently set to 20. If the software hits 20 unused addresses in a row, it expects there are no used addresses beyond this point and stops searching the address chain.
For sequential discovery to work, one needs to fix some upper boundaries: without them, it would be impossible for wallet software to know when to stop generating addresses. Because each piece of software that follows BIP-44 also abides by these two rules, this generally works fine. Yet, there are some annoying consequences stemming from it:
Limit 1 - Unsuitable for exchanges
Exchanges typically need to generate many unused addresses upfront that their users can use for deposits. BIP-44 is a standard that is well tailored to personal wallets, where the address space grows with the user's needs. It is however a rather poor choice for exchanges, which typically use a single set of root credentials to manage the assets of thousands of users.
Possible mitigations:
- Exchanges could make use of a larger gap limit, violating the classic BIP-44 default but still following the same principle otherwise. Despite being inefficient, this could work for exchanges with a limited number of users.
- Another wallet scheme that would be better suited for exchanges should probably be considered.
Address Derivation
HD Random wallets (Byron / Legacy)
Note
This scheme is an underspecified, ad-hoc scheme designed in the early eras of Cardano. It is intrinsically entangled with the address format and relies on the ability to embed extra pieces of information into the addresses themselves in order to perform key derivation.
An initial key is created using the Ed25519 elliptic curve from a seed (encoded in the form of mnemonic words). From this wallet key, other keys can be derived. We therefore define a hierarchy of depth 2, where a single root key and two derivation indexes define a derivation path.
We typically represent that derivation path as follows:
m/account_ix'/address_ix'
where

- `m` refers to the root master key
- `/` symbolizes a new derivation step using a particular derivation index
- `'` indicates that keys at this step are considered hardened keys (the private key is required to derive new keys); indexes of hardened keys follow a particular convention and belong to the interval `[2³¹, 2³²-1]`
+--------------------------------------------------------------------------------+
| BIP-39 Encoded 128 bits Seed with CRC a.k.a 12 Mnemonic Words |
| |
| squirrel material silly twice direct ... razor become junk kingdom flee |
| |
+--------------------------------------------------------------------------------+
|
|
v
+--------------------------+ +-----------------------+
| Wallet Private Key |--->| Wallet Public Key | m
+--------------------------+ +-----------------------+
|
| account ix
v
+--------------------------+ +-----------------------+
| Account Private Key |--->| Account Public Key | m/account_ix'
+--------------------------+ +-----------------------+
|
| address ix
v
+--------------------------+ +-----------------------+
| Address Private Key |--->| Address Public Key | m/account_ix'/address_ix'
+--------------------------+ +-----------------------+
Fig 1. Byron HD Wallets Key Hierarchy
Note that wallets typically store keys in memory in an encrypted form, using an encryption passphrase. That passphrase prevents "free" key manipulation.
HD Sequential wallets (à la BIP-44)
BIP-0044 Multi-Account Hierarchy for Deterministic Wallets is a Bitcoin standard defining a structure and algorithm to build a hierarchy tree of keys from a single root private key. Note that this is the derivation scheme used by Icarus / Yoroi.
It is built upon BIP-0032 and is a direct application of BIP-0043. It defines a common representation of addresses as a multi-level tree of derivations:
m / purpose' / coin_type' / account_ix' / change_chain / address_ix
Cardano uses an extension / variation of BIP-44 described in CIP-1852.
+--------------------------------------------------------------------------------+
| BIP-39 Encoded Seed with CRC a.k.a Mnemonic Words |
| |
| squirrel material silly twice direct ... razor become junk kingdom flee |
| |
+--------------------------------------------------------------------------------+
|
|
v
+--------------------------+ +-----------------------+
| Wallet Private Key |--->| Wallet Public Key |
+--------------------------+ +-----------------------+
|
| purpose (e.g. 1852')
|
v
+--------------------------+
| Purpose Private Key |
+--------------------------+
|
| coin type (e.g. 1815' for ADA)
v
+--------------------------+
| Coin Type Private Key |
+--------------------------+
|
| account ix (e.g. 0')
v
+--------------------------+ +-----------------------+
| Account Private Key |--->| Account Public Key |
+--------------------------+ +-----------------------+
| |
| chain (e.g. 1 for change) |
v v
+--------------------------+ +-----------------------+
| Change Private Key |--->| Change Public Key |
+--------------------------+ +-----------------------+
| |
| address ix (e.g. 0) |
v v
+--------------------------+ +-----------------------+
| Address Private Key |--->| Address Public Key |
+--------------------------+ +-----------------------+
Fig 2. BIP-44 Wallets Key Hierarchy
Paths with a quote `'` refer to BIP-0032 hardened derivation paths (meaning that the private key is required to derive children). This leads to a couple of interesting properties:
- New addresses (change or not) can be generated from an account's public key alone.
- The derivation of addresses can be done sequentially / deterministically.
- If an account private key is compromised, it doesn't compromise other accounts.
This allows for external key-stores and off-loading of key derivation to some external source, such that a wallet could track a set of accounts without needing to know their private keys. This approach is discussed in more detail below.
NOTE: One other important aspect is more of a security concern. The introduction of this new address scheme makes it possible to change the underlying derivation function to a better one with stronger cryptographic properties. This is orthogonal to the structural considerations above, but is still a major motivation.
Differences between Cardano HD Sequential and BIP-44
In BIP-44, new derivation paths are obtained by computing points on an elliptic curve whose parameters are defined by secp256k1. Cardano's implementation relies on ed25519 instead, which provides better properties in terms of security and performance.
Also, we use purpose = `1852'` to clearly distinguish these paths from the original BIP-44 specification. Note however that Yoroi/Icarus wallets in the Byron era used purpose = `44'`.
Differences between HD Random and HD Sequential Wallets
Because BIP-44 public keys (HD Sequential) are generated in sequence, there's no need to maintain a derivation path in the address attributes (incidentally, this also makes addresses more private). Instead, we can generate pools of addresses up to a certain limit (called address gap) for known accounts and look for those addresses during block application.
We end up with two kinds of Cardano addresses:
| Address V1 (HD Random) | Address V2 (Icarus, HD Sequential) |
|---|---|
| Uses ed25519@V1 (buggy) curve implementation for derivation | Uses ed25519@V2 curve implementation for derivation |
| Has a derivation path attribute | Has no derivation path attribute |
| New address indexes are random | New address indexes are sequential |
| Needs the root private key and passphrase to create addresses | Needs only the parent account public key to create addresses |
| Root keys are obtained from 12-word mnemonic phrases | Root keys are obtained from mnemonic phrases of various lengths |
Although the idea behind the BIP-44 protocol is to allow a wallet to work in a mode where it doesn't know about private keys, we still want to preserve compatibility with the existing address scheme (which can't work without knowing private keys).
This leaves us with three operational modes for a wallet:
-
Compatibility Mode with private key: In this mode, addresses are derived using the wallet root private key and the classic derivation scheme. New address indexes are generated randomly.
-
Deterministic Mode without private key: Here, we reduce the definition of a wallet to a list of account public keys with no relationship whatsoever from the wallet's point of view. New addresses can be derived for each account at will and discovered using the address pool discovery algorithm described in BIP-44. Public keys are managed and provided by an external source.
-
Deterministic Mode with private key: This is a special case of the above. In this mode, the wallet maintains the key hierarchy itself, and leverages the previous mode for block application and restoration.
These operational modes are detailed in more depth below. Note that we'll call wallets that don't own their private key external wallets.
Byron Address Format
Internal Structure
+-------------------------------------------------------------------------------+
| |
| CBOR-Serialized Object with CRC¹ |
| |
+-------------------------------------------------------------------------------+
|
|
v
+-------------------------------------------------------------------------------+
| Address Root | Address Attributes | AddrType |
| | | |
| Hash (224 bits) | Der. Path² + Stake + NM | PubKey | (Script) | Redeem |
| | (open for extension) | (open for extension) |
+-------------------------------------------------------------------------------+
| |
| | +----------------------------------+
v | | Derivation Path |
+---------------------------+ |---->| |
| SHA3-256 | | | ChaChaPoly⁴ AccountIx/AddressIx |
| |> Blake2b 224 | | +----------------------------------+
| |> CBOR | |
| | |
| -AddrType | | +----------------------------------+
| -ASD³ (~AddrType+PubKey) | | | Stake Distribution |
| -Address Attributes | | | |
+---------------------------+ |---->| BootstrapEra | (Single | Multi) |
| +----------------------------------+
|
|
| +----------------------------------+
| | Network Magic |
|---->| |
| Addr Discr: MainNet vs TestNet |
+----------------------------------+
- CRC: Cyclic Redundancy Check; sort of checksum, a bit (pun intended) more reliable.
- ASD: Address Spending Data; Some data that are bound to an address. It's
an extensible object with payload which identifies one of the three elements:
- A Public Key (Payload is thereby a PublicKey)
- A Script (Payload is thereby a script and its version)
- A Redeem Key (Payload is thereby a RedeemPublicKey)
- Derivation Path: Note that there's no derivation path for Redeem or Script addresses!
- ChaChaPoly: Authenticated Encryption with Associated Data (see RFC 7539). We use it as a way to cipher the derivation path using a passphrase (the root public key).
Example 1: Yoroi Address - Byron Mainnet
Let's take an arbitrary Yoroi base58-encoded address of the Byron MainNet:
Ae2tdPwUPEZFRbyhz3cpfC2CumGzNkFBN2L42rcUc2yjQpEkxDbkPodpMAi
Now, this address could be represented as a raw bytestring by decoding from base58:
0X82 0XD8 0X18 0X58 0X21 0X83 0X58 0X1C 0XBA 0X97 0X0A 0XD3 0X66 0X54
0XD8 0XDD 0X8F 0X74 0X27 0X4B 0X73 0X34 0X52 0XDD 0XEA 0XB9 0XA6 0X2A
0X39 0X77 0X46 0XBE 0X3C 0X42 0XCC 0XDD 0XA0 0X00 0X1A 0X90 0X26 0XDA
0X5B
In this representation, bytes are in a structured format called CBOR. Some bytes are actually tags which carry a particular semantic, and some are values. We can re-shuffle the bytes as follows to make things a bit clearer:
82 # array (2)
D8 18 # tag (24) [CBOR Metadata]
58 21 (8358...A000) # bytes (33) [Address Payload]
1A 9026DA5B # unsigned(2418465371) [CRC]
So, a Byron address is basically formed of two top-level elements:
- A tagged bytestring: tag `24` means that the bytes represent another CBOR-encoded structure.
- A CRC of the inner tagged bytestring.
Now, if we also interpret the inner bytestring as a CBOR structure, we obtain:
83 # array(3)
58 1C (BA97...CCDD) # bytes(28) [Address Root]
A0 # map(0) [Address Attributes]
00 # unsigned(0) [Address Type]
An address type of `0` refers to a spending address for which the address root
contains a hash of a public spending key. This address payload has no attributes
because the initial address is a Yoroi address on MainNet, which follows a BIP-44
derivation scheme and therefore does not require any attributes.
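We can verify this breakdown mechanically. The sketch below (Python, standard library only) re-implements base58 decoding, slices the fixed CBOR layout of this particular address, and checks the trailing checksum, assuming the standard CRC-32 algorithm (the one implemented by `zlib`):

```python
import zlib

# Bitcoin-style base58 alphabet, also used by Byron addresses.
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + B58.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))  # each leading '1' is a zero byte
    return b"\x00" * pad + body

addr = "Ae2tdPwUPEZFRbyhz3cpfC2CumGzNkFBN2L42rcUc2yjQpEkxDbkPodpMAi"
raw = base58_decode(addr)

# Fixed layout for this address (matching the CBOR breakdown above):
#   82 D8 18 58 21 <33-byte payload> 1A <4-byte CRC>
assert raw[:5] == bytes.fromhex("82d8185821")
assert raw[38] == 0x1A
payload = raw[5:38]  # the inner, tagged bytestring
stored_crc = int.from_bytes(raw[39:43], "big")

print(stored_crc)                          # 2418465371, as in the breakdown
assert zlib.crc32(payload) == stored_crc   # checksum matches
```

The printed value is exactly the `unsigned(2418465371)` CRC shown in the annotated CBOR dump above.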
Example 2: Daedalus Address - Byron TestNet
Let's take an arbitrary Daedalus base58-encoded address of a Byron TestNet:
37btjrVyb4KEB2STADSsj3MYSAdj52X5FrFWpw2r7Wmj2GDzXjFRsHWuZqrw7zSkwopv8Ci3VWeg6bisU9dgJxW5hb2MZYeduNKbQJrqz3zVBsu9nT
Now, this address could be represented as a raw bytestring by decoding from base58:
0X82 0XD8 0X18 0X58 0X49 0X83 0X58 0X1C 0X9C 0X70 0X85 0X38 0XA7 0X63 0XFF 0X27
0X16 0X99 0X87 0XA4 0X89 0XE3 0X50 0X57 0XEF 0X3C 0XD3 0X77 0X8C 0X05 0XE9 0X6F
0X7B 0XA9 0X45 0X0E 0XA2 0X01 0X58 0X1E 0X58 0X1C 0X9C 0X17 0X22 0XF7 0XE4 0X46
0X68 0X92 0X56 0XE1 0XA3 0X02 0X60 0XF3 0X51 0X0D 0X55 0X8D 0X99 0XD0 0XC3 0X91
0XF2 0XBA 0X89 0XCB 0X69 0X77 0X02 0X45 0X1A 0X41 0X70 0XCB 0X17 0X00 0X1A 0X69
0X79 0X12 0X6C
In this representation, bytes are in a structured format called CBOR. Some bytes are actually tags which carry a particular semantic, and some are values. We can re-shuffle the bytes as follows to make things a bit clearer:
82 # array(2)
D8 18 # tag(24) [CBOR Metadata]
58 49 (8358...1700) # bytes(73) [Address Payload]
1A 6979126C # unsigned(1769542252) [CRC]
So, a Byron address is basically formed of two top-level elements:
- A tagged bytestring: tag `24` means that the bytes represent another CBOR-encoded structure.
- A CRC of the inner tagged bytestring.
Now, if we also interpret the inner bytestring as a CBOR structure, we obtain:
83 # array(3)
58 1C (9C70...450E) # bytes(28) [Address Root]
A2 # map(2) [Address Attributes]
01 # unsigned(1) [Derivation Path Attribute]
58 1E (581C...6977) # bytes(30) [Derivation Path Value]
02 # unsigned(2) [Network Magic Attribute]
45 (1A4170CB17) # bytes(5) [Network Magic Value]
00 # unsigned(0) [Address Type]
An address type of `0` refers to a spending address for which the address root
contains a hash of a public spending key. In addition, we can see that this
address has 2 attributes, identified by the tags `01` for the derivation path
and `02` for the network magic. The derivation path is an encrypted bytestring
which holds two derivation indexes for the account and address paths.
Coin Selection
Reminder on UTxO
Cardano is a UTxO-based crypto-currency. UTxO stands for "Unspent Transaction Output". In essence, UTxOs are very similar to bank notes, and we treat them as such. Hence, it is not possible to spend a single UTxO in more than one transaction, and, in our current implementation, it's not possible to split a single UTxO between multiple recipients of a transaction. Note that we also use the term coin when referring to a UTxO.
Contrary to a classic accounting model, there's no such thing as spending part of a UTxO; one has to wait for a transaction to be included in a block before spending the remaining change. Similarly, one can't spend a $20 bill at two different shops at the same time, even if it is enough to cover both purchases: one has to wait for change from the first transaction before making the second one. Having many available coins allows for greater concurrency. The more coins are available, the more transactions can be made at the same time.
What is Coin Selection
For every transaction, the wallet backend performs a coin selection. There are many approaches to this problem, and many solutions. Moreover, there are a few constraints to deal with when performing coin selection:
-
A transaction has a limited size defined by the protocol. Adding inputs or outputs to a transaction increases its size. Therefore, there's a practical maximum number of coins that can be selected.
-
As soon as coins from a given address are spent, that address is exposed to the public. From this follow the known privacy and security issues of address re-use.
-
In order to maintain good privacy, change outputs shouldn't be easily discernible from the actual outputs.
-
Because of the first point, a wallet needs to make sure it doesn't needlessly fragment available UTxOs by creating many small change outputs. Otherwise, in the long run, the wallet becomes unusable.
-
Coin selection needs to remain fairly efficient to minimize fees as much as possible.
In Cardano, the coin selection works mainly in two steps:
- Coins are selected randomly to cover a given amount, generating a change output that is nearly as big as the original output
- The inputs and outputs are slightly adjusted to cover fees
Note that in case the random coin selection fails (because we couldn't reach the target amount without exceeding the maximum transaction size), we fall back to selecting UTxOs largest-first. If this fails again, it means that the wallet's UTxO set is too fragmented and smaller transactions have to be sent. In practice, this shouldn't happen much, as the wallet tries to get rid of dust (by selecting randomly: if one has many small UTxOs, a random selection has a bigger chance of containing many small UTxOs as well).
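The two-step strategy with its largest-first fallback can be sketched as follows. This is a deliberately simplified model, not the actual cardano-wallet implementation: `max_inputs` stands in for the transaction size limit, and fee adjustment is left out.

```python
import random

def select_coins(utxos, target, max_inputs):
    """Randomly pick coins until `target` is covered; fall back to
    largest-first when the random pass needs too many inputs."""
    shuffled = random.sample(utxos, len(utxos))
    for candidates in (shuffled, sorted(utxos, reverse=True)):
        selected, total = [], 0
        for coin in candidates:
            if total >= target:
                break
            selected.append(coin)
            total += coin
        if total >= target and len(selected) <= max_inputs:
            # The change output mirrors the payment; fees are adjusted later.
            return selected, total - target
    raise ValueError("UTxO set too fragmented; send smaller transactions")

inputs, change = select_coins([10, 3, 7, 2, 50], target=55, max_inputs=3)
print(sum(inputs) - change)  # 55
```

Whichever pass succeeds, the selected inputs minus the change always equal the requested target.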
Multi-Output Transactions
Cardano only allows a given UTxO to cover at most one transaction output. As a result, when the number of transaction outputs is greater than the number of available UTxOs, the API returns a `UTXONotEnoughFragmented` error.
To make sure the source account has a sufficient level of UTxO fragmentation (i.e. number of UTxOs), the state of the UTxOs can be monitored via the following wallet endpoints:
The number of wallet UTxOs should be no less than the number of transaction outputs, and the sum of all UTxOs should be enough to cover the total transaction amount, including fees.
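Those two preconditions amount to a simple check (illustrative only; the real validation happens server-side in the API):

```python
def sufficiently_fragmented(utxos, payments, fee):
    """A UTxO covers at most one output, so we need at least as many
    UTxOs as outputs, and enough total value to pay amounts + fee."""
    return len(utxos) >= len(payments) and sum(utxos) >= sum(payments) + fee

print(sufficiently_fragmented([10, 5], [4, 4], fee=1))  # True
print(sufficiently_fragmented([10], [4, 4], fee=1))     # False: too few UTxOs
```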
Annexes
Hierarchical Deterministic (HD) Wallets
In Cardano, hierarchical deterministic (HD) wallets are similar to those described in BIP-0032.
Deterministic wallets and elliptic curve mathematics permit schemes where one can calculate a wallet's public keys without revealing its private keys. This permits, for example, a webshop business to let its webserver generate fresh addresses (public key hashes) for each order or for each customer, without giving the webserver access to the corresponding private keys (which are required for spending the received funds). However, deterministic wallets typically consist of a single "chain" of keypairs. The fact that there is only one chain means that sharing a wallet happens on an all-or-nothing basis.
However, in some cases one only wants some (public) keys to be shared and recoverable. In the example of a webshop, the webserver does not need access to all public keys of the merchant's wallet; only to those addresses which are used to receive customer's payments, and not for example the change addresses that are generated when the merchant spends money. Hierarchical deterministic wallets allow such selective sharing by supporting multiple keypair chains, derived from a single root.
Notation
Conceptually, HD derivation can be seen as a tree with many branches, where keys live at each node and leaf, such that an entire sub-tree can be recovered from a parent key alone (and similarly, the whole tree can be recovered from the root master key).
For deriving new keys from parent keys, we use the same approach as defined in BIP32-Ed25519: Hierarchical Deterministic Keys over a Non-linear Keyspace.
We note \(CKD_{priv}\) the derivation of a private child key from a parent private key such that:
$$ CKD_{priv}((k^P, c^P), i) → (k_i, c_i) $$
We note \(CKD_{pub}\) the derivation of a public child key from a parent public key such that:
$$ i < 2^{31}: CKD_{pub}((A^P, c^P), i) → (A_i, c_i) $$
Note
This is only possible for so-called "soft" derivation indexes, smaller than \(2^{31}\).
We note \(N\) the public key corresponding to a private key such that:
$$ N(k, c) → (A, c) $$
To shorten notation, we will borrow the same notation as described in BIP-0032 and write
\(CKD_{priv}(CKD_{priv}(CKD_{priv}(m,3H),2),5)\) as m/3H/2/5
.
Equivalently for public keys, we write
\(CKD_{pub}(CKD_{pub}(CKD_{pub}(M,3),2),5)\) as M/3/2/5
.
Path Levels
Cardano wallet defines the following path levels:
$$ m / purpose_H / coin\_type_H / account_H / account\_type / address\_index $$
- \(purpose_H = 1852_H\)
- \(coin\_type_H = 1815_H\)
- \(account_H = 0_H\)
- \(account\_type\) is one of:
  - `0` to indicate an address on the external chain, that is, an address that is meant to be public and communicated to other users;
  - `1` to indicate an address on the internal chain, that is, an address that is meant for change, generated by wallet software;
  - `2` to indicate a reward account address, used for delegation.
- \(address\_index\) is either:
  - `0` if the \(account\_type\) is `2`;
  - anything between \(0\) and \(2^{31} - 1\) otherwise.
In the Byron era, sequential wallets as used in Yoroi (a.k.a Icarus wallets) used \(purpose = 44_H\), following the BIP-44 standard. The Shelley era however introduces an extension to BIP-44 and therefore uses a different \(purpose\) number.
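As a concrete illustration, the sketch below converts such a path into raw derivation indexes (hardened indexes are offset by \(2^{31}\)) and checks the level constraints listed above. The helper names are hypothetical, not cardano-wallet API:

```python
HARDENED = 1 << 31  # hardened indexes live in [2^31, 2^32 - 1]

def parse_path(path):
    """Turn a path like "m/1852'/1815'/0'/0/0" into raw indexes."""
    indexes = []
    for seg in path.split("/")[1:]:          # skip the leading "m"
        hardened = seg.endswith(("'", "H"))
        value = int(seg.rstrip("'H"))
        indexes.append(value + HARDENED if hardened else value)
    return indexes

def valid_shelley_path(ixs):
    purpose, coin_type, account, account_type, address_index = ixs
    ok = purpose == 1852 + HARDENED and coin_type == 1815 + HARDENED
    ok = ok and account >= HARDENED and account_type in (0, 1, 2)
    if account_type == 2:                    # reward account: index must be 0
        return ok and address_index == 0
    return ok and 0 <= address_index < HARDENED

path = parse_path("m/1852'/1815'/0'/0/0")
print(path)                      # [2147485500, 2147485463, 2147483648, 0, 0]
print(valid_shelley_path(path))  # True
```

Note how `1852'` becomes `1852 + 2^31 = 2147485500`, the raw index actually fed to the derivation function.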
Account Discovery
What follows is taken from the "Account Discovery" section of BIP-0044.
When the master seed is imported from an external source the software should start to discover the accounts in the following manner:
1. derive the first account's node (index = 0)
2. derive the external chain node of this account
3. scan addresses of the external chain; respect the gap limit described below
4. if no transactions are found on the external chain, stop discovery
5. if there are some transactions, increase the account index and go to step 1
For the algorithm to be successful, software should disallow the creation of new accounts if the previous one has no transaction history.
Please note that the algorithm works with the transaction history, not account balances, so you can have an account with 0 total coins and the algorithm will still continue with discovery.
Address gap limit
Address gap limit is currently set to 20. If the software hits 20 unused addresses in a row, it expects there are no used addresses beyond this point and stops searching the address chain. We scan just the external chains, because internal chains receive only coins that come from the associated external chains.
Wallet software should warn when the user is trying to exceed the gap limit on an external chain by generating a new address.
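The five steps above, combined with the gap limit, can be sketched as follows. This is a toy model: `address_used` is a stand-in predicate for scanning the chain, not a real cardano-wallet function.

```python
GAP_LIMIT = 20

def discover_accounts(address_used, max_accounts=100):
    """BIP-44 account discovery: scan external chains account by
    account, stopping at the first account with no history."""
    accounts = []
    for account_ix in range(max_accounts):    # step 1: next account node
        gap, found, address_ix = 0, [], 0
        while gap < GAP_LIMIT:                # steps 2-3: scan the chain
            if address_used(account_ix, address_ix):
                found.append(address_ix)
                gap = 0                       # reset the unused-address run
            else:
                gap += 1
            address_ix += 1
        if not found:                         # step 4: no history, stop
            break
        accounts.append((account_ix, found))  # step 5: continue discovery
    return accounts

# Toy chain: account 0 used addresses {0, 2}; account 1 used {0}.
used = {(0, 0), (0, 2), (1, 0)}
print(discover_accounts(lambda a, i: (a, i) in used))
# [(0, [0, 2]), (1, [0])]
```

Account 2 has no transaction history, so discovery stops there, exactly as the invariants require.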
Transaction Lifecycle
This needs to be updated to include transaction resubmission by the wallet.
States
stateDiagram
    [*] --> pending: request
    [*] --> in_ledger: discover
    pending --> in_ledger: discover
    pending --> [*]: forget
    in_ledger --> pending: rollback
State transition: forget
Importantly, a transaction, once sent, cannot be cancelled. One can only request to forget it in order to try spending (concurrently) the same UTxO in another transaction. But the transaction may still show up later in a block and therefore appear in the wallet.
State transition: discover
Discovering a transaction happens regardless of whether that transaction is present as pending. Actually, only outgoing transactions go through the pending state. Incoming ones, or outgoing ones that have been forgotten, may be discovered directly in blocks.
Submission
sequenceDiagram
    participant Wallet Client
    participant Wallet Server
    participant Network
    Wallet Client ->>+ Wallet Server: POST payment request
    Wallet Server ->> Wallet Server: Select available coins
    Wallet Server ->> Wallet Server: Construct transaction
    Wallet Server ->> Wallet Server: Sign transaction
    Wallet Server -->> Wallet Client: 403 Forbidden
    Wallet Server ->>+ Network: Submit transaction
    Network ->> Network: Validate transaction structure
    Network -->> Wallet Server: (ERR) Malformed transaction
    Wallet Server -->> Wallet Client: 500 Internal Server Error
    Network ->>- Wallet Server: Accepted
    Wallet Server ->>- Wallet Client: 202 Accepted
    Network ->> Network: Broadcast transaction to peers
    loop Every block
        Network ->> Network: Insert or discard transaction(s)
        Network ->> Wallet Server: Yield new block
        Wallet Server ->> Wallet Server: Discover transaction(s)
    end
Unspent Transaction Outputs (UTxO)
In a UTxO-based blockchain, a transaction is a binding between inputs and outputs.
input #1 >---* *---> output #1
\ /
input #2 >---*--------*
/ \
input #3 >---* *---> output #2
In a standard payment, outputs are a combination of:
- A value
- A reference (a.k.a an address): a "proof" of ownership telling who owns the output.
input #1 >---* *---> (123, DdzFFzCqr...)
\ /
input #2 >---*--------*
/ \
input #3 >---* *---> (456, hswdEoQCp...)
About address encoding
We usually represent addresses as encoded text strings. An address has a structure and a binary representation that is defined by the underlying blockchain. Yet, since they are often used in user-facing interfaces, addresses are usually encoded in a human-friendly format to be easily shared between users.
An address does not uniquely identify an output. As a matter of fact, multiple transactions could send funds to the same output address! We can however uniquely identify an output by:
- Its host transaction id
- Its index within that transaction
This combination is also called an input. Said differently, inputs are outputs of previous transactions.
*---------------- tx#42 ----------------------*
| |
(tx#14, ix#2) >-----------------* *--> (123, DdzFFqr...)--- (tx#42, ix#0)
| \ / |
(tx#41, ix#0) >-----------------*-----* |
| / \ |
(tx#04, ix#0) >----------------* *--> (456, hswdQCp...)--- (tx#42, ix#1)
| |
*---------------------------------------------*
Therefore, new transactions spend outputs of previous transactions, and produce new outputs that can be consumed by future transactions. An unspent transaction output (i.e. not used as an input of any transaction) is called a UTxO (as in Unspent Tx Output) and represents an amount of money owned by a participant.
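The tx#42 example above can be modelled with a minimal UTxO set keyed by (transaction id, output index). This is an illustrative sketch; the amounts attached to the pre-existing outputs are made up:

```python
# A minimal UTxO-set model: outputs are keyed by (tx_id, output_index),
# and applying a transaction consumes some keys and creates new ones.
utxo = {
    ("tx#14", 2): (40, "DdzFFqr..."),   # amounts are illustrative
    ("tx#41", 0): (500, "DdzFFqr..."),
    ("tx#04", 0): (39, "hswdQCp..."),
}

def apply_tx(utxo, tx_id, inputs, outputs):
    """Spend `inputs` (which must be unspent) and add the new outputs."""
    for ref in inputs:
        del utxo[ref]                   # KeyError if already spent
    for ix, out in enumerate(outputs):
        utxo[(tx_id, ix)] = out

apply_tx(utxo, "tx#42",
         inputs=[("tx#14", 2), ("tx#41", 0), ("tx#04", 0)],
         outputs=[(123, "DdzFFqr..."), (456, "hswdQCp...")])

print(sorted(utxo))  # [('tx#42', 0), ('tx#42', 1)]
```

After applying tx#42, only its two new outputs remain unspent, ready to serve as inputs of future transactions.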
FAQ
Where does the money come from? How do I make the first transaction?
When bootstrapping a blockchain, some initial funds can be distributed among an initial set of stakeholders. This is usually the result of an Initial Coin Offering or an agreement between multiple parties. In practice it means that the genesis block of a blockchain may already contain some UTxOs belonging to various stakeholders.
Besides, core nodes running the protocol and producing blocks are allowed to insert into every block minted (resp. mined) a special transaction, called a coinbase transaction. This transaction has no inputs but follows specific rules fixed by the protocol, and is used as an incentive to encourage participants to engage in the protocol.
What is the difference between an address and a public key?
In a very simple system that only supported payment transactions, public keys could be substituted for addresses. In practice, addresses are meant to hold some extra pieces of information that are useful for other aspects of the protocol. For instance, in Cardano in the Shelley era, addresses may also contain:
-
A network discriminant tag, to distinguish addresses between a testNet and the MainNet and avoid unfortunate mistakes.
-
A stake reference to take part in delegation.
Addresses may also be used to trigger smart contracts, in which case, they'll refer to a particular script rather than a public key.
In a nutshell, a public key is a piece of information that enables a stakeholder to prove ownership of a particular UTxO, whereas an address is a data structure which contains various pieces of information, for example a (reference to a) public key.
What are Cardano addresses made of?
See:
Specifications
Precise specifications for
- Parts of the wallet API
- Data structures used in the wallet
Additional information:
Wallet Identifiers (WalletId)
The WalletId
is a hexadecimal string derived from the wallet's mnemonic.
It is used by the cardano-wallet
server to refer to specific wallets.
For all wallet types, the `WalletId` is a blake2b-160 hash of certain wallet key material, detailed below per wallet type. This hash function produces a 20-byte digest, which becomes 40 hexadecimal digits.
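The digest length is easy to check with a stock blake2b implementation. The 64 zero bytes below are a dummy placeholder, not a real extended public key:

```python
import hashlib

def wallet_id(key_material: bytes) -> str:
    # WalletId = base16(blake2b-160(key material))
    return hashlib.blake2b(key_material, digest_size=20).hexdigest()

dummy_xpub = bytes(64)  # placeholder: a real xpub is public key + chain code
wid = wallet_id(dummy_xpub)
print(len(wid))  # 40 hexadecimal digits
```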
Shelley Wallets
The WalletId
is calculated the same way for shared (multi-sig) and non-shared Shelley wallets.
Therefore, each signer in a shared wallet will have a unique WalletId
, because they have a different mnemonic.
Full Wallets (the default)
The extended private key is derived from the mnemonic, and then its public key is hashed to produce the WalletId
.
$$WalletId = \mathrm{base16}(\mathrm{blake2b_{160}}(\mathrm{xpub}(rootXPrv)))$$
Single Account Wallets
These are wallets for which only an account-level XPrv or XPub is known.
$$WalletId = \mathrm{base16}(\mathrm{blake2b_{160}}(accountXPub))$$
Byron Wallets
The WalletId
comes from the extended public key of the wallet root key.
$$WalletId = \mathrm{base16}(\mathrm{blake2b_{160}}(\mathrm{xpub}(rootXPrv)))$$
Example code
This shell script will produce a Shelley WalletId
from the mnemonic words provided on stdin.
#!/usr/bin/env bash
# Derive a Shelley WalletId from mnemonic words read on stdin:
#   xargs                  - collapse the words onto a single line
#   cardano-address (x2)   - derive the root key, then its public key
#                            (with chain code)
#   bech32 | xxd           - decode the bech32 key to raw bytes
#   b2sum -l 160 | cut     - blake2b-160, keeping only the hex digest
xargs \
| cardano-address key from-recovery-phrase Shelley \
| cardano-address key public --with-chain-code \
| bech32 \
| xxd -r -p \
| b2sum -b -l 160 \
| cut -d' ' -f1
Prototypes
Specification: Light mode for cardano-wallet
11th Feb 2022
Status: DRAFT
Synopsis
This document specifies a light-mode for cardano-wallet
.
This mode aims to make synchronisation to the blockchain faster by trusting an off-chain source of aggregated blockchain data.
Light wallets often employ this strategy of using a less trusted data source; for example, the Nami wallet uses Blockfrost as a data source. The purpose of the light-mode of cardano-wallet is to make this "trust vs speed" trade-off readily available to downstream software such as Daedalus, with minimal changes to its downstream API.
In addition, the "trust versus speed" trade-off will be partially obsoleted by Mithril technology, which aims to give us "trust and speed" by providing verified ledger state data as opposed to block data. Implementing light-mode in cardano-wallet thus not only offers immediate benefits, but also partially prepares the codebase for Mithril technology.
Motivation
Background
As the Cardano blockchain grows in size, retrieving and verifying blocks from the consensus network becomes increasingly time consuming. By making a "trust versus speed" trade-off, we can significantly decrease synchronization times.
User-visible benefits
- Allow users to operate their wallet without needing to wait for the node to download and validate the entire blockchain.
- Allow users to run cardano-wallet and Daedalus on systems with significantly less than 8GB of memory and less than 10GB of disk space.
- Allow users to control where they sit on the trust vs convenience spectrum, depending upon their current circumstances.
Technical benefits
- Speed. With light-mode, we expect synchronisation times of < 1 minute for a wallet with 1,000 transactions. In contrast, synchronisation of an empty wallet currently takes ~50 minutes by connecting to a node — assuming that the node has already synced to the chain tip and built its ledger state, which itself takes hours.
- Compatibility. Light-mode is intended to preserve the API presented to downstream software such as Daedalus; only minimal changes are expected.
- Optionality. A wallet that was started in light-mode can be restarted in full mode without resynchronization, and vice versa. (MVP: no support yet for changing the mode dynamically while the wallet is running.)
Limitations
- Trust. The external data source does not provide the same level of protection against omission of transactions as the Proof-of-Stake consensus protocol.
- Privacy. In addition to the reduction of trust in the blockchain data, we now also reveal to the data source which addresses belong to the same wallet.
- Address schemes. Only Shelley wallets with sequential address discovery can be supported.
However, these limitations are shared by all existing light wallets. In fact, some light wallets only provide single-address wallets, which is an additional reduction of privacy.
Specification
Overview
The implementation of light-mode is based on an efficient query Address -> Transactions
which the blockchain data source provides. This query is useful to wallets that use sequential address discovery (Shelley wallets). These wallets work with a sequence of potential addresses addr_0
, addr_1
, addr_2
, …. For each integer n
, there is a deterministic procedure for generating the corresponding address addr_n
. The wallet generates the first few addresses in this sequence and uses the above query to retrieve the corresponding transactions. When no transactions are found for g
(the "address gap") consecutive addresses, the procedure stops, as the wallet never puts addresses with higher indices on the chain. In other words, this iterative loop yields all transactions that belong to the wallet.
This procedure can be visualized in a flow chart:
```mermaid
flowchart TB
    start([begin]) --> init[j := 0\ng := 0\ngap := 20]
    init --> query[query addr_j]
    query --> tx{transactions?}
    tx -- no --> gapAdd[g := g+1]
    gapAdd --> g{g > gap?}
    tx -- yes --> gapReset[g := 0]
    gapReset --> add[j := j+1]
    g -- yes ----> e([end])
    g -- no --> add
    add --> query
```
This procedure is implemented and demonstrated in the light-mode-test
prototype.
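The discovery loop above can also be sketched in shell; `tx_count` here is a hypothetical stand-in for the data source query `Address -> Transactions`, and the indices that carry transactions are made up for illustration:

```shell
# Mock of the data source query: in this made-up example, only the
# addresses with indices 0, 1 and 3 have on-chain transactions.
tx_count() {
  case "$1" in
    0|1|3) echo 2 ;;
    *)     echo 0 ;;
  esac
}

gap=20   # stop after this many consecutive addresses without transactions
j=0; g=0; found=""
while [ "$g" -le "$gap" ]; do
  if [ "$(tx_count "$j")" -gt 0 ]; then
    found="$found$j "   # index j belongs to the wallet
    g=0
  else
    g=$((g + 1))
  fi
  j=$((j + 1))
done
echo "discovered indices: $found"
```

The loop gives up only after seeing more than `gap` consecutive empty addresses, mirroring the `g > gap?` check in the flow chart.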
Functionality
Network topology
In full-node mode, cardano-wallet
connects to a local cardano-node
:
```mermaid
flowchart TB
    subgraph local system
        cardano-launcher -. spawns .-> cardano-wallet
        cardano-launcher -. spawns .-> cardano-node
        cardano-wallet -- ChainSync --> cardano-node
        cardano-wallet -- LocalTxSubmission --> cardano-node
        cardano-wallet -- LocalStateQuery --> cardano-node
    end
    cardano-node -- syncs --> blockchain
    subgraph internet
        blockchain
    end
```
In light-mode, cardano-wallet
instead connects to the data source (e.g. cardano-graphql
or Blockfrost) and a transaction submission service through the internet:

```mermaid
flowchart TB
    cardano-launcher -. spawns .-> cardano-wallet
    cardano-wallet -- query\nAddressTransactions --> cardano-graphql
    cardano-wallet -- TxSubmission --> cardano-submit-api
    subgraph local system
        cardano-launcher
        cardano-wallet
    end
    subgraph internet
        cardano-graphql --> n1[cardano-node]
        cardano-submit-api --> n2[cardano-node]
        n1 -- syncs --> blockchain
        n2 -- syncs --> blockchain
    end
```
Command line
The cardano-wallet
executable will feature a new flag --light
which indicates that the executable is to be started in light-mode:
$ cardano-wallet serve --light CRED
- `CRED` specifies how to connect to the less trusted blockchain data source. (MVP: Use Blockfrost; `CRED` is a filepath to the secret token.)
- The `--light` argument and the `--node-socket` argument are mutually exclusive with each other.
REST API
When the executable is started in light-mode:
- The endpoints in the hierarchy `/v2/byron-wallets/*` MUST return an error. Because Byron wallets use random derivation indices to generate addresses, they are significantly more challenging to support in a light wallet.
- (MVP: The endpoints in the hierarchy `/v2/shared-wallets/*` MAY return an error.)
- The `GET /v2/network/information` endpoint MAY not return `sync_progress` as a percentage quantity from [0..100].
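For illustration, here is one way a client might read `sync_progress` from a saved response. The JSON sample below is a hypothetical, abridged approximation of the endpoint's output, and the `sed` extraction is only a sketch (a real client should use a JSON parser such as `jq`):

```shell
# Hypothetical, abridged response from GET /v2/network/information.
response='{"sync_progress":{"status":"syncing","progress":{"quantity":87.5,"unit":"percent"}}}'

# Extract the percentage quantity (sketch only; prefer a JSON parser).
echo "$response" | sed -n 's/.*"quantity":\([0-9.]*\).*/\1/p'
```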
Internal
See also the light-mode-test
prototype!
- Collect required queries on the data source in a data type `LightLayer`. In this way, we can replace the data source with a mock data source and run property tests on the chain-following logic. (MVP: Use Blockfrost for demonstration purposes.)
- Provide a second implementation of `NetworkLayer` using a `LightLayer`. See the prototype for details! Important changes:
    - `watchTip` requires polling (2-second interval).
    - `currentNodeEra` may require hard-coding known eras.
    - `performanceEstimate` requires copying from ledger code.
    - `syncProgress` should be ignored for the MVP.
    - The node-mode `NetworkLayer` does a lot of caching; we should avoid that for now and instead add a more uniform caching system later through a function `addCaches :: NetworkLayer -> IO NetworkLayer`.
    - `postTx` submits a transaction to a web service instead of using the `LocalTxSubmission` protocol with cardano-node. This web service can be different from the blockchain data source. An example of such a service is `cardano-submit-api`.
- New data type:

    ```haskell
    data BlockSummary m = BlockSummary
        { from  :: ChainPoint
        , to    :: ChainPoint
        , query :: Address -> m [Transaction]
        }
    ```

    - Consumed by `applyBlocks`. Adapt `discoverTransactions` from the prototype.
    - Produced by `NetworkLayer`. Adapt `lightSync` from the prototype.
    - Idea: Use an additional GADT wrapper to specialize `applyBlocks` to sequential address states:

        ```haskell
        data BlockData m s where
            List    :: NonEmpty Block -> BlockData m s
            Summary :: BlockSummary m -> BlockData m (SeqState n k)
        ```

        By using this type as the argument to `applyBlocks`, we are guaranteed that `BlockSummary` is only used in conjunction with sequential state. Caveat: It would be better if we could avoid parametrizing the `NetworkLayer` type with the address discovery state `s`.
Quality assurance
Benchmarks
- Compare restoration times for an empty wallet in node-mode and in light-mode. The existing nightly wallet restoration benchmark can be updated for this purpose.
- The number of requests that we make to the data source needs to be monitored and kept in check. The number of requests should scale linearly with the number of transactions and the number of addresses belonging to the wallet. If the data source supports aggregated queries, we can make a single request with a larger body.
Testing
- Mock implementation of `LightLayer`.
    - We probably won't get good value for the work of implementing a `LightLayer` that interacts with the local cluster. One could look into implementing the light-mode `NetworkLayer` in terms of the node-mode `NetworkLayer`, though.
- Property tests for `applyBlocks` using `BlockSummary`:
    - Implement a conversion function `summarize :: NonEmpty Block -> BlockSummary Identity`.
    - Create a list of blocks, and apply `applyBlocks` both to the list directly and to the result of `summarize` — the results should be equal.
External
- Cardano-launcher will have to be modified so that it does not launch the `cardano-node` process when cardano-wallet is started in light-mode.
- Provision of the data source and of the transaction submission service is outside the scope of light-mode; we assume these services will be operated and paid for by someone. (Any light wallet assumes this.) In the MVP, we use Blockfrost for demonstration purposes.
Design alternatives
Byron wallets
Byron wallets cannot be supported by the light-mode as described above, as it is based on an efficient query Address -> Transactions
which Byron wallets cannot take advantage of. For these wallets, alternative designs would have to be explored. Possibilities include:
- A third-party Byron address discovery service which takes a Byron root XPub and sends back a list of addresses belonging to that wallet. However, that involves an additional reduction in trust.
- Existing Byron wallets can be partially supported in light-mode as long as the database of addresses generated/discovered so far is intact, as we can still retrieve the balance and all transactions. However, in light-mode, these wallets cannot be restored and cannot receive funds from newly generated addresses. Conversion to Shelley wallets is recommended.
Contributor Manual
Information about how to contribute code to the wallet.
This includes:
-
Information about our code ("what"). This includes information about
- coding style,
- libraries, and
- tools
that we use, and which are not specific to our problem domain.
-
Information about our processes ("how"), such as how we
- do continuous integration, or
- make releases.
-
Notes about various topics that might be useful.
What – Code and Languages
Building
Prerequisites
cardano-wallet
needs nix
to build.
| | Supported version | Dependency? |
|---|---|---|
| [Nix][] | >= 2.5.1 | Required |
Follow the instructions on the [Nix][] page to install and configure Nix.
Make sure that you have set up the binary cache for Haskell.nix, [according to the instructions][haskell-nix-cache], or you will wait a long time building multiple copies of GHC. If Nix builds GHC, this is an indication that the cache has not been set up correctly.
Commands
> nix build

The resulting executable will appear at `./result/bin/cardano-wallet`.
Unless you have local changes in your git repo, Nix will download the build from a nix cache rather than building locally. At the moment the cache is available only for developers at CF.
You may also run the executable directly with:
> nix run . -- <cardano wallet arguments>
or more comfortably, for pre-configured networks (mainnet
, testnet
, ...):
> CARDANO_NODE_SOCKET_PATH=../cardano-node/node.socket
> nix run .#mainnet/wallet -- <optional additional cardano wallet arguments>
You can run the integration tests with:
> nix build -L .#ci.x86_64-linux.tests.run.integration
Cross-compiling with Nix
To build the wallet for Windows, from Linux:
> nix build .#ci.artifacts.win64.release
Building straight from GitHub
The following command will build the master
branch, with the resulting executable appearing at ./result/bin/cardano-wallet
. To build another branch, add /<branch name, tag, or commit hash>
(see Nix Manual: Flake References for syntax). As before, if the target ref has already been built by Hydra, then it will be fetched from cache rather than built locally.
> nix build github:cardano-foundation/cardano-wallet
> ./result/bin/cardano-wallet version
v2022-01-18 (git revision: ce772ff33623e2a522dcdc15b1d360815ac1336a)
Cabal+Nix build
Use the Cabal+Nix build if you want to develop with incremental builds, but also have it automatically download cached builds of all dependencies.
If you run nix develop
, it will start a
development environment
for cardano-wallet
. This will contain:
- `cabal-install` and a GHC configured with a package database containing all Haskell package dependencies;
- system library dependencies;
- a Hoogle index and the `hoogle` command for searching documentation;
- development tools such as `haskell-language-server`, `hlint`, `stylish-haskell`, and `weeder`;
- the `sqlite3` command;
- the Shelley node backend `cardano-node` and `cardano-cli`; and
- other Adrestia utility programs such as `cardano-address` and `bech32`.
Inside this shell you can use `cabal build` and `ghci` for development.
For example, you might start an incremental build of the integration test suite with:
ghcid -c "cabal repl test:integration"
and run the test suite with:
cabal run test:integration
Profiling build with cached dependencies
Use nix develop .#profiled
to get a shell where Haskell
dependencies are built with profiling enabled. You won't need to
rebuild all of the dependencies because they can be downloaded from
the Hydra cache.
> nix develop .#profiled
[nix-shell:~/iohk/cardano-wallet]$ cabal build \
--enable-tests --enable-benchmarks \
--enable-profiling \
all
Haskell-Language-Server
The haskell-language-server provides an IDE for developers with some typical features:
- Jump to definition.
- Find references.
- Documentation on hover.
- etc.
Prerequisites
The following must be installed:
We do not require a special version per project, so these executables can be installed system-wide.
Additionally, the following tools are provided by the cardano-wallet nix development shell:
We require a particular version of each per-project, so it's recommended to use the nix development environment to ensure you have the correct version.
In these instructions we enter a nix development environment using direnv allow
rather than nix develop
or nix-shell
(see Editor Setup).
Setup
haskell-language-server requires some priming to work with cardano-wallet:
# Symlink hie.yaml to hie-direnv.yaml, which is the project configuration for haskell-language-server
ln -sf hie-direnv.yaml hie.yaml
# Build and cache the nix development environment
direnv allow
# Generate a build plan
cabal configure --enable-tests --enable-benchmarks -O0
# Build entire project
cabal build all
This will prime haskell-language-server to work with all modules of the project (tests, benchmarks, etc.) and be fully featured. Without these steps, haskell-language-server may fail to:
- Find auto-generated modules (such as Paths_* modules).
- Navigate across projects (jump-to-definition).
- Provide documentation on hover.
Testing
To test the haskell-language-server, use the following commands (these executables should be on your `$PATH` because you executed `direnv allow` previously, or have entered a nix development environment):
hie-bios check lib/wallet/src/Cardano/Wallet.hs
haskell-language-server lib/wallet/exe/cardano-wallet.hs
Occasionally hie-bios
will fail with a Segmentation Fault
. In these cases just run hie-bios
again.
Note that these commands will only test a couple of files. To test the whole project, see Troubleshooting.
Editor Setup
With a working installation of haskell-language-server, we can integrate with our IDE of choice. See Configuring Your Editor.
IMPORTANT: you need to ensure that your editor invokes the same version of haskell-language-server that we have configured above. A simple way to do that is to launch your editor from within a nix development environment (e.g. nix develop --command 'vim'
), or, more practically, to configure your editor with direnv
support. Here are some examples:
Troubleshooting
Helpful resources:
- Configuring haskell-language-server
- hie-bios BIOS Configuration
- Troubleshooting haskell-language-server
The Testing commands only tested a subset of the files in the project. To troubleshoot configuration issues, it's important to determine the source of the error.
If you know the source of the error, you can reproduce it on the command line with haskell-language-server <the file>
. There are debug flags which might be useful (see --help
).
If you do not know the source of the error, you can test every file in the project with:
# Provide list_sources function.
source "$(dirname "$0")/../cabal-lib.sh"
# Get every file in the project.
mapfile -t srcs < <(list_sources)
# Execute haskell-language-server on every file in the project.
# Note that this command can take upwards of an hour.
haskell-language-server "${srcs[@]}"
Once you can reproduce the error, look through the Worked Examples below and see if you can resolve the issue. If you cannot, raise a GitHub issue (external) or a JIRA issue (internal) with reproduction steps and tag @sevanspowell or @rvl.
Worked Examples
NOTE: hie-bios BIOS Configuration is helpful background reading.
Source Filtering
In the past haskell-language-server failed when processing the lib/wallet/extra/Plutus/FlatInteger.hs
file, as it was technically a Haskell file in the repository, but wasn't intended to be compiled with the project.
To fix this issue, we excluded the lib/wallet/extra
folder from the project sources.
The bash function list_sources
in scripts/cabal-lib.sh
is responsible for determining the source files haskell-language-server sees. Modify this function to further remove any other files you wish to exclude:
list_sources() {
# Exclude lib/wallet/extra. Those files are Plutus scripts intended
# to be serialised for use in the tests. They are not intended to be built
# with the project.
# Exclude prototypes dir because it's a different project.
git ls-files 'lib/**/*.hs' | grep -v Main.hs | grep -v prototypes/ | grep -v lib/wallet/extra
}
GHCI Flags
There were previously issues debugging overlapping/ambiguous instances as the error message printed by haskell-language-server did not contain enough information. We rectified this by adding -fprint-potential-instances
to the GHCI flags of the haskell-language-server BIOS.
The bash function ghci_flags
in scripts/cabal-lib.sh
is responsible for providing the GHCI flags haskell-language-server uses. Modify this file with any other GHC flags you may require:
ghci_flags() {
cat <<EOF
-XOverloadedStrings
-XNoImplicitPrelude
-XTypeApplications
-XDataKinds
-fwarn-unused-binds
-fwarn-unused-imports
-fwarn-orphans
-fprint-potential-instances
-Wno-missing-home-modules
EOF
...
}
Coding Standards
Foreword
This file contains agreed-upon coding standards and best practices, as well as proposals for changes or new standards. Proposals are prefixed with [PROPOSAL]
and are voted on by the Adrestia team through polls on Slack. To be accepted, a practice needs a majority of votes plus one, with neutral votes counting as positive votes.
Each proposal should start with a section justifying the standard with rational arguments. When it makes sense, we should also provide examples of good and bad practices to make the point clearer.
Summary
- Coding Standards
- Foreword
- Summary
- Code Formatting
- Haskell Practices
- Favor
newtype
and tagged type over type-aliases - Language extensions are specified on top of each module
- HLint is used for hints and general code style
- We use explicit imports by default, and favor qualified imports for ambiguous functions
- All modules begin with a helpful documentation comment
- Prefer named constants over magic numbers
- Avoid wildcards when pattern-matching on sum types
- Prefer pattern-matching to equality testing on sum types.
- [PROPOSED] Don't spit back malformed values in errors from user inputs.
- Favor
- QuickCheck
- See your property fail
- Define properties as separate functions
- Provide readable counter-examples on failures
- Tag interesting cases in complex properties
- Write properties to assert the validity of complex generators (and shrinkers)
- Use
checkCoverage
to measure coverage requirements - Avoid
liftIO
in monadic properties
- Testing
Code Formatting
Editor Configuration via .editorconfig
A .editorconfig
(see https://editorconfig.org/) at the root of the project specifies, for various filetypes:
- Line length
- Indentation style (spaces vs tabs)
- Encoding
This file should be parsed and enforced by any contributor's editor.
Why
These are the kinds of details we don't want to be fighting over constantly. `.editorconfig` files are widely used, easily written, and supported by any decent editor out there. Agreeing on such rules prevents our version control system from going crazy because people use different encodings or indentation styles. It makes the overall code more consistent.
Limit line length to 80 characters
Source code, including comments, should not exceed 80 characters in length, unless in exceptional situations.
Why
- To maximize readability. Human eyes find it much harder to scan long lines. For people with imperfect vision, it can be even harder. Narrower code can be read quickly without having to scan from side to side. Although monitors have grown in width and resolution in recent years, human eyes haven't changed.
- To easily view multiple sources side-by-side. This is particularly important when working on a laptop. With a readable font size of around 11 pt, 80 characters is about half the horizontal distance across a laptop monitor. Trying to fit 90 or 100 characters into the same width requires a smaller font, raising the level of discomfort for people with poorer vision.
- To avoid horizontal scrolling when viewing source code. When reading most source code, we already have to scroll vertically. Horizontal scrolling means that readers have to move their viewpoint in two dimensions rather than one. This requires more effort and can cause strain for people reading your code.
See Examples and Exceptions
Examples
If you find yourself exceeding 80 characters, there are several strategies you can use.
Strategy 1: Wrap code
By inserting carriage returns in the right place, we can often reveal the underlying structure of an expression. Haskell allows you to break up long expressions so that they occur over multiple lines. For example:
-- BAD
instance Bi Block where
encode block = encodeListLen 3 <> encode (blockHeader block) <> encode (blockBody block) <> encode (blockExtraData block)
-- GOOD
instance Bi Block where
encode block = encodeListLen 3
<> encode (blockHeader block)
<> encode (blockBody block)
<> encode (blockExtraData block)
Another example of wrapping:
-- BAD:
describe "Lemma 2.6 - Properties of balance" $ do
it "2.6.1) dom u ⋂ dom v ==> balance (u ⋃ v) = balance u + balance v" (checkCoverage prop_2_6_1)
it "2.6.2) balance (ins⋪ u) = balance u - balance (ins⊲ u)" (checkCoverage prop_2_6_2)
-- GOOD:
describe "Lemma 2.6 - Properties of balance" $ do
it "2.6.1) dom u ⋂ dom v ==> balance (u ⋃ v) = balance u + balance v"
(checkCoverage prop_2_6_1)
it "2.6.2) balance (ins⋪ u) = balance u - balance (ins⊲ u)"
(checkCoverage prop_2_6_2)
Strategy 2: Place comments on their own line instead of attempting to align them vertically
-- BAD
mkMagicalBlock
:: MagicProtocolId -- A unique key specifying a magic protocol.
-> MagicType -- The type of magic used in this block signing.
-> MagicalKey -- The magical key used in this block signing.
-> Maybe Delegation.MagicalCertificate -- A magical certificate of delegation, in case the specified 'MagicalKey' does not have the right to sign this block.
-> Block
-- GOOD
mkMagicalBlock
:: MagicProtocolId
-- ^ A unique key specifying a magic protocol.
-> MagicType
-- ^ The type of magic used in this block signing.
-> MagicalKey
-- ^ The magical key used in this block signing.
-> Maybe Delegation.MagicalCertificate
-- ^ A magical certificate of delegation, in case the specified
-- 'MagicalKey' does not have the right to sign this block.
-> Block
Strategy 3: Break up long string literals
Haskell provides convenient support for multi-line string literals:
-- BAD
errorAccountFundsCompletelyExhausted = "The funds in this account have been completely spent, and its balance is now zero. Either add more funds to this account or use a different account for this transaction."
-- GOOD
errorAccountFundsCompletelyExhausted =
"The funds in this account have been completely spent, and its balance \
\is now zero. Either add more funds to this account or use a different \
\account for this transaction."
-- BAD:
spec = do
scenario "Only this account's balance can be retrieved while standing on one leg on the side of an extremely tall mountain, and breathing thin air with only very limited amounts of oxygen." $ do
-- GOOD:
spec = do
scenario
"Only this account's balance can be retrieved while standing on one \
\leg on the side of an extremely tall mountain, and breathing thin \
\air with only very limited amounts of oxygen." $ do
Strategy 4: Reduce nesting
If your function contains so many levels of nesting that it's hard to keep things within 80 characters (even with careful use of wrapping), consider breaking your function up into smaller parts.
Exceptions
Sometimes, it's impossible to adhere to this rule.
Here is a list of allowed exceptions:
Exception 1: URLs in comments
According to the standard, URLs can be extremely long. In some situations, we need to place URLs in source code comments. If a URL is longer than 80 characters, then place it on its own line:
-- | For more information about this implementation, see:
-- https://an.exceptionally.long.url/7919ce329e804fc0bc1fa2df8a141fd3d996c484cf7a49e79f14d7bd974acadd
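Lines that break the 80-character rule can be spotted mechanically; the following is a small sketch using `awk` on a throwaway sample file (the file name and contents are illustrative only):

```shell
# Create a sample file whose second line is 100 characters long.
printf 'short line\n%s\n' "$(printf 'x%.0s' $(seq 1 100))" > /tmp/sample.hs

# Report each line longer than 80 characters, with its length.
awk 'length($0) > 80 { print FNR ": " length($0) " characters" }' /tmp/sample.hs
```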
Use only a single blank line between top-level definitions
A source code file should not contain multiple consecutive blank lines.
Use only a single blank line between the following top-level definitions:
- function definitions
- data type definitions
- class definitions
- instance definitions
Why
- Consistency with other Haskell code.
- Excessive vertical space increases the amount of unnecessary scrolling required to read a module.
See Examples
-- BAD
newtype Foo = Foo Integer
    deriving (Eq, Show)



newtype Bar = Bar Integer
    deriving (Eq, Show)

-- GOOD
newtype Foo = Foo Integer
    deriving (Eq, Show)

newtype Bar = Bar Integer
    deriving (Eq, Show)
-- BAD
instance FromCBOR Block where
    fromCBOR = Block <$> decodeBlock



newtype BlockHeader = BlockHeader
    { getBlockHeader :: Primitive.BlockHeader
    }
    deriving Eq

-- GOOD
instance FromCBOR Block where
    fromCBOR = Block <$> decodeBlock

newtype BlockHeader = BlockHeader
    { getBlockHeader :: Primitive.BlockHeader
    }
    deriving Eq
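The rule can also be checked mechanically; here is a sketch with `awk` that flags runs of two or more consecutive blank lines (the sample file is illustrative only):

```shell
# Create a sample file with two consecutive blank lines between definitions.
printf 'newtype Foo = Foo Integer\n\n\nnewtype Bar = Bar Integer\n' > /tmp/sample.hs

# Flag every run of two or more consecutive blank lines.
awk 'NF == 0 { n++; next } { if (n > 1) print "blank run of " n " lines before line " FNR; n = 0 }' /tmp/sample.hs
```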
Avoid Variable-Length Indentation
Variables, arguments, fields, and tokens in general shouldn't be aligned based on the length of a previous token. Rather, tokens should go on a new line and be indented one extra level when it makes sense, or not be aligned at all.
Why
Haskellers have a tendency to over-align everything vertically for the sake of readability. In practice, this is much more a habit than a real gain in readability. Aligning content based on a function name, variable name, or record field tends to create unnecessarily long diffs and needless conflicts in version control systems when a change adds an argument, variable, or parameter. Favoring new lines and fixed-length alignment plays nicer with version control.
See Examples
-- GOOD
data AddressPool address = AddressPool
{ _addresses :: !(Map address Word32)
, _gap :: !AddressPoolGap
}
-- GOOD
data AddressPool address = AddressPool
{ _addresses
:: !(Map address Word32)
, _gap
:: !AddressPoolGap
}
-- GOOD
deriveAccountPrivateKey
:: PassPhrase
-> EncryptedSecretKey
-> Word32
-> Maybe EncryptedSecretKey
deriveAccountPrivateKey passPhrase masterEncPrvKey accountIx =
-- BAD
myFunction :: Word64 -> Maybe String
myFunction w = let res = Wrap w in
case someOp res of
Left _err -> Nothing
Right () -> Just coin
-- BAD
myFunction :: Int
-> Maybe ByteString
-> Set Word32
-> Update DB (Either [Word32]
(Map Word32 ([String], Set ByteString)))
-- BAD
data MyRecord = MyRecord
{ _myRecordLongNameField :: !String
, _myRecordShort :: ![Int]
}
Stylish-Haskell is used to format grouped imports & language pragmas
Contributors' editors should pick up and enforce the rules defined by the .stylish-haskell.yaml
configuration file at the root of the project. Also, in order to maximize readability, imports
should be grouped into three groups, separated by a blank line.
- Prelude import
- Explicit imports
- Qualified imports
Why
It is rather annoying and time-consuming to align import lines or statements as we code, and it's much simpler to leave that to our editor. Yet, we do want to enforce some common formatting so that everyone gets to be aligned (pun intended).
We can use Stylish-Haskell with various sets of rules; yet, the same argument from 'Avoid Variable-Length Indentation' applies when it comes to automatic formatting. Imports are a real pain with git and Haskell when they are vertically aligned based on the imported module's name.
See examples
-- GOOD
import Prelude
import Cardano.Wallet.Binary
( txId )
import Data.Set
( Set )
import Data.Traversable
( for )
import qualified Data.Map as Map
import qualified Data.Set as Set
-- BAD
import Cardano.Wallet.Binary
( txId )
import Data.Set
( Set )
import Prelude
import Data.Traversable
( for )
import qualified Data.Map as Map
import qualified Data.Set as Set
-- BAD
import Prelude
import Cardano.Wallet.Binary
( txId )
import qualified Data.Set as Set
import Data.Set
( Set )
import qualified Data.Map as Map
import Data.Traversable
( for )
Below is a proposal for the initial set of rules:
columns: 80 # Should match .editorconfig
steps:
- imports:
align: none
empty_list_align: inherit
list_align: new_line
list_padding: 4
long_list_align: new_line_multiline
pad_module_names: false
separate_lists: true
space_surround: true
- language_pragmas:
align: false
remove_redundant: true
style: vertical
See example
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
module Main where
import Control.Applicative
( (<|>) )
import Control.Arrow
( first )
import Control.Concurrent.MVar
( modifyMVar_, newMVar, putMVar, readMVar, takeMVar )
import Crypto.Hash.Algorithms
( Blake2b_224, Blake2b_256, SHA3_256, SHA512 (..) )
import Lens.Micro
( at, (%~), (&), (.~), (^.) )
import Network.HTTP.Client
( Manager
, defaultRequest
, httpLbs
, path
, port
, responseBody
, responseStatus
)
import qualified Codec.CBOR.Decoding as CBOR
import qualified Codec.CBOR.Encoding as CBOR
import qualified Codec.CBOR.Read as CBOR
import qualified Codec.CBOR.Write as CBOR
import qualified Crypto.Cipher.ChaChaPoly1305 as Poly
Haskell Practices
Favor newtype
and tagged type over type-aliases
Instead of writing type aliases, one should favor wrapping values in a newtype when it makes sense, or wrapping them in a tagged type with a phantom type to convey some extra meaning while still preserving type safety. By using newtypes, we actually extend our program's vocabulary and increase its robustness.
Why
Type-aliases convey a false sense of type-safety. While they usually make things a bit better for the reader, they have a tendency to spread through the code-base, transforming those sweet helpful spots into traps. We can't define proper instances on type aliases, and we treat them as different types whereas, behind the scenes, they are just the same one.
See examples
-- GOOD
newtype HardenedIndex = HardenedIndex { getHardenedIndex :: Word32 }
deriveAccount :: HardenedIndex -> XPrv -> XPrv
-- GOOD
data Scheme = Seq | Rnd
newtype Key (scheme :: Scheme) = Key { getKey :: XPrv }
deriveAccount :: Word32 -> Key 'Seq -> Key 'Seq
-- GOOD
newtype Tagged (tag :: Symbol) = Tagged { getTagged :: String }
startNode :: Tagged "nodeId" -> IO ()
-- BAD
type HardenedIndex = Word32
deriveAccount :: HardenedIndex -> XPrv -> XPrv
Language extensions are specified on top of each module
Haskell language extensions are specified at the top of each module.
Why
Having a lot of default extensions enabled across the whole project can sometimes lead to cryptic errors where GHC interprets things differently because of the enabled extensions. Yet, it's sometimes hard to tell which extensions are active by simply looking at the modules themselves.
Also, being explicit about the extensions used by a module can help speed up compilation of simple modules that don't need to pull in a lot of extra complexity.
See examples
-- GOOD
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE DerivingStrategies #-}
module Cardano.Wallet where
-- BAD
default-extensions:
- DataKinds
- GeneralizedNewtypeDeriving
- DerivingStrategies
HLint is used for hints and general code style
Contributors' editors should pick up and enforce the rules defined by the .hlint.yaml configuration file at the root of the project. Files should be committed without warnings or errors. When it makes sense, a developer may ignore lints at a function site using a proper annotation:
Why
Linters are common practice in software development and help maintain consistency across a large codebase with many developers. HLint is the de-facto linter in Haskell and comes with many rules and features that are, most of the time, quite relevant and convey good practices agreed upon and shared across the team.
e.g.
{-# ANN decodeBlock ("HLint: ignore Use <$>" :: String) #-}
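Recent HLint releases also accept comment-based pragmas, which avoid ANN annotations (and their compile-time cost with some GHC setups). A sketch, where decodeBlock is a stand-in name, not the wallet's actual function:

```haskell
-- Comment-based HLint pragma: suppresses the "Use <$>" hint for decodeBlock only.
-- GHC treats this as an ordinary comment, so it has no compile-time cost.
{- HLINT ignore decodeBlock "Use <$>" -}

-- A function HLint would normally flag with "Use <$>".
decodeBlock :: Maybe Int -> Maybe Int
decodeBlock mx = do
    x <- mx
    return (x + 1)
```

Either form works; the comment pragma has the advantage of not requiring the ANN extension machinery.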
As a start, we'll use the following built-in rules from hlint
with the following configuration, and refine this as we move forward:
- modules:
# Enforce some common qualified imports aliases across the codebase
- {name: [Data.Aeson, Data.Aeson.Types], as: Aeson}
- {name: [Data.ByteArray], as: BA}
- {name: [Data.ByteString.Base16], as: B16}
- {name: [Data.ByteString.Char8], as: B8}
- {name: [Data.ByteString.Lazy], as: BL}
- {name: [Data.ByteString], as: BS}
- {name: [Data.Foldable], as: F}
- {name: [Data.List.NonEmpty], as: NE}
- {name: [Data.List], as: L}
- {name: [Data.Map.Strict], as: Map}
- {name: [Data.Sequence], as: Seq}
- {name: [Data.Set, Data.HashSet], as: Set}
- {name: [Data.Text, Data.Text.Encoding], as: T}
- {name: [Data.Vector], as: V}
# Ignore some built-in rules
- ignore: {name: "Reduce duplication"} # This is a decision left to developers and reviewers
- ignore: {name: "Redundant bracket"} # Not everyone knows the precedence of every operator in Haskell. Brackets help readability.
- ignore: {name: "Redundant do"} # Just an annoying hlint built-in; GHC may remove a redundant do if it wants
We use explicit imports by default, and favor qualified imports for ambiguous functions
Apart from the chosen prelude, there should be no implicit imports. Instead, every function or class used from a given module should be listed explicitly. In cases where a function name is ambiguous or requires context, a qualified import should be used instead (this is mainly the case for modules coming from containers, bytestring and aeson).
Why
Imports can be a great source of pain in Haskell. When dealing with some foreign code (and every code becomes quite hostile after a while, even if we originally wrote it), it can be hard to understand where functions and abstractions are pulled from. On the other hand, fully qualified imports can become verbose and a real impediment to readability.
See examples
-- GOOD
import Prelude
import Control.DeepSeq
( NFData (..) )
import Data.ByteString
( ByteString )
import Data.Map.Strict
( Map )
import Data.Aeson
( FromJSON (..), ToJSON (..) )
-- GOOD
import qualified Data.Map.Strict as Map
import qualified Data.ByteString as BS
isSubsetOf :: UTxO -> UTxO -> Bool
isSubsetOf (UTxO a) (UTxO b) =
a `Map.isSubmapOf` b
(magic, filetype, version) =
( BS.take 8 bytes
, BS.take 4 $ BS.drop 8 bytes
, BS.take 4 $ BS.drop 12 bytes
)
-- BAD
import Options.Applicative
-- BAD
import qualified Data.Aeson as Aeson
instance Aeson.FromJSON MyType where
-- ...
-- BAD
import Data.Map.Strict
( filter )
import Data.Set
( member )
restrictedTo :: UTxO -> Set TxOut -> UTxO
restrictedTo (UTxO utxo) outs =
UTxO $ filter (`member` outs) utxo
All modules begin with a helpful documentation comment
These comments should answer the question: why? For example, they might:
- Explain the relation to other modules
- Explain the relation to business functionality
- Provide some other good-to-know information
We should keep an eye out for out-of-date comments, for instance when creating and reviewing PRs.
Why
Even if individual functions are well-documented, it can be difficult to grasp how it all fits together.
In the legacy code-base, it was common to have multiple functions with the same or similar names in different modules. Try searching for applyBlocks or switchToFork. What is the difference between DB.Spec.Update.switchToFork and DB.AcidState.switchToFork? Having a comment at the top of each module would be an easy-to-follow rule to better document this. It is also very appropriate for generated haddock docs.
If we re-design a module and forget to update the comment, the comment is no longer useful.
See examples
-- |
-- Copyright: © 2018-2019 IOHK
-- License: MIT
--
-- This module contains the core primitive of a Wallet. This is roughly a
-- Haskell translation of the [Formal Specification for a Cardano Wallet](https://github.com/cardano-foundation/cardano-wallet/blob/master/specifications/wallet/formal-specification-for-a-cardano-wallet.pdf)
--
-- It doesn't contain any particular business-logic code, but defines a few
-- primitive operations on Wallet core types as well.
(https://github.com/cardano-foundation/cardano-wallet/blob/d3cca01f66f0abe93012343dab093a2551b6cbea/src/Cardano/Wallet/Primitive.hs#L12-L20)
-- |
-- Copyright: © 2018-2019 IOHK
-- License: MIT
--
-- Provides the wallet layer functions that are used by API layer and uses both
-- "Cardano.DBLayer" and "Cardano.NetworkLayer" to realize its role as being
-- intermediary between the three.
(https://cardano-foundation.github.io/cardano-wallet/haddock/cardano-wallet-2.0.0/Cardano-WalletLayer.html)
Prefer named constants over magic numbers
Why
A magic number (or magic value) is a value that appears in source code without an accompanying explanation, which could (preferably) be replaced with a named constant.
The use of an unnamed magic number often obscures the developer's intent in choosing that number, increases opportunities for subtle errors and makes it more difficult for the program to be adapted and extended in the future.
Replacing all significant magic numbers with named constants makes programs easier to read, understand and maintain. Named constants can also be reused in multiple places, making it obvious that the value is supposed to be the same everywhere it's used.
See examples
BAD
humanReadableCharIsValid :: Char -> Bool
humanReadableCharIsValid c = c >= chr 33 && c <= chr 126
bech32CharSet :: Set Char
bech32CharSet =
Set.filter (not . isUpper) $
Set.fromList [chr 33 .. chr 126]
`Set.union` (Set.singleton '1')
`Set.union` (Set.fromList "qpzry9x8gf2tvdw0s3jn54khce6mua7l")
instance Arbitrary HumanReadableChar where
arbitrary = HumanReadableChar <$>
choose (chr 33, chr 126)
GOOD
-- | The lower bound of the set of characters permitted to appear within the
-- human-readable part of a Bech32 string.
humanReadableCharMinBound :: Char
humanReadableCharMinBound = chr 33
-- | The upper bound of the set of characters permitted to appear within the
-- human-readable part of a Bech32 string.
humanReadableCharMaxBound :: Char
humanReadableCharMaxBound = chr 126
-- | The separator character. This character appears immediately after the
-- human-readable part and before the data part.
separatorChar :: Char
separatorChar = '1'
-- | A list of all characters that are permitted to appear within the data part
-- of a Bech32 string.
dataCharList :: String
dataCharList = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
humanReadableCharIsValid :: Char -> Bool
humanReadableCharIsValid c =
c >= humanReadableCharMinBound &&
c <= humanReadableCharMaxBound
bech32CharSet :: Set Char
bech32CharSet =
Set.filter (not . isUpper) $
Set.fromList [humanReadableCharMinBound .. humanReadableCharMaxBound]
`Set.union` (Set.singleton separatorChar)
`Set.union` (Set.fromList dataCharList)
instance Arbitrary HumanReadableChar where
arbitrary = HumanReadableChar <$>
choose (humanReadableCharMinBound, humanReadableCharMaxBound)
Avoid wildcards when pattern-matching on sum types
When pattern-matching on sum types or finite structures, we should avoid the use of the wildcard _ as much as possible, and instead favor explicit handling of all branches. This way, we get compiler errors when extending the underlying ADT, and avoid silently (and probably incorrectly) handling some of the new branches.
Why
When pattern-matching on sum types it is tempting to handle a few similar cases using a wildcard
_
. However, this often leads to undesirable behavior when adding new branches to an ADT. Compilers won't trigger any warnings and, as developers, we might miss some necessary logic updates in existing pattern matches.
See examples
-- GOOD
isPositive = \case
InLedger -> True
Pending -> False
Invalidated -> False
-- BAD
isPositive = \case
InLedger -> True
_ -> False
-- BAD
handleErr = \case
ErrWalletNotFound -> {- ... -}
_ -> ErrUnknown
Prefer pattern-matching to equality testing on sum types
For expressions that evaluate differently depending on a value of a sum type, prefer pattern matching over equality testing for values of that type.
Why
When conditional evaluation depends on the value of a sum type, it's tempting to use a test for equality or inequality to branch on a particular value.
However, if someone adds a new constructor to the sum type later on, we'd ideally like the compiler to remind us to check all locations that inspect values of this type, especially where conditional evaluation is involved.
Using an equality test is non-ideal because the compiler won't necessarily fail if a new constructor is added to the underlying sum type, whereas it will always fail if a pattern match becomes incomplete.
See examples
data SortOrder = Ascending | Descending
deriving Eq
-- BAD
sortWithOrder' :: Ord a => SortOrder -> [a] -> [a]
sortWithOrder' order = f . sort
where
f = if order == Ascending then id else reverse
-- GOOD
sortWithOrder :: Ord a => SortOrder -> [a] -> [a]
sortWithOrder order = f . sort
where
f = case order of
Ascending -> id
Descending -> reverse
[PROPOSED] Don't spit back malformed values in errors from user inputs.
When failing to parse user input, the error message should not contain the malformed input. Instead, it should contain hints or examples of well-formed values expected by the parser. It is acceptable to show a raw input value if it is known to be within acceptable boundaries (e.g. when parsing a Word32 into a more refined type, there is little chance that the Word32 will be inadequate to display).
Why
Spitting back what the user has just entered is generally not very helpful. Users can easily replay what they've entered and see for themselves. More importantly, an input that didn't parse successfully may be arbitrarily long or improper for display; since it failed to parse, we actually have little control over, or knowledge about, it.
See examples
-- BAD
err =
"Invalid value: " <> show v <> ". Please provide a valid value."
-- GOOD
err =
"EpochNo value is out of bounds (" <>
show (minBound @Word31) <>
", " <>
show (maxBound @Word31) <>
")."
-- GOOD
err =
"Unable to decode FeePolicy: \
\Linear equation not in expected format: a + bx + cy \
\where 'a', 'b', and 'c' are numbers"
QuickCheck
See your property fail
This is a general practice in TDD (Test Driven Development), but it is even more important in property-based testing. You want to see how your property fails and whether, as a developer, you have enough information to understand the reason for the failure and debug it.
Why
It is really easy to write all sorts of properties which, once they fail, give close to no detail about why they failed. Yet, as with any test, one wants to understand what happened. It is therefore important to see properties fail at least once, to check whether the level of detail is sufficient, and how effective the shrinkers are.
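A minimal sketch of this practice, using a hypothetical property (not from the wallet code-base) with a deliberate bug:

```haskell
import Test.QuickCheck

-- Deliberately wrong property: claims that 'reverse' is the identity.
-- Running it fails with a small, shrunk counter-example such as [0,1],
-- plus the extra context attached via 'counterexample'.
prop_reverseIsIdentity :: [Int] -> Property
prop_reverseIsIdentity xs =
    counterexample ("input was: " <> show xs) $
        reverse xs === xs

main :: IO ()
main = quickCheck prop_reverseIsIdentity
```

Seeing the shrunk counter-example once confirms both that the failure message is readable and that shrinking produces minimal inputs.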
Define properties as separate functions
It is often tempting to write properties inline with hspec and other combinators instead of having them as separate functions. However, we recommend writing properties as separate functions, named with a prop_ prefix to clearly identify them.
Why
It makes for more readable test files where the set of properties can be easily identified by looking at the top-level exported spec. But more importantly, it allows for re-using the property with some regression test cases coming from past failures. Having a separate function makes it easy to simply apply counter examples yielded by QuickCheck as arguments!
See examples
-- GOOD
describe "selectCoinsForMigration properties" $ do
it "Total input UTxO value >= sum of selection change coins" $
property $ withMaxSuccess 1000 prop_inputsGreaterThanOutputs
describe "selectCoinsForMigration regressions" $ do
it "regression #1" $ do
let feeOpts = FeeOptions
{ dustThreshold = Coin 9
, estimateFee = \s -> Fee
$ fromIntegral
$ 5 * (length (inputs s) + length (outputs s))
}
let batchSize = 1
let utxo = UTxO $ Map.fromList
[ ( TxIn { inputId = Hash "|\243^\SUBg\242\231\&1\213\203", inputIx = 2 }
, TxOut { address = Address "ADDR03", coin = Coin 2 }
)
]
property $ prop_inputsGreaterThanOutputs feeOpts batchSize utxo
-- | Total input UTxO value >= sum of selection change coins
prop_inputsGreaterThanOutputs
:: FeeOptions
-> Word8
-> UTxO
-> Property
prop_inputsGreaterThanOutputs feeOpts batchSize utxo = {- ... -}
-- BAD
it "Eventually converge for decreasing functions" $ do
property $ \coinselOpts -> do
let batchSize = idealBatchSize coinselOpts
label (show batchSize) True
Provide readable counter-examples on failures
Use counterexample to display human-readable counter-examples when a test fails; in particular for data types which have a Buildable instance and are typically hard to read through their standard Show instance. For monadic properties, this can be used via monitor.
Why
Some property-based tests use complex combinations of inputs that can be hard to decipher when printed to the console using only the stock Show instance. On the other hand, we want to keep using the stock Show instance in order to easily copy-paste failing cases and turn them into regression tests. QuickCheck, however, provides a good set of tools to display counter-examples on failures to ease debugging.
See examples
property (Bech32.decode corruptedString `shouldSatisfy` isLeft)
    & counterexample (unlines
        [ "index of char #1: " <> show index
        , "index of char #2: " <> show (index + 1)
        , "         char #1: " <> show char1
        , "         char #2: " <> show char2
        , " original string: " <> show originalString
        , "corrupted string: " <> show corruptedString
        ])
property (bs' === Just expected)
    & counterexample (unlines
        [ "k = " ++ show k
        , "Local chain: " ++ showChain localChain
        , "Node chain:  " ++ showChain nodeChain
        , "Intersects:  " ++ maybe "-" showSlot isect
        , "Expected:    " ++ showBlockHeaders expected
        , "Actual:      " ++ maybe "-" showBlockHeaders bs'
        ])
Tag interesting cases in complex properties
QuickCheck provides good tooling for labelling (see label) and classifying (see classify) inputs or results of a property. These should be used in properties dealing with several classes of values.
Why
It is quite common for properties to deal with different classes of values, and for us developers to get a false sense of coverage. QuickCheck's default generators are typically skewed towards certain edge values to favor bug finding, but this is sometimes counter-intuitive. For example, when testing with lists or maps, it often happens that most of the test cases actually run on empty values. In order to make sure that interesting test cases are still covered, it is necessary to instrument properties so that they can measure how often certain cases appear in a particular property.
See Examples
prop_sync :: S -> Property
prop_sync s0 = monadicIO $ do
{- ... -}
monitor (label (intersectionHitRate consumer))
monitor (classify (initialChainLength (const (== 1))) "started with an empty chain")
monitor (classify (initialChainLength (\k -> (> k))) "started with more than k blocks")
monitor (classify addMoreThanK "advanced more than k blocks")
monitor (classify rollbackK "rolled back full k")
monitor (classify (switchChain (<)) "switched to a longer chain")
monitor (classify (switchChain (>)) "switched to a shorter chain")
monitor (classify (switchChain (const (== 0))) "rewinded without switch")
monitor (classify (recoveredFromGenesis s) "recovered from genesis")
monitor (classify (startedFromScratch c0Cps) "started from scratch")
{- ... -}
-- Syncs with mock node
-- 64.709% started from scratch
-- 55.969% advanced more than k blocks
-- 53.260% started with an empty chain
-- 41.094% started with more than k blocks
-- 10.421% switched to a shorter chain
-- 7.195% switched to a longer chain
-- 6.773% rewinded without switch
-- 0.880% rolled back full k
--
-- 57.516% Intersection hit rate GREAT (75% - 100%)
-- 32.183% Intersection hit rate GOOD (50% - 75%)
-- 10.292% Intersection hit rate POOR (10% - 50%)
-- 0.009% Intersection hit rate BAD (0% - 10%)
prop_rollbackPools db pairs = monadicIO $ do
{- ... -}
monitor $ classify (any (> sl) beforeRollback) "something to roll back"
monitor $ classify (all (<= sl) beforeRollback) "nothing to roll back"
{- ... -}
-- Rollback of stake pool production
-- 57% nothing to roll back
-- 43% something to roll back
prop_accuracy r = withMaxSuccess 1000 $ monadicIO $ do
{- ... -}
monitor $ label $ accuracy dust balance balance'
where
accuracy :: Coin -> Natural -> Natural -> String
accuracy (Coin dust) sup real
| a >= 1.0 =
"PERFECT (== 100%)"
| a > 0.99 || (sup - real) < fromIntegral dust =
"OKAY (> 99%)"
| otherwise =
"MEDIOCRE (<= 99%)"
where
a = double real / double sup
-- Accuracy of selectCoinsForMigration
-- dust=1%
-- +++ OK, passed 1000 tests (100.0% PERFECT (== 100%)).
-- dust=5%
-- +++ OK, passed 1000 tests (100.0% PERFECT (== 100%)).
-- dust=10%
-- +++ OK, passed 1000 tests:
-- 99.8% PERFECT (== 100%)
-- 0.2% OKAY (> 99%)
-- dust=25%
-- +++ OK, passed 1000 tests:
-- 99.6% PERFECT (== 100%)
-- 0.4% OKAY (> 99%)
-- dust=50%
-- +++ OK, passed 1000 tests:
-- 98.8% PERFECT (== 100%)
-- 1.2% OKAY (> 99%)
Write properties to assert the validity of complex generators (and shrinkers)
Arbitrary generators, and in particular complex ones, should be tested independently to make sure they yield correct values. This also includes shrinkers associated with the generator which can often break some invariants enforced by the generator itself.
Why
Generators and shrinkers are at the heart of property-based testing. Writing properties using clunky generators will lead to poor or wrong results. Above all, it may take a significant amount of time to debug failures caused by an invalid generator, so it's best to start by verifying that a given generator is correct. Often enough, generators are obvious, but when they are slightly more engineered, testing them is a must.
See Examples
-- | Checks that generated mock node test cases are valid
prop_MockNodeGenerator :: S -> Property
prop_MockNodeGenerator (S n0 ops _ _) =
prop_continuous .&&. prop_uniqueIds
where
prop_continuous :: Property
prop_continuous =
conjoin (follow <$> scanl (flip applyNodeOp) n0 (concat ops))
prop_uniqueIds :: Property
prop_uniqueIds =
length (nub bids) === length bids
& counterexample ("Non-unique ID: " ++ show bids)
where
bids = concat [map mockBlockId bs | NodeAddBlocks bs <- concat ops]
prop_nonSingletonRangeGenerator :: Property
prop_nonSingletonRangeGenerator = property $ \(nsr :: NonSingletonRange Int) ->
    isValidNonSingleton nsr .&&. all isValidNonSingleton (shrink nsr)
  where
    isValidNonSingleton (NonSingletonRange r) =
        rangeIsValid r && not (rangeIsSingleton r)
Use checkCoverage to measure coverage requirements
Using label or classify instruments QuickCheck to gather metrics about a particular property and print the results to the console. However, it is also possible to enforce that some collected values stay above a certain threshold using checkCoverage. When used, QuickCheck will run the property as many times as necessary until a particular coverage requirement is satisfied, with a certain confidence.
Why
Labelling and classifying is good but, in an evolving code-base where generators are sometimes shared between multiple properties, it is possible for someone to accidentally make a generator worse for an existing property without noticing. Therefore, by enforcing clear coverage requirements with checkCoverage, one can make a property fail if the coverage drops below an acceptable threshold. For example, a property can measure the proportion of empty lists its generator yields and require that at least 50% of all generated lists are non-empty.
See examples
prop_rangeIsValid :: Property
prop_rangeIsValid = property $ \(r :: Range Integer) ->
    rangeIsValid r .&&. all rangeIsValid (shrink r)
        & cover 10 (rangeIsFinite r) "finite range"
        & checkCoverage
spec :: Spec
spec = do
describe "Coin selection properties : shuffle" $ do
it "every non-empty list can be shuffled, ultimately" $
checkCoverageWith lowerConfidence prop_shuffleCanShuffle
it "shuffle is non-deterministic" $
checkCoverageWith lowerConfidence prop_shuffleNotDeterministic
it "sort (shuffled xs) == sort xs" $
checkCoverageWith lowerConfidence prop_shufflePreserveElements
where
lowerConfidence :: Confidence
lowerConfidence = Confidence (10^(6 :: Integer)) 0.75
Avoid liftIO in monadic properties
When running monadic properties in IO, it is often necessary to lift a particular IO action. Unfortunately, the PropertyM monad in which monadic properties are defined has a MonadIO instance, so using liftIO is tempting. However, one should use run in order to lift operations into the property monad.
Why
This is very important if the property also contains calls to monitor, label, counterexample and so forth. Using liftIO breaks the abstraction boundary of the property monad, which makes reporting with these combinators ineffective. Using run, however, correctly inserts monadic operations and preserves the reporting and measures done during the property.
See examples
-- GOOD
setup wid meta = run $ do
cleanDB db
unsafeRunExceptT $ createWallet db wid cp0 meta mempty
unsafeRunExceptT $ putTxHistory db wid txs0
prop wid point = do
run $ unsafeRunExceptT $ rollbackTo db wid point
txs <- run $ readTxHistory db wid Descending wholeRange Nothing
monitor $ counterexample $ "\nTx history after rollback: \n" <> fmt txs
{- ... -}
-- BAD
prop_createWalletTwice db (key@(PrimaryKey wid), cp, meta) =
monadicIO (setup >> prop)
where
setup = liftIO (cleanDB db)
prop = liftIO $ do
let err = ErrWalletAlreadyExists wid
runExceptT (createWallet db key cp meta mempty) `shouldReturn` Right ()
runExceptT (createWallet db key cp meta mempty) `shouldReturn` Left err
Testing
Test files are separated and self-contained
Test files do not import other test files. Arbitrary instances are not shared across test files but are defined locally. If we observe a recurring pattern in tests (for instance, testing roundtrips), we may consider turning it into a library that tests can import.
Why
It is really easy to make testing code more complex than the code it is testing. Limiting the interaction between test modules helps keep maintainability high and the overhead low when it comes to extending, modifying, reading or comprehending tests. Also, in many cases we actually want different arbitrary generators for different test cases, so sharing instances is risky and cumbersome.
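A small sketch of what "self-contained" means in practice (the names here are hypothetical): the generator lives next to the property that needs it, wrapped in a local newtype instead of a shared orphan Arbitrary instance.

```haskell
import Test.QuickCheck

-- Local wrapper: this module wants small, non-negative sizes only.
-- Another test file is free to define its own wrapper with a different range,
-- without either file importing the other.
newtype SmallSize = SmallSize Int
    deriving Show

instance Arbitrary SmallSize where
    arbitrary = SmallSize <$> choose (0, 100)
    -- Shrinking towards zero keeps values within the generator's range.
    shrink (SmallSize n) = SmallSize <$> shrink n

prop_sizeWithinBounds :: SmallSize -> Bool
prop_sizeWithinBounds (SmallSize n) = n >= 0 && n <= 100

main :: IO ()
main = quickCheck prop_sizeWithinBounds
```

If such wrappers or roundtrip helpers start repeating across files, that is the point at which to promote them to a shared test library, as suggested above.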
Unit test files names match their corresponding module
Every module in a library has a corresponding test file, within the same folder hierarchy and sharing the same name prefix. Test files are suffixed with 'Spec' to distinguish them from their corresponding sources.
Why
It is much easier to find the test corresponding to a module if they share the same name. This also gives consistency and a clear pattern for naming tests, avoiding chaos.
See examples
src/
├── Cardano
│ ├── Environment.hs
│ └── Wallet
│ ├── Binary
│ │ └── HttpBridge.hs
│ ├── Compatibility
│ │ └── HttpBridge.hs
│ ├── Network
│ │ ├── HttpBridge
│ │ │ └── Api.hs
│ │ └── HttpBridge.hs
│ └── Transaction
│ └── HttpBridge.hs
├── Data
│ └── Packfile.hs
└── Servant
└── Extra
└── ContentTypes.hs
test/unit/
├── Cardano
│ ├── EnvironmentSpec.hs
│ └── Wallet
│ ├── Binary
│ │ └── HttpBridgeSpec.hs
│ ├── Network
│ │ ├── HttpBridge
│ │ │ └── ApiSpec.hs
│ │ └── HttpBridgeSpec.hs
│ └── Transaction
│ └── HttpBridgeSpec.hs
├── Data
│ └── PackfileSpec.hs
└── Servant
└── Extra
└── ContentTypesSpec.hs
Logging Guidelines
Logs are primarily for users of the cardano-wallet, not its developers. Such users could be:
- Engineers who will be running the cardano-wallet in production.
- Developers integrating cardano-wallet into their own software.
- Technical support staff who need to help solve end users' problems.
Users of logging will have a reasonable understanding of how the cardano-wallet works, but won't be familiar with the code. They need enough information to be able to scan the logs and see whether the application is running normally, or whether they need to take some action to fix their problem.
Logging levels
The iohk-monitoring-framework defines a number of severity levels:
Severity Level | Meaning |
---|---|
Debug | detailed information about values and decision flow |
Info | general information of events; progressing properly |
Notice | needs attention; something not progressing properly |
Warning | may develop into an error condition if allowed to continue |
Error | an unexpected event or condition occurred |
Critical | error condition causing degrade of operation |
Alert | a subsystem is no longer operating correctly, likely requires manual intervention |
Emergency | at this point, the system can never progress without additional intervention |
This design was influenced by the syslog
severity
level taxonomy.
These are probably more levels than we need. To keep things simple, we usually log with only the first five:
Severity Level | Meaning |
---|---|
Debug | Messages that contain information normally of use only when debugging a program. |
Info | Informational messages. |
Notice | Normal but significant conditions. |
Warning | A situation that may become an error. |
Error | Unexpected problem. |
Debug
Debug logging will not be enabled by default in production. Therefore, you should not log information directly relevant to production operations at this level.
However, the technical support desk may request users enable debug logging for certain components to get more detail while trying to diagnose problems in the field.
Such logging might contain:
- Information about which program decisions were made and why.
Don't debug log too much rubbish because somebody will need to read it. Don't log so much data that the program will malfunction if debug logging has been enabled.
It is useful to have debug logging of how the wallet interacts with other systems. Examples of such logs:
- Network requests made to node backend.
- SQL query logs -- log the queries but not the values.
Note from Alexander Diemand:
Another feature that might help reduce logging volume: monitoring can watch an observable and compare it to a set threshold. Once this is exceeded it can create an action: i.e. an alert message or change the severity filter of a named context or globally. So one would start up the node with logging filter set to "Warning" and only change it to "Info" when some monitored observable has exceeded its threshold
For an exchange that runs the node for ages: another monitor could also restore the severity filter to e.g. "Warning" if the observed values return into a normal behaviour range of values.
Info
Normal activity within the application. We do need to ensure that INFO logs are pertinent, and there are not pages and pages of logs per slot.
Examples:
- HTTP requests
- Wallet-related events
- A block was applied
- New wallet was created
- Wallet restoration status
- Payment was sent
- Payment was received
- Network-related events
- Sync state
- Information about the configuration, such as config file location, or how the wallet was started.
Notice
These are occasional messages about significant events for the program.
Examples:
- The start up message, including the application version.
- What TCP port the API server is listening on.
- Normal shut down.
- Creating a new database file or migrating data.
Warning
A warning situation could lead to a future error, or other undesirable behaviour. The user should be able to do something to make the warning go away.
Examples:
- NTP drift?
- Low disk space?
Error
This is a serious system problem. It should not happen under normal circumstances. Some action is required to rectify the situation. The wallet might well exit after logging an error message.
Examples:
- Unexpected error communicating with node backend.
- IO error writing database or other state.
- Bad user input which means that the action cannot complete.
- Unhandled exception.
Mapping of log levels to appearance
CLI tools
Logging from the CLI should not be cluttered with timestamps and other metadata. It should just look like normal print statements in a program. Nonetheless, using the logging framework for CLI output helps because error messages can be automatically coloured.
Severity | Format |
---|---|
Debug | Hidden unless enabled by the user |
Info | Printed normally on stdout |
Notice | Printed in bold on stdout |
Warning | Printed in bold colour on stderr prefixed by Warning: |
Error | Printed in bold red on stderr prefixed by ERROR: |
Server
Severity | Format |
---|---|
Debug | Hidden unless enabled by the user |
The rest | As per defaults for iohk-monitoring-framework |
The server will also support a log config file where the output location and format can be fully customised.
Colour
If the log output file is not a terminal, do not output ANSI colour codes. These escape codes make it difficult to analyse log files.
JSON
JSON (one object per line) is an OK format for log files, but it's pretty bad for printing to the terminal. For example, consider how difficult it would be to read through JSON messages within journalctl -u cardano-wallet.service. Therefore, log messages to stdout/stderr should be formatted as text, unless otherwise configured by the user.
Context
Some context may be applicable to log events:
- Current slot number.
- Wallet ID.
- A request ID for the API, so that requests, their handlers, and responses can be cross-referenced.
Logging Hierarchy
Log Traces can have context added, such as a new name. Different subsystems of the wallet (such as network, database, api) should use appendName to create subtraces.
Observables
The following are examples of things which should be easy and useful to log as micro-benchmarks (bracketObserveIO).
- Time taken for API to respond to request.
- Time taken to execute SQL query.
- Time taken for node backend to respond to request.
Privacy
iohk-monitoring-framework provides "sensitive" variants of log functions (e.g. logInfoS). These allow sensitive information to be logged into separate files. Users may then send their logs to the technical support desk with a reasonable assurance of privacy.
Note: The privacy guidelines are under discussion. We are considering making it simpler and only having two classifications: "Log" or "Do Not Log".
Public information
- Wallet ID.
- Dates and times that events occurred.
- Total number of addresses/transactions/UTxOs/etc.
- IP addresses of other nodes that network nodes are communicating with.
Private information
- Transaction ID.
- Address.
- Public key material.
- Quantities of Ada.
Unknown
Undecided about which category these fall into.
- Wallet name
Never log
This information should never be included in log messages (including DEBUG messages):
- Private key material.
- Mnemonic sentences.
- Passphrases.
- The values used or returned by SQL queries.
- Raw data sent to or received from the network backend.
- Raw, un-filtered API request or response bodies.
Increasing logging detail
iohk-monitoring-framework provides a facility for adjusting log levels at runtime on a component-by-component basis.
However, it's probably fine for now to change log levels by restarting cardano-wallet with different command-line options.
Structured logging tutorial
Rather than logging unstructured text, define a type for log messages of a module.
data FooMsg
= LogFooInit
| LogFooError FooError
| LogFooIncomingEvent Int
deriving (Show, Eq)
Then, for human-readable logs, use our ToText class.
import Data.Text.Class
    ( ToText (..) )
import qualified Data.Text as T

instance ToText FooMsg where
    toText msg = case msg of
        LogFooInit -> "The foo has started"
        LogFooError e -> "foo error: " <> T.pack (show e)
        LogFooIncomingEvent n -> "Incoming foo event " <> T.pack (show n)
Finally, define the metadata which the switchboard needs to route these traces.
import Cardano.BM.Data.LogItem
( LoggerName, PrivacyAnnotation (..) )
import Cardano.BM.Data.Severity
( Severity (..) )
import Cardano.BM.Data.Tracer
( DefinePrivacyAnnotation (..), DefineSeverity (..) )
-- Everything is public by default
instance DefinePrivacyAnnotation FooMsg
instance DefineSeverity FooMsg where
    defineSeverity msg = case msg of
        LogFooInit -> Debug
        LogFooError _ -> Error
        LogFooIncomingEvent _ -> Info
To use the logging in the doFoo
function:
import Cardano.BM.Trace
( Trace )
import Cardano.Wallet.Logging
( logTrace )
doFoo :: Trace IO FooMsg -> IO ()
doFoo tr = do
    logTrace tr LogFooInit
    onFooEvent $ \ev -> case ev of
        Right n -> logTrace tr $ LogFooIncomingEvent n
        Left e -> logTrace tr $ LogFooError e
To convert a Trace m Text into a Trace m FooMsg, use Cardano.Wallet.Logging.transformTextTrace, like so:
import Control.Tracer
( contramap )
import Cardano.Wallet.Logging
( transformTextTrace )
import Foo (doFoo)
mainApp :: Trace IO Text -> IO ()
mainApp tr = doFoo (transformTextTrace tr)
To convert a Trace m FooMsg
to anything else, use contramap
.
data AppMsg
= FooMsg FooMsg
| BarMsg BarMsg
| TextMsg Text
mainApp :: Trace IO AppMsg -> IO ()
mainApp tr = doFoo (contramap (fmap FooMsg) tr)
Swagger
The OpenAPI 3.0 (Swagger) specification of cardano-wallet is located at specifications/api/swagger.yaml.
Viewing online
To view the file online, use:
- http-api for the latest released version.
- ReDoc viewer for the master branch version.
Validating locally
To validate from your local git working tree, use this command. It is the same as what Buildkite runs.
openapi-spec-validator --schema 3.0.0 specifications/api/swagger.yaml
If you don't have the validator tool installed, it is available within nix-shell
.
It may also be convenient to edit the YAML while validating under the Steel Overseer file watching tool:
sos specifications/api/swagger.yaml -c 'openapi-spec-validator --schema 3.0.0 \0'
References
Specifying Exceptions with Servant and Swagger
Contents
Goal
To be able to exert fine-grained control over errors defined in Swagger API specifications.
Background
Our API is defined in Haskell with Servant, and translated into Swagger format using the servant-swagger
library.
Swagger makes it possible to specify that an endpoint can return one or more errors. For example, the following specification states that the endpoint can return a 404
(not found) error:
"responses" :
  { "200" : { "schema" : { "$ref" : "#/definitions/Location" }
            , "description" : "the matching location" }
  , "404" : { "description" : "a matching location was not found" } }
By default, Servant doesn't provide a way for API authors to manually specify errors they might wish to return. However, this might be desirable: consider the case where you'd like to perform validation based on constraints that are not conveniently expressible in the Haskell type system. In this case, you would reject input at run-time, but this would typically not be reflected in the Servant type.
Since Servant itself doesn't provide a way to manually specify errors, and since it is typical to define errors when writing a Swagger specification, servant-swagger
takes the approach of auto-generating errors when various Servant combinators appear in the API. For example, when a Capture
combinator is used, servant-swagger
automatically inserts a 404
(not found) error in the generated Swagger output.
However, auto-generation of error responses has two problems:
- The generated error responses are sometimes incomplete.
- The generated error responses are sometimes inappropriate.
Example
Consider the following endpoint, allowing the caller to add a new Location
to a location database:
type AddLocation = "location"
:> "add"
:> Summary "Add a new location"
:> Capture "locationName" Text
:> Put '[JSON] Location
By default, the generated Swagger output includes a 404 error (not found):
"/location/add/{locationName}" :
{ "put" :
{ "parameters" : [ { "in" : "path"
, "type" : "string"
, "name" : "locationName"
, "required" : true } ]
, "responses" :
{ "200" : { "schema" : { "$ref" : "#/definitions/Location" }
, "description" : "the added location" }
, "404" : { "description" : "`locationName` not found" } }
, "summary" : "Add a new location"
, "produces" : ["application/json;charset=utf-8"] } }
In the above example:
- The generated error is inappropriate. Since we're adding a new location (and not looking up an existing location), we don't ever want to return a 404.
- The error we really want is missing. We'd like to perform various validation checks on the new location name, and possibly return a 400 error if validation fails. However, this isn't included in the generated Swagger output.
What do we really want?
Suppose that adding a Location
can fail in two ways, either because:
- the location name is too short; or
- the location name contains invalid characters.
We'd ideally like for the "responses"
section to reflect the above modes of failure:
"responses" :
{ "200" : { "schema" : { "$ref" : "#/definitions/Location" }
, "description" : "the added location" }
, "400" : { "description" :
"the location name was too short
OR
the location name contained invalid characters" } }
How can we achieve this?
The servant-checked-exceptions
package defines the Throws
combinator, making it possible to specify individual exceptions as part of the endpoint definition.
Let's have a look at how we might use the Throws
combinator to define our modes of failure:
type AddLocation = "location"
:> "add"
:> Summary "Add a new location"
:> Throws LocationNameHasInvalidCharsError
:> Throws LocationNameTooShortError
:> Capture "locationName" Text
:> Put '[JSON] Location
data LocationNameTooShortError = LocationNameTooShortError
deriving (Eq, Generic, Read, Show)
data LocationNameHasInvalidCharsError = LocationNameHasInvalidCharsError
deriving (Eq, Generic, Read, Show)
The above type specifies an endpoint that can throw two different types of exception.
It's possible to assign specific response codes to individual exceptions by defining ErrStatus
instances. In our example, both exceptions will share the same response code 400
(bad request):
instance ErrStatus LocationNameHasInvalidCharsError where
toErrStatus _ = toEnum 400
instance ErrStatus LocationNameTooShortError where
toErrStatus _ = toEnum 400
For client code that's written in Haskell, the servant-checked-exceptions
library provides the very useful catchesEnvelope
function, allowing the caller to perform exception case analysis on values returned by an API.
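For illustration, the kind of exhaustive case analysis that catchesEnvelope enables can be modelled with a plain sum type. This is a simplified Either-based stand-in, not the open-union Envelope type from servant-checked-exceptions:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)

-- Simplified model of the endpoint's failure modes.
data AddLocationError
    = LocationNameTooShort
    | LocationNameHasInvalidChars
    deriving (Eq, Show)

-- In the spirit of catchesEnvelope: one handler per error case,
-- plus a handler for the successful result.
handleResult :: Either AddLocationError Text -> Text
handleResult = either onErr onOk
  where
    onOk loc = "added location: " <> loc
    onErr LocationNameTooShort =
        "error: the location name was too short"
    onErr LocationNameHasInvalidChars =
        "error: the location name contained invalid characters"
```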
So far so good.
However there are two problems that we need to solve:
- servant-swagger doesn't know what to do with the Throws combinator.
- servant-swagger inserts its own default error response codes.
In the sections below, we attempt to solve these problems.
Adding custom errors to the generated Swagger output
Recall that Swagger error definitions include a description:
"400" : { "description" : "the location name was too short" }
By default, servant-checked-exceptions
doesn't provide a way to define descriptions for exceptions. We can solve this by defining our own ErrDescription
class, and providing instances:
class ErrDescription e where
toErrDescription :: e -> Text
instance ErrDescription LocationNameHasInvalidCharsError where
toErrDescription _ =
"the location name contained non-alphabetic characters"
instance ErrDescription LocationNameTooShortError where
toErrDescription _ =
"the location name was too short"
To include these descriptions in the generated Swagger output, we need to define a new instance of the HasSwagger
type class:
type IsErr err = (ErrDescription err, ErrStatus err)
instance (IsErr err, HasSwagger sub) => HasSwagger (Throws err :> sub)
where
toSwagger _ =
toSwagger (Proxy :: Proxy sub) &
setResponseWith
(\old _ -> addDescription old)
(fromEnum $ toErrStatus (undefined :: err))
(return $ mempty & description .~ errDescription)
where
addDescription = description %~ ((errDescription <> " OR ") <>)
errDescription = toErrDescription (undefined :: err)
Note that in the above instance, if multiple errors share the same response code, then we concatenate together the descriptions, separating the descriptions with " OR "
.
Let's have a look at the auto-generated output:
"responses" :
{ "200" : { "schema" : { "$ref" : "#/definitions/Location" }
, "description" : "the added location" }
, "400" : { "description" :
"the location name was too short
OR
the location name contained invalid characters" }
, "404" : { "description" : "`locationName` not found" } }
The 400
section now contains what we want.
However, the unwanted 404
section is still there. Recall that this is generated automatically by servant-swagger
. How can we remove this default error response?
Removing default errors from the generated Swagger output
Currently, servant-swagger
doesn't provide a way to disable the generation of default error responses. There are several possible ways to solve this:
- Provide a patch to servant-swagger that adds a configuration option to disable the generation of default error responses. See here for a simple fork that disables all default error responses.
- Define a new alternative Capture operator and associated HasSwagger instances that don't generate default error responses. The downside is that this might require us to define instances for multiple other type classes.
- Amend the HasSwagger instance of Throws to detect and erase any default error responses. This solution would be rather brittle, as it would require the Throws combinator to appear at a particular place in the endpoint definition.
- Add a new combinator that disables the generation of default error responses. This solution would also be rather brittle, as it would require the new combinator to appear at a particular place in the endpoint definition.
Complete working example project
See the following example project for a complete working example:
https://github.com/jonathanknowles/servant-checked-exceptions-example
Note that the above example also uses a patched version of servant-client
, to allow pattern matching on error responses with the catchesEnvelope
function.
Nix
Nix is a package manager and build tool. It is used in cardano-wallet
for:
- Provisioning dependencies in Buildkite CI.
- Reproducible development environments (nix develop).
Nix is required for cardano-wallet
- development
- building of executables
- building of the docker image
Installing/Upgrading Nix
The minimum required version of Nix is 2.5.
Binary cache
To improve build speed, it is highly recommended (but not mandatory) to configure the binary cache maintained by IOG.
See iohk-nix/docs/nix.md or cardano-node/doc/getting-started/building-the-node-using-nix.md for instructions on how to configure the IOG binary cache on your system.
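As a sketch, a user-level Nix configuration enabling the cache might look like the following. Verify the exact substituter URL and public key against the linked documentation, as they can change:

```
# ~/.config/nix/nix.conf -- values shown are assumptions; check the docs above
substituters = https://cache.nixos.org https://cache.iog.io
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= hydra.iohk.io:f/Ea+s+dFdN+3Y/G+FDgSq+a5NEWhJGzdjvKNGv0/EQ=
```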
Building with Nix
See the Building page.
Reproducible Development Environment
Run nix develop
to get a build environment which includes all
necessary compilers, libraries, and development tools.
This uses the devShell
attribute of flake.nix
.
Full instructions are on the Building page.
Development tips
Code generation
The Nix build depends on code which is generated from the Cabal files. If you change these files, then you will probably need to update the generated files.
To do this, run:
./nix/regenerate.sh
Then add and commit the files that it creates.
Alternatively, wait for Buildkite to run this same command. It will produce a patch, and also push a commit with updates back to your branch.
Haskell.nix pin
The Nix build also depends on the Haskell.nix build infrastructure.
It may be necessary to update haskell.nix
when moving to a
new Haskell LTS version or adding Hackage dependencies.
To update to the latest version, run the following command:
$ nix flake lock --update-input haskellNix
warning: updating lock file '/home/rodney/iohk/cw/flake/flake.lock':
• Updated input 'haskellNix':
'github:input-output-hk/haskell.nix/f05b4ce7337cfa833a377a4f7a889cdbc6581103' (2022-01-11)
→ 'github:input-output-hk/haskell.nix/a3c9d33f301715b162b12afe926b4968f2fe573d' (2022-01-17)
• Updated input 'haskellNix/hackage':
'github:input-output-hk/hackage.nix/d3e03042af2d0c71773965051d29b1e50fbb128e' (2022-01-11)
→ 'github:input-output-hk/hackage.nix/3e64c7f692490cb403a258209e90fd589f2434a0' (2022-01-17)
• Updated input 'haskellNix/stackage':
'github:input-output-hk/stackage.nix/308844000fafade0754e8c6641d0277768050413' (2022-01-11)
→ 'github:input-output-hk/stackage.nix/3c20ae33c6e59db9ba49b918d69caefb0187a2c5' (2022-01-15)
warning: Git tree '/home/rodney/iohk/cw/flake' is dirty
Then commit the updated flake.lock file.
When updating Haskell.nix, consult the ChangeLog file. There may have been API changes which need corresponding updates in cardano-wallet
.
iohk-nix pin
The procedure for updating the iohk-nix
library of common code is much the same as for Haskell.nix. Run this command and commit the updated flake.lock
file:
$ nix flake lock --update-input iohkNix
It is not often necessary to update iohk-nix
. Before updating, ask devops whether there may be changes which affect our build.
Common problems
Warning: dumping very large path
warning: dumping very large path (> 256 MiB); this may run out of memory
Make sure you don't have large files or directories in your git worktree.
When building, Nix will copy the project sources into
/nix/store
. Generated folders such as dist-newstyle
will be filtered
out, but everything else will be copied.
Nix flake
Status
Accepted - ADP-983
Context
The DevOps team have contributed a PR which converts the Nix build to the new Nix flake format.
This blog series provides some background information on flakes.
DevOps team also wish to convert Daedalus to use Nix flakes. For this to work well, it's better that Daedalus dependencies such as cardano-wallet are also defined as flakes.
The tl;dr for flakes is that default.nix
and shell.nix
are deprecated in favour of flake.nix
.
You type nix build .#cardano-wallet
instead of nix-build -A cardano-wallet
and you type nix develop
instead of nix-shell
.
Decision
Review and merge PR #2997, since flakes seem to be the future and offer some benefits to developers.
This will add a flake.nix
file, and replace default.nix
, shell.nix
, and release.nix
with backwards compatibility shims.
- Pay close attention to any broken builds or CI processes which may result from these changes.
- Ensure that the documentation is updated with the new build commands.
- Notify developers that the Nix build files have changed, and they may need to modify the commands which they use.
- After a while, remove default.nix and shell.nix and associated compatibility code.
Consequences
- Developers and users of the Nix build will now need at least Nix version 2.4 - Nix 2.5 is probably better.
- The nix build CLI for building with flakes seems nicer than the old nix-build.
- Apparently, nix develop has better caching than nix-shell, and so it will be faster to start the dev environment.
- The process for updating Nix dependencies (e.g. Haskell.nix) is easier and more sane: nix flake lock.
- Other PRs which are open may need to be rebased on latest master for their CI to pass.
How – Processes
Testing
Pre-requisites for a fast development cycle
Enter a valid shell with:
nix develop
before running the tests. This will bring into scope all the necessary tools and dependencies.
Unit Tests
just unit-tests-cabal
matching the test title
just unit-test-cabal-match "something matching the title"
Integration Tests
babbage era
just babbage-integration-tests-cabal
matching the test title
just babbage-integration-tests-cabal-match "something matching the title"
conway era
just conway-integration-tests-cabal
matching the test title
just conway-integration-tests-cabal-match "something matching the title"
Environment Variables
Several environment variables control debugging features of the integration tests and test cluster.
Variable | Type | Meaning | Default |
---|---|---|---|
CARDANO_WALLET_PORT | number | Set a specific port for the wallet HTTP server | Random unused port |
NO_CLEANUP | bool | Leave the temporary directory after tests have finished. | Delete directory on exit |
CARDANO_WALLET_TRACING_MIN_SEVERITY | severity | Log level for the cardano-wallet server under test. | Critical |
CARDANO_NODE_TRACING_MIN_SEVERITY | severity | Log level for the test cluster nodes | Info |
TESTS_TRACING_MIN_SEVERITY | severity | Log level for test suite and cluster | Notice |
TESTS_LOGDIR | path | Write log files in the given directory | Log files are written to the tests temp directory |
TESTS_RETRY_FAILED | bool | Enable retrying once of failed tests | No retrying |
TOKEN_METADATA_SERVER | URL | Use this URL for querying asset metadata | Asset metadata fetching disabled |
NO_CACHE_LISTPOOLS | bool | Do not cache the pool listing retrieved from cardano-node. Testing only. Use the --no-cache-listpools command-line option for the executable. | Stake distribution is cached to improve responsiveness |
CACHE_LISTPOOLS_TTL | number | Cache time-to-live (TTL) for the pool listing. Testing only. Use the --no-cache-listpools command-line option for the executable. | 6 seconds for test builds |
Here are the possible values of different types of environment variables:
Type | Values |
---|---|
bool | unset or empty ⇒ false, anything else ⇒ true |
severity | debug, info, notice, warning, error, critical |
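The bool convention can be sketched in shell. The helper below is hypothetical, purely to illustrate the unset-or-empty rule:

```shell
# Illustrative helper mirroring the test suite's bool convention:
# unset or empty => false, anything else (even "0" or "false") => true.
is_true() {
    if [ -n "$1" ]; then echo true; else echo false; fi
}

is_true ""        # prints: false
is_true "1"       # prints: true
is_true "false"   # prints: true (non-empty, so still true!)
```

In other words, setting NO_CLEANUP=false still enables the option; unset the variable (or set it to the empty string) to disable it.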
Logging and debugging
If your test has failed, viewing the logs often helps. They are written to file in the integration tests temporary directory.
To inspect this directory after the tests have finished, set the NO_CLEANUP
variable.
Here is an example tree
/tmp/test-8b0f3d88b6698b51
├── bft
│ ├── cardano-node.log
│ ├── db
│ ├── genesis.json
│ ├── node.config
│ ├── node-kes.skey
│ ├── node.opcert
│ ├── node.socket
│ ├── node.topology
│ └── node-vrf.skey
├── pool-0
│ ├── cardano-node.log
│ ├── db
│ ├── dlg.cert
│ ├── faucet.prv
│ ├── genesis.json
│ ├── kes.prv
│ ├── kes.pub
│ ├── metadata.json
│ ├── node.config
│ ├── node.socket
│ ├── node.topology
│ ├── op.cert
│ ├── op.count
│ ├── op.prv
│ ├── op.pub
│ ├── pool.cert
│ ├── sink.prv
│ ├── sink.pub
│ ├── stake.cert
│ ├── stake.prv
│ ├── stake.pub
│ ├── tx.raw
│ ├── tx.signed
│ ├── vrf.prv
│ └── vrf.pub
├── pool-1
│ ├── cardano-node.log
│ ├── db
│ ├── dlg.cert
│ ├── faucet.prv
│ ├── genesis.json
│ ├── kes.prv
│ ├── kes.pub
│ ├── metadata.json
│ ├── node.config
│ ├── node.socket
│ ├── node.topology
│ ├── op.cert
│ ├── op.count
│ ├── op.prv
│ ├── op.pub
│ ├── pool.cert
│ ├── sink.prv
│ ├── sink.pub
│ ├── stake.cert
│ ├── stake.prv
│ ├── stake.pub
│ ├── tx.raw
│ ├── tx.signed
│ ├── vrf.prv
│ └── vrf.pub
├── pool-2
│ ├── cardano-node.log
│ ├── db
│ ├── dlg.cert
│ ├── faucet.prv
│ ├── genesis.json
│ ├── kes.prv
│ ├── kes.pub
│ ├── metadata.json
│ ├── node.config
│ ├── node.socket
│ ├── node.topology
│ ├── op.cert
│ ├── op.count
│ ├── op.prv
│ ├── op.pub
│ ├── pool.cert
│ ├── sink.prv
│ ├── sink.pub
│ ├── stake.cert
│ ├── stake.prv
│ ├── stake.pub
│ ├── tx.raw
│ ├── tx.signed
│ ├── vrf.prv
│ └── vrf.pub
├── wallets-b33cfce13ce1ac74
│ └── stake-pools.sqlite
├── cluster.log
└── wallet.log
The default log level for log files is Info.
Only Error level logs are shown on stdout during test execution. To
change this, set the *_MIN_SEVERITY
variables shown above.
Common Failures and Resolution
No More Wallets
If your test fails with something like:
user error (No more faucet wallet available in MVar!)
Generate more wallet mnemonics and populate the appropriate list in lib/wallet/src/Test/Integration/Faucet.hs
.
Generate new mnemonics with:
nix build .#cardano-wallet
# Size may vary depending on which array you need to add to.
./result/bin/cardano-wallet recovery-phrase generate --size 15
Mock servers
Use the cardano-wallet:mock-token-metadata-server
executable as a
mock server for asset metadata. See the --help
output for full
instructions.
Benchmarks
Database
$ cabal bench cardano-wallet:db
Restoration
Pre-requisites
- Follow the pre-requisites from integration above.
- (Optional) Install hp2pretty:
$ cabal install hp2pretty
Test
Restoration benchmarks will catch up with the chain before running, which can take quite a long time in the case of mainnet. For a better experience, make sure your system isn't too far behind the tip before running.
$ cabal bench cardano-wallet:restore
Alternatively, one can specify a target network (by default, benchmarks run on testnet
):
$ cabal bench cardano-wallet:restore --benchmark-options "mainnet"
Also, it's interesting to look at heap consumption during the running of the benchmark:
$ cabal bench cardano-wallet:restore --benchmark-options "mainnet +RTS -h -RTS"
$ hp2pretty restore.hp
$ eog restore.svg
Code Coverage
Pre-requisites
- Follow the pre-requisites from integration above.
Test
Running combined code coverage on all components is pretty easy. This generates code coverage reports in an HTML format as well as a short summary in the console. Note that, because code has to be compiled in a particular way to be "instrumentable" by the code coverage engine, it is recommended to run this command using another working directory (--builddir
option) so that one can easily switch between coverage testing and standard testing (faster to run):
$ cabal test all --enable-coverage --builddir .dist-coverage
Note that integration tests are excluded from the basic coverage report because the cardano-wallet server runs in a separate process. It is still possible to combine coverage from various sources (see this article for some examples / details).
E2E Tests
See: README.
QA Schedule
See our Advice Process on Continuous Integration for information on which test is executed when.
Continuous Integration
TODO: Information about our continuous integration pipeline.
Release Process
- Create a new page on cardano-wallet's wiki called Release vYYYY-MM-DD by copying the contents of the previous release.
- Follow the Release checklist. Update progress or report obstacles on the thread.
Release checklist
Code Review Guidelines
Table of Contents
As a Reviewer or Author
DO: Assume competence.
An author’s implementation or a reviewer’s recommendation may be due to the other party having different context than you. Start by asking questions to gain understanding.
DO: Provide rationale or context
Such as a best practices document, a style guide, or a design document. This can help others understand your decision or provide mentorship.
DO: Consider how comments may be interpreted.
Be mindful of the differing ways hyperbole, jokes, and emojis may be perceived.
e.g.:
Authors Don’t | Authors Do |
---|---|
I prefer short names so I’d rather not change this. Unless you make me? :) | Best practice suggests omitting obvious/generic terms. I’m not sure how to reconcile that advice with this request. |
DON’T: Criticize the person.
Instead, discuss the code. Even the perception that a comment is about a person (e.g., due to using “you” or “your”) distracts from the goal of improving the code.
e.g.:
Reviewers Don’t | Reviewers Do |
---|---|
Why are you using this approach? You’re adding unnecessary complexity. | This concurrency model appears to be adding complexity to the system without any visible performance benefit. |
DON’T: Use harsh language.
Code review comments with a negative tone are less likely to be useful. For example, prior research found very negative comments were considered useful by authors 57% of the time, while more-neutral comments were useful 79% of the time.
As a Reviewer
DO: Provide specific and actionable feedback.
If you don’t have specific advice, sometimes it’s helpful to ask for clarification on why the author made a decision.
e.g.:
Reviewers Don’t | Reviewers Do |
---|---|
I don’t understand this. | If this is an optimization, can you please add comments? |
DO: Clearly mark nitpicks and optional comments.
By using prefixes such as ‘Nit’ or ‘Optional’. This allows the author to better gauge the reviewer’s expectations.
As an Author
DO: Clarify code or reply to the reviewer’s comment.
When given feedback, clarify the code or reply to the comment; failing to do so can signal a lack of receptiveness to implementing improvements to the code.
e.g.
Authors Don’t | Authors Do |
---|---|
That makes sense in some cases but not here. | I added a comment about why it’s implemented that way. |
DO: When disagreeing with feedback, explain the advantage of your approach.
In cases where you can’t reach consensus, bring the discussion on Slack with other peers from the team.
From an original source: https://testing.googleblog.com/2019/11/code-health-respectful-reviews-useful.html
Notes
Updating Dependencies
If you use Nix to manage build and runtime dependencies you can be confident that you will always have the correct versions for the branch you are on, and that these will be exactly the same as those versions used in CI.
It is possible to specify any git revision for a dependency, and Nix will automatically build it; if it has already been built in a Nix cache, the build result will be downloaded instead.
nix develop
The default nix develop
contains build tools, utilities and GHC
configured with a global package-db which matches cabal.project
. This
is defined in the devShells.default
attribute of flake.nix
.
nix flake lock
nix flake
manages the file flake.lock
.
Updating node backends
cardano-node
Haskell dependencies
To bump to a new version:
- In cardano-wallet/cabal.project, update the dependency revisions to match your chosen version of cardano-node/cabal.project.
- Run ./nix/regenerate.sh (or let Buildkite do it).
Jörmungandr
Follow the instructions in nix/jormungandr.nix
.
Upgrading the GHC version of cardano-wallet
Here is a reference PR that upgrades to GHC 8.10.7: https://github.com/cardano-foundation/cardano-wallet/pull/2969
WARNING: Updating haskell.nix and/or GHC changes a lot of the build environment. You should expect to spend time fixing breakages.
Process
- Update "with-compiler" in cabal.project:
diff --git a/cabal.project b/cabal.project
index 1ba2edf625..109534719a 100644
--- a/cabal.project
+++ b/cabal.project
@@ -39,7 +39,7 @@
--------------------------------------------------------------------------------
index-state: 2021-10-05T00:00:00Z
-with-compiler: ghc-8.10.5
+with-compiler: ghc-8.10.7
packages:
lib/wallet/
- Update haskell.nix: run nix flake lock --update-input haskellNix.
Troubleshooting
The following is a list of issues encountered so far while executing this process:
Compile-time error when building a package or dependency
For example:
src/Cardano/Config/Git/Rev.hs:33:35: error:
• Exception when trying to run compile-time code:
git: readCreateProcessWithExitCode: posix_spawnp: failed (Undefined error: 0)
Code: gitRevFromGit
• In the untyped splice: $(gitRevFromGit)
|
33 | fromGit = T.strip (T.pack $(gitRevFromGit))
In this case, the first guess is that git is not in the PATH when compiling the Template Haskell of cardano-config. The following line in our nix/haskell.nix file fixes this, adding a build-time dependency to our downstream dependencies (and one of our own too):
diff --git a/nix/haskell.nix b/nix/haskell.nix
index 8bb80f7e99..8ac227a865 100644
--- a/nix/haskell.nix
+++ b/nix/haskell.nix
@@ -302,6 +302,10 @@ let
pkg-set = haskell.mkStackPkgSet {
inherit stack-pkgs;
modules = [
# ...
+ {
+ packages.cardano-wallet.components.library.build-tools = [ pkgs.buildPackages.buildPackages.gitMinimal ];
+ packages.cardano-config.components.library.build-tools = [ pkgs.buildPackages.buildPackages.gitMinimal ];
+ }
];
};