

For full parser CLI documentation, see Parser CLI. This guide covers the practical steps to test your visualization locally.

Prerequisites

Before testing, ensure you have:
  • The parser_cli binary built (see installation instructions)
  • A raw transaction hex from your DApp or protocol
  • The expected output you want to verify

Quick testing workflow

1. Parse your transaction

Run the parser CLI with your transaction:
parser_cli --chain <chain> -t <your_transaction_hex> --output human
For example, with an Ethereum transaction:
parser_cli --chain ethereum -t 0x02f9... --output human

2. Verify the output

Check that the visualization:
  • Shows the correct action (swap, transfer, approval, etc.)
  • Displays accurate amounts and addresses
  • Uses appropriate labels that users will understand

3. Test the condensed view

Hardware wallets have limited screen space. Verify your visualization works in condensed mode:
parser_cli --chain <chain> -t <your_transaction_hex> --output human --condensed-only
The condensed view should show only the most critical information.

4. Check JSON output

For programmatic validation, use JSON output:
parser_cli --chain <chain> -t <your_transaction_hex> --output json
You can extract specific fields with jq:
parser_cli --chain ethereum -t <tx_hex> --output json | jq '.Fields'

Common issues

Transaction fails to parse

Cause: Incorrect chain type or malformed hex encoding. Solution: Verify the chain flag matches your transaction and that the hex is properly formatted (with or without 0x prefix, depending on chain conventions).
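Malformed hex can be caught before invoking the CLI. A minimal sketch, assuming nothing about parser_cli's internals: the `normalize_tx_hex` helper below is hypothetical, and simply strips an optional 0x prefix and rejects odd-length or non-hex input, the two most common causes of this failure.

```rust
// Hypothetical helper (not part of parser_cli): normalize a transaction hex
// string, stripping an optional "0x" prefix and rejecting malformed input.
fn normalize_tx_hex(raw: &str) -> Result<Vec<u8>, String> {
    let trimmed = raw.trim();
    let hex = trimmed.strip_prefix("0x").unwrap_or(trimmed);
    if !hex.is_ascii() {
        return Err("non-ASCII input".to_string());
    }
    if hex.len() % 2 != 0 {
        return Err(format!("odd-length hex ({} chars)", hex.len()));
    }
    // Decode each byte pair; report the offset of the first bad pair.
    (0..hex.len())
        .step_by(2)
        .map(|i| {
            u8::from_str_radix(&hex[i..i + 2], 16)
                .map_err(|e| format!("invalid hex at offset {}: {}", i, e))
        })
        .collect()
}

fn main() {
    assert_eq!(normalize_tx_hex("0x02f9").unwrap(), vec![0x02, 0xf9]);
    assert_eq!(normalize_tx_hex("02f9").unwrap(), vec![0x02, 0xf9]);
    assert!(normalize_tx_hex("0x2f9").is_err()); // odd length
}
```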

Missing protocol details

Cause: The parser does not recognize your contract or protocol. Solution: You may need to add a protocol-specific preset. See Creating Visualizations for patterns.

Output too verbose for hardware wallets

Cause: The condensed view includes too many fields. Solution: Review your visualization’s Condensed section in the PreviewLayout and reduce it to only the essential fields.

Amounts display incorrectly

Cause: Decimal handling or token metadata issues. Solution: Verify that token decimals are correctly applied. Use the amount_v2 field type with proper Amount and Abbreviation values.
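The decimal pitfall is easiest to see with a small sketch. The `format_amount` helper below is illustrative only (it is not the amount_v2 implementation): it shows the raw-integer-to-display conversion that token decimals control, e.g. a raw USDC value of 1_500_000 with 6 decimals should render as "1.5", not "1500000".

```rust
// Illustrative sketch (not the real amount_v2 API): apply token decimals to
// a raw on-chain integer amount and produce a human-readable string.
fn format_amount(raw: u128, decimals: u32) -> String {
    let base = 10u128.pow(decimals);
    let whole = raw / base;
    let frac = raw % base;
    if frac == 0 {
        return whole.to_string();
    }
    // Left-pad the fractional part to `decimals` digits, then trim
    // trailing zeros so 1_500_000 / 10^6 renders as "1.5", not "1.500000".
    let frac_str = format!("{:0width$}", frac, width = decimals as usize);
    format!("{}.{}", whole, frac_str.trim_end_matches('0'))
}

fn main() {
    assert_eq!(format_amount(1_500_000, 6), "1.5");   // 6-decimal token
    assert_eq!(format_amount(42, 0), "42");           // no decimals
    assert_eq!(format_amount(1, 18), "0.000000000000000001"); // 18-decimal token
}
```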

Adding test fixtures

When your visualization is working, add a test fixture to ensure it does not regress.

1. Save your transaction

Create a fixture file in the appropriate chain directory:
src/chain_parsers/visualsign-<chain>/tests/fixtures/
Name the file descriptively, for example my_protocol_swap.input.

2. Add expected output

Create a corresponding expected output file with the same name but .expected extension. This captures the correct JSON output for comparison.

3. Write a test

Add a test that compares the parser output against your expected fixture:
#[test]
fn test_my_protocol_swap() {
    let input = include_str!("fixtures/my_protocol_swap.input");
    let expected = include_str!("fixtures/my_protocol_swap.expected");

    let result = parse_transaction(input);
    assert_eq!(result, expected);
}

4. Run tests

Verify your fixture passes:
cargo test -p visualsign-<chain> test_my_protocol

Property-based testing (Solana)

Solana IDL parsing includes proptest-based fuzz tests that verify crash safety and correctness across randomly generated IDLs and instruction data. These tests live in:
  • src/chain_parsers/visualsign-solana/tests/fuzz_idl_parsing.rs — parser-level fuzz and roundtrip tests
  • src/chain_parsers/visualsign-solana/tests/pipeline_integration.rs — full-pipeline integration tests
  • src/chain_parsers/visualsign-solana/tests/semantic_pipeline.rs — deterministic tests with real embedded IDLs
  • src/chain_parsers/visualsign-solana/tests/common/mod.rs — shared test helpers

Running proptest tests

# Run all tests (proptest + semantic + fuzz_idl_parsing)
cargo test -p visualsign-solana

# Default 256 cases per property
cargo test -p visualsign-solana --test fuzz_idl_parsing

# More iterations for deeper fuzzing
PROPTEST_CASES=5000 cargo test -p visualsign-solana --test fuzz_idl_parsing

# Semantic tests only (real embedded IDLs)
cargo test -p visualsign-solana --test semantic_pipeline

Running cargo fuzz targets (libFuzzer)

The fuzz/ directory contains libFuzzer targets that feed arbitrary bytes into the full visualsign-solana stack. These require a nightly toolchain and cargo-fuzz:
cargo install cargo-fuzz --locked

# Run a fuzz target for 30 seconds (same as CI)
cd src/chain_parsers/visualsign-solana/fuzz
cargo +nightly fuzz run fuzz_transaction_string -- -max_total_time=30
cargo +nightly fuzz run fuzz_versioned_transaction -- -max_total_time=30
Available targets:
  • fuzz_transaction_string (entry point transaction_string_to_visual_sign): base64/hex decoding, transaction deserialization, IDL dispatch
  • fuzz_versioned_transaction (entry point versioned_transaction_to_visual_sign): bincode deserialization, versioned transaction path, address table lookups
When a crash is found, libFuzzer writes a reproducer to artifacts/. Reproduce it with:
cargo +nightly fuzz run <target> artifacts/<target>/crash-<hash>

Testing against real IDLs

The scripts/fuzz_all_idls.sh helper runs the fuzz tests against all embedded production IDLs in one pass:
./scripts/fuzz_all_idls.sh
You can also target a specific IDL:
IDL_FILE=/path/to/my_program.json cargo test -p visualsign-solana --test fuzz_idl_parsing real_idl

Roundtrip tests

A roundtrip test constructs an IDL and matching borsh-encoded instruction bytes, feeds them through the parser, and verifies the output matches expectations. “Roundtrip” refers to the encode-then-decode cycle: you know exactly what went in, so you can assert exactly what comes out. There are two kinds in use:
  • Concrete roundtrips (e.g., roundtrip_single_u64_arg) — Hand-crafted IDL JSON and hand-crafted byte payloads. These assert that specific parsed values match exactly (e.g., amount == 42). They serve as specification-by-example: each test documents one type scenario (no args, mixed primitives, Option<T>, Vec<T>, defined structs, multi-instruction dispatch).
  • Property-based roundtrips (e.g., fuzz_valid_data_always_parses_ok) — Randomly generated IDL shapes paired with machine-generated valid borsh bytes from arb_valid_instruction_bytes. These assert that parsing succeeds and the instruction name matches, without checking specific field values. They verify the parser’s contract holds across all type combinations, not just the hand-picked examples.
Both kinds complement each other: concrete roundtrips pin down known-good behavior, while property-based roundtrips explore the space of inputs you did not think to write by hand.
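The concrete-roundtrip idea can be sketched in miniature. Everything below is a toy stand-in, assuming an Anchor-style 8-byte discriminator prefix and borsh's little-endian u64 encoding; `decode_single_u64_arg` is not the real parser, it just makes the encode-then-decode cycle explicit.

```rust
// Toy stand-in for the parser: skip the 8-byte discriminator, then read the
// single u64 argument (borsh encodes a u64 as 8 little-endian bytes).
fn decode_single_u64_arg(data: &[u8]) -> Option<u64> {
    let arg = data.get(8..16)?;
    Some(u64::from_le_bytes(arg.try_into().ok()?))
}

fn main() {
    let discriminator = [0xAAu8; 8]; // placeholder discriminator
    let amount: u64 = 42;

    // Encode: discriminator followed by the borsh (little-endian) u64.
    let mut data = discriminator.to_vec();
    data.extend_from_slice(&amount.to_le_bytes());

    // Roundtrip assertion: we built the bytes, so we know the exact answer.
    assert_eq!(decode_single_u64_arg(&data), Some(42));
    assert_eq!(decode_single_u64_arg(&[0u8; 4]), None); // too short
}
```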

Adding a new test

  1. Write a strategy that generates the IDL shape you want to test (or use an existing one from solana_parser_fuzz_core::proptest)
  2. Add a proptest! test that exercises the parser with generated inputs
  3. Add a concrete roundtrip test for the same scenario to serve as specification-by-example
  4. Run the tests — if proptest finds a failure, it saves a regression seed to .proptest-regressions
  5. Commit the .proptest-regressions file so the failing case is reproduced in CI
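The property-based half of step 2 can be sketched without the proptest crate: a seeded generator produces many valid payloads, and the test asserts the decoder accepts every one. proptest adds richer strategies, shrinking, and regression seeds on top of this idea; all names below are illustrative, including the toy decoder.

```rust
// Toy decoder: skip an 8-byte discriminator, read one little-endian u64.
fn decode_single_u64_arg(data: &[u8]) -> Option<u64> {
    let arg = data.get(8..16)?;
    Some(u64::from_le_bytes(arg.try_into().ok()?))
}

// Tiny linear congruential generator so the sketch stays std-only and
// deterministic (proptest's strategies play this role in the real tests).
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

fn main() {
    let mut seed = 0x5eed_u64;
    for _ in 0..256 {
        let amount = lcg(&mut seed);
        let mut data = vec![0u8; 8]; // placeholder discriminator
        data.extend_from_slice(&amount.to_le_bytes());
        // Property: any validly encoded payload decodes to the encoded value.
        assert_eq!(decode_single_u64_arg(&data), Some(amount));
    }
}
```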

CI workflows

Tests are triggered by adding labels to a PR:
  • proptest (proptest.yml): cargo test -p visualsign-solana
  • fuzz (fuzz.yml): both libFuzzer targets for 30 seconds each
  • ci (main.yml): full build, lint, and test suite
If a fuzz target crashes, the fuzz-failure label is added to the PR. If a proptest fails, the proptest-failure label is added. These labels are removed automatically on a clean run.

Validation checklist

Before submitting your visualization:
  • Parses correctly with --output human
  • Condensed view shows critical information only
  • Amounts and addresses are accurate
  • Labels are clear to non-technical users
  • Test fixture added and passing