
Open Autonomy CLI tool

open-autonomy is a collection of tools from the Open Autonomy framework packaged into a CLI tool.

Deploy

$ autonomy deploy
Usage: autonomy deploy [OPTIONS] COMMAND [ARGS]...
  Deploy an AEA project.
Options:
  --help  Show this message and exit.
Commands:
  build  Build the agent and its components.

Build tools.

  1. Service Deployment
Usage: autonomy deploy build [OPTIONS] [KEYS_FILE]

  Build deployment setup for n agents.

Options:
  --o PATH                        Path to output dir.
  --n INTEGER                     Number of agents.
  --docker                        Use docker as a backend.
  --kubernetes                    Use kubernetes as a backend.
  --dev                           Create development environment.
  --force                         Remove existing build and overwrite with new
                                  one.
  --log-level [INFO|DEBUG|WARNING|ERROR|CRITICAL]
                                  Logging level for runtime.
  --packages-dir PATH             Path to packages dir (Use with dev mode)
  --open-aea-dir PATH             Path to open-aea repo (Use with dev mode)
  --open-autonomy-dir PATH        Path to open-autonomy repo (Use with dev
                                  mode)
  --remote                        To use a remote registry.
  --local                         To use a local registry.
  -p                              Ask for password interactively
  --password PASSWORD             Set password for key encryption/decryption
  --help                          Show this message and exit.

To create an environment you'll need a service and a file containing keys with funds for the chain you want to use. The necessary images will be built during the build process.

PUBLIC_ID_OR_HASH refers to the public ID of the service, e.g. valory/oracle_hardhat. KEYS_FILE refers to the file containing funded keys for the agent instances.

Example keys file

[
    {
        "address": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
        "private_key": "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
    },
    {
        "address": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
        "private_key": "0x59c6995e998f97a5a0044966f0945389dc9e86dae88c7a8412f4603b6b78690d"
    },
    {
        "address": "0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC",
        "private_key": "0x5de4111afa1a4b94908f83103eb1f1706367c2e68ca870fc3fb9a804cdab365a"
    },
    {
        "address": "0x90F79bf6EB2c4f870365E785982E1f101E93b906",
        "private_key": "0x7c852118294e51e653712a81e05800f419141751be58f605c371e15141b007a6"
    }
]
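
Before building a deployment it can help to sanity-check the keys file. A minimal sketch (a hypothetical helper, not part of the CLI) that verifies each entry has a well-formed address and private key:

```python
import json
import re

# Hex-format checks only; this does not verify that the address
# actually corresponds to the private key.
HEX_ADDR = re.compile(r"^0x[0-9a-fA-F]{40}$")
HEX_KEY = re.compile(r"^0x[0-9a-fA-F]{64}$")

def check_keys(entries):
    """Return True if every entry looks like a funded-key record."""
    for entry in entries:
        if not HEX_ADDR.match(entry.get("address", "")):
            return False
        if not HEX_KEY.match(entry.get("private_key", "")):
            return False
    return True

keys = json.loads("""[
    {"address": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266",
     "private_key": "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"}
]""")
print(check_keys(keys))  # True
```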

These keys can be used for local deployments if you're using the default hardhat image for the local chain. Copy and paste them into a keys.json file and run the following commands

# fetch a service
$ autonomy fetch valory/oracle_hardhat:0.1.0:bafybeihjmdctof4idvr6r62qealqt7y4vgrnqmwhr73n3kliig3waeg6ce  --service
$ cd oracle_hardhat
# create a docker deployment
$ autonomy deploy build keys.json

This will create a deployment environment with the following directory structure

abci_build/
├── agent_keys
│   ├── agent_0
│   ├── agent_1
│   ├── agent_2
│   └── agent_3
├── nodes
│   ├── node0
│   ├── node1
│   ├── node2
│   └── node3
├── persistent_data
│   ├── benchmarks
│   ├── logs
│   ├── tm_state
│   └── venvs
└── docker-compose.yaml
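
The layout scales with the number of agents (the --n option). A sketch of the paths you can expect for n agents, mirroring the tree above (illustrative only, not a framework API):

```python
def expected_layout(n_agents):
    """List the relative paths the build step creates for n agents."""
    paths = ["docker-compose.yaml"]
    # one key file per agent instance
    paths += [f"agent_keys/agent_{i}" for i in range(n_agents)]
    # one Tendermint node directory per agent
    paths += [f"nodes/node{i}" for i in range(n_agents)]
    # shared persistent data directories
    paths += [f"persistent_data/{d}"
              for d in ("benchmarks", "logs", "tm_state", "venvs")]
    return paths

print(expected_layout(4))
```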

To run this deployment, navigate to abci_build and run docker-compose up.

  2. Deployment images
Usage: autonomy build-image [OPTIONS] [PUBLIC_ID_OR_HASH]

  Build image using skaffold.

Options:
  --service-dir PATH  Path to service dir.
  --pull              Pull latest dependencies.
  --help              Show this message and exit.
# To build an image, navigate to the service dir and run
$ autonomy build-image

This will create an image with the label valory/open-autonomy-open-aea:oracle-0.1.0. Images generated by this command follow the valory/open-autonomy-open-aea:app_name-version label format.
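
The label format stated above can be expressed directly; a small sketch (hypothetical helper, not part of the CLI):

```python
def image_tag(app_name, version):
    """Build a label in the documented format:
    valory/open-autonomy-open-aea:app_name-version
    """
    return f"valory/open-autonomy-open-aea:{app_name}-{version}"

print(image_tag("oracle", "0.1.0"))  # valory/open-autonomy-open-aea:oracle-0.1.0
```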

If you just want to build an image for a specific agent without fetching a service, run

$ autonomy build-image PUBLIC_ID
  3. Run deployments
Usage: autonomy deploy run [OPTIONS]

  Run deployment.

Options:
  --build-dir PATH
  --no-recreate     If containers already exist, dont recreate them.
  --remove-orphans  Remove containers for services not defined in the Compose
                    file.
  --help            Show this message and exit.

autonomy deploy run is a wrapper around the docker-compose up command to run service deployments. To run a deployment, navigate to the build directory and run

$ autonomy deploy run
  4. One click deployments

If you have a service registered on-chain, you can deploy it directly using the token ID generated when minting the service. Refer here for more information.

Usage: autonomy deploy from-token [OPTIONS] TOKEN_ID KEYS_FILE

  Run service deployment.

Options:
  --rpc TEXT      Custom RPC URL
  --sca TEXT      Service contract address for custom RPC URL.
  --n INTEGER     Number of agents to include in the build.
  --skip-images   Skip building images.
  --use-goerli    Use goerli chain to resolve the token id.
  --use-ethereum  Use ethereum chain to resolve the token id.
  --use-staging   Use staging chain to resolve the token id.
  --remote        To use a remote registry.
  --local         To use a local registry.
  --help          Show this message and exit.

To run a deployment for an on-chain service you'll need the token ID of the minted service and funded keys. Save the keys in keys.json and run

$ autonomy deploy from-token ON_SERVICE_TOKEN_ID keys.json

Currently we support the Autonolas staging chain, the Goerli testnet and Ethereum mainnet for resolving on-chain token IDs, but if you have a contract deployed on another chain you can provide that chain's RPC URL and the contract address using --rpc and --sca.

Replay

Replay tools can be used to re-run agent executions using data dumps from previous runs.

Note: Replay only works for deployments which were run in dev mode

Usage: autonomy replay [OPTIONS] COMMAND [ARGS]...
  Replay tools.
Options:
  --help  Show this message and exit.
Commands:
  agent       Agent runner.
  tendermint  Tendermint runner.

agent

Usage: autonomy replay agent [OPTIONS] AGENT
  Agent runner.
Options:
  --build PATH     Path to build dir.
  --registry PATH  Path to registry folder.
  --help           Show this message and exit.

tendermint

Usage: autonomy replay tendermint [OPTIONS]
  Tendermint runner.
Options:
  --build PATH  Path to build directory.
  --help        Show this message and exit.

Replay agent runs from Tendermint dumps

  1. Copy and paste the following Makefile into your local environment:

.PHONY: run-hardhat
run-hardhat:
  docker run -p 8545:8545 -it valory/open-autonomy-hardhat:0.1.0

# If you get the following error
# PermissionError: [Errno 13] Permission denied: '/open-aea/build/bdist.linux-x86_64/wheel'
# or one similar to PermissionError: [Errno 13] Permission denied: /**/build,
# remove the build directory from the folder the error refers to;
# for example, here it would be /path/to/open-aea/repo/build
.PHONY: run-oracle-dev
run-oracle-dev:
  if [ "${OPEN_AEA_REPO_DIR}" = "" ];\
  then\
    echo "Please ensure you have set the environment variable 'OPEN_AEA_REPO_DIR'";\
    exit 1;\
  fi
  if [ "$(shell ls ${OPEN_AEA_REPO_DIR}/build)" != "" ];\
  then \
    echo "Please remove ${OPEN_AEA_REPO_DIR}/build manually.";\
    exit 1;\
  fi

  autonomy build-image valory/oracle_hardhat --dev && \
    autonomy deploy build valory/oracle_hardhat deployments/keys/hardhat_keys.json --force --dev && \
    make run-deploy

.PHONY: run-deploy
run-deploy:
  if [ "${PLATFORM_STR}" = "Linux" ];\
  then\
    mkdir -p abci_build/persistent_data/logs;\
    mkdir -p abci_build/persistent_data/venvs;\
    sudo chown -R 1000:1000 abci_build/persistent_data/logs;\
    sudo chown -R 1000:1000 abci_build/persistent_data/venvs;\
  fi
  if [ "${DEPLOYMENT_TYPE}" = "docker-compose" ];\
  then\
    cd abci_build/ && \
    docker-compose up --force-recreate -t 600 --remove-orphans;\
    exit 0;\
  fi
  if [ "${DEPLOYMENT_TYPE}" = "kubernetes" ];\
  then\
    kubectl create ns ${VERSION} || (echo "namespace already exists!" && exit 0);\
    kubectl create secret generic regcred \
          --from-file=.dockerconfigjson=/home/$(shell whoami)/.docker/config.json \
          --type=kubernetes.io/dockerconfigjson -n ${VERSION} || (echo "failed to create secret" && exit 1);\
    cd abci_build/ && \
      kubectl apply -f build.yaml -n ${VERSION} && \
      kubectl apply -f agent_keys/ -n ${VERSION} && \
      exit 0;\
  fi
  echo "Please ensure you have set the environment variable 'DEPLOYMENT_TYPE'"
  exit 1
Then define the relevant environment variables.

  2. Run your preferred app in dev mode; for example, run the oracle in dev mode using make run-oracle-dev and, in a separate terminal, run make run-hardhat.

  3. Wait until at least one reset (reset_and_pause round) has occurred, because the Tendermint server only dumps Tendermint data on resets. Once you have a data dump, stop the app.

  4. Run autonomy replay tendermint. This will spawn a Tendermint network with the available dumps.

  5. Now you can run replays for particular agents using autonomy replay agent AGENT_ID. AGENT_ID is a number between 0 and the number of available agents minus 1, e.g. autonomy replay agent 0 will run the replay for the first agent.

Analyse

ABCI apps

Usage: autonomy analyse abci [OPTIONS] COMMAND [ARGS]...
  Analyse ABCI apps.
Options:
  --help  Show this message and exit.
Commands:
  check-app-specs     Check abci app specs.
  docstrings          Analyse ABCI docstring definitions.
  generate-app-specs  Generate abci app specs.
  logs                Parse logs.

Generate ABCI App specifications

Usage: autonomy analyse abci generate-app-specs [OPTIONS] APP_CLASS OUTPUT_FILE
  Generate abci app specs.
Options:
  --mermaid  Mermaid file.
  --yaml     Yaml file.
  --json     Json file.
  --help     Show this message and exit.

Check ABCI App specifications

Usage: autonomy analyse abci check-app-specs [OPTIONS]
  Check abci app specs.
Options:
  --check-all          Check all available definitions.
  --packages-dir PATH  Path to packages directory; Use with `--check-all` flag
  --yaml               Yaml file.
  --json               Json file.
  --app_class TEXT     Dotted path to app definition class.
  --infile PATH        Path to input file.
  --help               Show this message and exit.

Check ABCI app docstrings

Usage: autonomy analyse abci docstrings [OPTIONS] [PACKAGES_DIR]
  Analyse ABCI docstring definitions.
Options:
  --check
  --help   Show this message and exit.

Parse logs from a deployment

Usage: autonomy analyse abci logs [OPTIONS] FILE
  Parse logs.
Options:
  --help  Show this message and exit.

benchmarks

Usage: autonomy analyse benchmarks [OPTIONS] PATH
  Benchmark Aggregator.
Options:
  -b, --block-type [local|consensus|total|all]
  -d, --period INTEGER
  -o, --output FILE
  --help                          Show this message and exit.

Aggregates results from deployments.

To use this tool you'll need benchmark data generated from the agent runtime. To generate benchmark data, run

$ autonomy deploy build PATH_TO_KEYS --dev

By default this will create a 4-agent runtime. Wait until all 4 agents are at the end of the first period (you can wait for more periods if you want) and then stop the runtime. The data will be stored in the abci_build/persistent_data/benchmarks folder.

Run the deployment using

$ cd abci_build/
$ docker-compose up

You can use this tool to aggregate this data.

autonomy analyse benchmarks abci_build/persistent_data/benchmarks

By default the tool will generate output for all periods, but you can specify which period to generate output for; the same goes for block types.
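
The on-disk benchmark format is not shown here, but the aggregation the tool performs can be pictured with a small sketch. The record shape and field names below are assumptions for illustration only:

```python
from statistics import mean

# Hypothetical benchmark records; the real dump format may differ.
records = [
    {"period": 0, "block_type": "local", "seconds": 0.8},
    {"period": 0, "block_type": "consensus", "seconds": 2.1},
    {"period": 1, "block_type": "local", "seconds": 0.9},
    {"period": 1, "block_type": "consensus", "seconds": 1.9},
]

def aggregate(records, period=None, block_type=None):
    """Average durations, optionally filtered (akin to the -d and -b flags)."""
    picked = [r["seconds"] for r in records
              if (period is None or r["period"] == period)
              and (block_type is None or r["block_type"] == block_type)]
    return mean(picked) if picked else None

print(aggregate(records))  # all periods, all block types
print(aggregate(records, period=1, block_type="consensus"))  # 1.9
```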

Fetch

Usage: autonomy fetch [OPTIONS] PUBLIC_ID_OR_HASH

  Fetch an agent from the registry.

Options:
  --remote      To use a remote registry.
  --local       To use a local registry.
  --alias TEXT  Provide a local alias for the agent.
  --agent       Fetch an agent.
  --service     Fetch a service.
  --help        Show this message and exit.

Example

autonomy fetch --local --alias oracle valory/oracle:0.1.0

Run

Usage: autonomy run [OPTIONS]

  Run the agent. Available for docker compose deployments only, not for kubernetes deployments.

Options:
  -p                          Ask for password interactively
  --password PASSWORD         Set password for key encryption/decryption
  --connections TEXT          The connection names to use for running the
                              agent. Must be declared in the agent's
                              configuration file.
  --env PATH                  Specify an environment file (default: .env)
  --install-deps              Install all the dependencies before running the
                              agent.
  --profiling INTEGER         Enable profiling, print profiling every amount
                              of seconds
  --memray                    Enable memray tracing, create a bin file with
                              the memory dump
  --exclude-connections TEXT  The connection names to disable for running the
                              agent. Must be declared in the agent's
                              configuration file.
  --aev                       Populate Agent configs from Environment
                              variables.
  --help                      Show this message and exit.

Example

autonomy run --aev
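
With --aev the agent's configuration values can be overridden from environment variables. Conceptually it works like the simplified sketch below; this is not the framework's actual resolution logic, and the variable name is hypothetical:

```python
import os

def resolve(config_key, default):
    """Prefer an environment variable over the config default (simplified)."""
    # e.g. "skill.oracle.url" -> "SKILL_ORACLE_URL"
    env_name = config_key.upper().replace(".", "_")
    return os.environ.get(env_name, default)

os.environ["SKILL_ORACLE_URL"] = "http://localhost:8545"  # hypothetical override
print(resolve("skill.oracle.url", "http://default:8545"))
```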

Scaffold

The scaffold tool lets users generate boilerplate code that acts as a template to speed up the development of packages.

Usage: autonomy scaffold [OPTIONS] COMMAND [ARGS]...

  Scaffold a package for the agent.

Options:
  -tlr, --to-local-registry  Scaffold skill inside a local registry.
  --with-symlinks            Add symlinks from vendor to non-vendor and
                             packages to vendor folders.
  --help                     Show this message and exit.

Commands:
  connection              Add a connection scaffolding to the configuration file and agent.
  contract                Add a contract scaffolding to the configuration file and agent.
  decision-maker-handler  Add a decision maker scaffolding to the configuration file and agent.
  error-handler           Add an error scaffolding to the configuration file and agent.
  fsm                     Add an ABCI skill scaffolding from an FSM specification.
  protocol                Add a protocol scaffolding to the configuration file and agent.
  skill                   Add a skill scaffolding to the configuration file and agent.

Example

autonomy scaffold connection my_connection