# autonomy deploy

Build or run agent service deployments.

This command group provides functionalities for building service deployments, running locally stored service deployments, and running service deployments defined in the on-chain protocol. See the appropriate subcommands for more information.

## Options

- `--env-file FILE` - File containing environment variable mappings.
## Examples

If you have a `.env` file in the working directory, `autonomy deploy` will load it automatically. If the file is not present in the working directory, you can provide the path to the file using the `--env-file` flag:

```bash
autonomy deploy --env-file <path_to_dotenv> COMMAND [ARGS]
```
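As a rough illustration of what loading such a file involves (a simplified sketch, not the framework's actual loader), a `.env` file is just a set of `KEY=VALUE` lines mapped into the process environment:

```python
def load_dotenv_file(path: str) -> dict:
    """Parse KEY=VALUE lines from a .env-style file into a mapping.

    Simplified sketch: ignores blank lines and comments, and strips
    surrounding double quotes. The framework's own loader may handle
    quoting and edge cases differently.
    """
    env = {}
    with open(path) as fp:
        for line in fp:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env
```

The resulting mapping would then be applied with `os.environ.update(...)` before the deployment reads its configuration.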
You can also use a `json` file to load the environment variables. In a `json` file you can use either `json`-serialized strings, like

```json
{
    "ALL_PARTICIPANTS": "[\"0x0000000000000000000000000000000000000000\"]"
}
```

or `json` objects:

```json
{
    "ALL_PARTICIPANTS": ["0x0000000000000000000000000000000000000000"]
}
```

The framework will handle both of these cases automatically.

```bash
autonomy deploy --env-file <path_to_json> COMMAND [ARGS]
```
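To see why the two styles are equivalent, here is a minimal sketch (an illustration only, not the framework's implementation) of normalizing a JSON env file so that serialized strings and plain objects produce identical environment variables:

```python
import json


def load_json_env_file(path: str) -> dict:
    """Normalize a JSON env file into string-valued variables.

    Values that are already strings (JSON-serialized payloads) are kept
    as-is; lists and objects are serialized, so both styles shown above
    end up identical.
    """
    with open(path) as fp:
        raw = json.load(fp)
    return {
        key: value if isinstance(value, str) else json.dumps(value)
        for key, value in raw.items()
    }
```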
## autonomy deploy build

Build a service deployment.

This command must be executed within a service folder, that is, a folder containing the service configuration file (`service.yaml`). The deployment will be created in the subfolder `./abci_build`.

### Usage

```bash
autonomy deploy build [OPTIONS] [KEYS_FILE]
```
### Options

- `--o PATH` - Path to output directory.
- `--n INTEGER` - Number of agents.
- `--docker` - Use docker as a backend.
- `--kubernetes` - Use kubernetes as a backend.
- `--dev` - Create development environment.
- `--log-level [INFO|DEBUG|WARNING|ERROR|CRITICAL]` - Logging level for runtime.
- `--packages-dir PATH` - Path to packages directory (use with `dev` mode).
- `--open-aea-dir PATH` - Path to open-aea repo (use with `dev` mode).
- `--open-autonomy-dir PATH` - Path to open-autonomy repo (use with `dev` mode).
- `--aev` - Apply environment variables when loading the service config.
- `--use-hardhat` - Include a hardhat node in the deployment setup.
- `--use-acn` - Include an ACN node in the deployment setup.
- `-ltm, --local-tm-setup` - Use local tendermint chain setup.
- `--image-version TEXT` - Define runtime image version.
- `--agent-cpu-request FLOAT` - Set agent CPU usage request.
- `--agent-memory-request INTEGER` - Set agent memory usage request.
- `--agent-cpu-limit FLOAT` - Set agent CPU usage limit.
- `--agent-memory-limit INTEGER` - Set agent memory usage limit.
- `--remote` - Use a remote registry.
- `--local` - Use a local registry.
- `-p` - Ask for password interactively.
- `--help` - Show the help message and exit.
### Examples

```bash
autonomy deploy build keys.json -ltm
```

Builds a service deployment using the keys stored in the file `keys.json` and a local tendermint setup (`-ltm`). By default, the deployment will be generated for as many agents as there are keys stored in `keys.json`. If no file name is provided, the command searches for the file `keys.json`.

If you are defining `keys.json` manually, make sure to include the `ledger` property for each key object. This property is used when writing key files to the deployment environment. If you don't specify it, `ethereum` will be used as the default ledger identifier, so if you're running an agent against a different ledger (e.g. `solana`) you'll have to specify this property manually, like this:

```json
[
    {
        "address": "Solanaaddress",
        "private_key": "Solanaprivatekey",
        "ledger": "solana"
    }
]
```

If the keys are generated using the `autonomy generate-key` command, the `ledger` property will be included automatically.
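The defaulting behaviour described above can be sketched as follows (illustrative only; the actual loading logic lives inside the framework):

```python
import json

DEFAULT_LEDGER = "ethereum"


def load_keys(path: str) -> list:
    """Load a keys file, filling in the default ledger identifier.

    Entries without an explicit "ledger" property are treated as
    ethereum keys, mirroring the documented default.
    """
    with open(path) as fp:
        keys = json.load(fp)
    for entry in keys:
        entry.setdefault("ledger", DEFAULT_LEDGER)
    return keys
```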
### Private key security

When building deployments, you can use password-protected private keys (e.g. generated using `autonomy generate-key LEDGER --password PASSWORD`) to avoid exposing them.

#### Docker Compose

In a `docker-compose` deployment, you can just export the `OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD` environment variable before running the deployment, like:

```bash
export OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD=PASSWORD
autonomy deploy run --build-dir PATH_TO_BUILD_DIR
```

or

```bash
export OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD=PASSWORD
docker-compose -f PATH_TO_BUILD_DIR/docker-compose.yaml up
```

#### Kubernetes

In a `kubernetes` deployment, you'll have to export the `OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD` variable when building the deployment, not when running it:

```bash
export OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD=PASSWORD
autonomy deploy build (...)
```
### Customize resources

You can customize the resource values for the agent nodes using the `--agent-cpu-limit`, `--agent-memory-limit`, `--agent-cpu-request` and `--agent-memory-request` flags, or using the `AUTONOMY_AGENT_MEMORY_LIMIT`, `AUTONOMY_AGENT_CPU_LIMIT`, `AUTONOMY_AGENT_MEMORY_REQUEST` and `AUTONOMY_AGENT_CPU_REQUEST` environment variables.

```bash
autonomy deploy build --agent-cpu-limit 1.0 --agent-memory-limit 2048 --agent-memory-request 1024
```

will produce agent nodes with the following resource constraints

```yaml
xyz_abci_0:
  mem_reservation: 1024M
  mem_limit: 2048M
  cpus: 1.0
```

on a docker-compose deployment, and

```yaml
resources:
  limits:
    cpu: '1.0'
    memory: 2048Mi
  requests:
    cpu: '0.5'
    memory: 1024Mi
```

on a kubernetes deployment.
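The mapping from the resource flags to the docker-compose fragment above is direct; a small sketch (illustrative only, not the framework's generator):

```python
def compose_resources(cpu_limit: float, memory_limit: int, memory_request: int) -> dict:
    """Map the agent resource flags onto docker-compose service keys.

    Memory values are integers in megabytes, as in the CLI flags;
    docker-compose expects them suffixed with "M".
    """
    return {
        "mem_reservation": f"{memory_request}M",
        "mem_limit": f"{memory_limit}M",
        "cpus": cpu_limit,
    }
```

For instance, `compose_resources(1.0, 2048, 1024)` yields the same values as the docker-compose fragment shown above.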
## autonomy deploy run

Run a locally stored service deployment.

This command is a wrapper around `docker-compose up` to run the service deployment. To execute this command you need to be located within the deployment environment subfolder (`./abci_build`), or specify it with the `--build-dir` option.

### Usage

```bash
autonomy deploy run [OPTIONS]
```

### Options

- `--build-dir PATH` - Path where the deployment is located.
- `--no-recreate` - If containers already exist, do not recreate them.
- `--remove-orphans` - Remove containers for services not defined in the Compose file.
- `--detach` - Run the service in the background.
- `--help` - Show the help message and exit.
### Examples

```bash
autonomy deploy run --build-dir ./abci_build
```

Runs the service deployment stored locally in the directory `./abci_build`.

To provide a password for the private keys:

```bash
export OPEN_AUTONOMY_PRIVATE_KEY_PASSWORD=PASSWORD
autonomy deploy run --build-dir ./abci_build
```
## autonomy deploy from-token

Run a service deployment minted in the on-chain protocol.

This command allows you to deploy services directly, without the need to explicitly fetch them locally (also known as "one-click deployment"). The command requires the `TOKEN_ID`, which can be checked in the Autonolas Protocol web app. See the mint a service on-chain guide for more information.

To understand how to use the various chain profiles, refer to the Chain Selection section of the `autonomy mint` command documentation.

### Usage

```bash
autonomy deploy from-token [OPTIONS] TOKEN_ID KEYS_FILE
```

### Options

- `--n INTEGER` - Number of agents to include in the build.
- `--skip-image` - Skip building images.
- `--aev` - Apply environment variables when loading the service config.
- `--docker` - Use docker as a backend.
- `--agent-cpu-request FLOAT` - Set agent CPU usage request.
- `--agent-memory-request INTEGER` - Set agent memory usage request.
- `--agent-cpu-limit FLOAT` - Set agent CPU usage limit.
- `--agent-memory-limit INTEGER` - Set agent memory usage limit.
- `--kubernetes` - Use kubernetes as a backend.
- `--no-deploy` - If set, the deployment won't run automatically.
- `--use-ethereum` - Use the `ethereum` chain to resolve the token id.
- `--use-goerli` - Use the `goerli` chain to resolve the token id.
- `--use-custom-chain` - Use a custom chain to resolve the token id.
- `--use-local` - Use a local chain to resolve the token id.
- `--help` - Show the help message and exit.
### Examples

```bash
autonomy deploy from-token --use-goerli 2 keys.json
```

Runs the service deployment registered with `TOKEN_ID=2` in the Görli on-chain protocol. The deployment will be run for as many agents as there are keys defined in the `keys.json` file.
## autonomy deploy stop

Stop a deployment running in the background. If you use the `--detach` flag on `autonomy deploy run` or `autonomy deploy from-token`, the service will run in the background; you can stop it using the `autonomy deploy stop` command.

### Usage

```bash
autonomy deploy stop --build-dir BUILD_DIR
```

### Options

- `--build-dir PATH` - Path to the deployment build directory.
- `--help` - Show the help message and exit.