TPA fully supports provisioning production clusters on AWS EC2.
To use the AWS API, you must:
The IAM user should have at least the following set of permissions, so that tpaexec can use it to provision EC2 resources.
The service is physically subdivided into regions and availability zones. An availability zone is represented by a region code followed by a single letter, e.g., eu-west-1a (but that name may refer to different locations for different AWS accounts, and there is no way to coordinate the interpretation between accounts).
AWS regions are completely isolated from each other and share no resources. Availability zones within a region are physically separated, and logically mostly isolated, but are connected by low-latency links and are able to share certain networking resources.
All networking configuration in AWS happens in the context of a Virtual Private Cloud (VPC) within a region. Within a VPC, you can create subnets, each tied to a specific availability zone, along with internet gateways, routing tables, and so on.
You can create any number of Security Groups to configure rules for what inbound and outbound traffic is permitted to instances (in terms of protocol, a destination port range, and a source or destination IP address range).
AWS EC2 offers a variety of instance types with different hardware configurations at different price/performance points. Within a subnet in a particular availability zone, you can create EC2 instances based on a distribution image known as an AMI, and attach one or more EBS volumes to provide persistent storage to the instance. You can SSH to the instances by registering an SSH public key.
Instances are always assigned a private IP address within their subnet. Depending on the subnet configuration, they may also be assigned an ephemeral public IP address (which is lost when the instance is shut down, and a different ephemeral IP is assigned when it is started again). You can instead assign a static region-specific routable IP address known as an Elastic IP to any instance.
For an instance to be reachable from the outside world, it must not only have a routable IP address, but the VPC's networking configuration (internet gateway, routing tables, security groups) must also align to permit access.
Here's a brief description of the AWS-specific settings that you can customise with tpaexec configure, or define directly in config.yml.
You can specify one or more regions for the cluster to use with --regions. TPA will generate the required vpc entries associated with each of them, and will distribute locations evenly across these regions, using different availability zones where possible.
Regions are different from locations: each location belongs to a region (and to an availability zone within that region). Regions are AWS-specific; locations are cluster objects.
Note: When specifying multiple regions, you must manually edit the network configuration:

- ec2_vpc entries must have non-overlapping CIDR networks to allow the use of AWS VPC peering. By default, TPA sets all CIDRs to 10.33.0.0/16. See VPC for more information.
- Each location must be updated with a subnet that matches the cidr it belongs to. See Subnets for more information.
- TPA creates security groups with basic rules under cluster_rules, and those need to be updated to match the ec2_vpc cidr for each subnet cidr. See Security groups for more information.
- VPC peering must be set up manually before tpaexec deploy. We recommend creating the VPCs and required VPC peerings before running tpaexec configure, and using vpc-id in config.yml. See VPC for more information.
You must specify a VPC to use:
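A minimal sketch of what this looks like in config.yml, using the Name and default CIDR mentioned elsewhere in this document:

```yaml
ec2_vpc:
  Name: Test
  cidr: 10.33.0.0/16
```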
This is the default configuration, which creates a VPC named Test with the given CIDR if it does not exist, or uses the existing VPC otherwise.
To create a VPC, you must specify both the Name and the cidr. If you specify only a VPC Name, TPA will fail if a matching VPC does not exist.
If TPA creates a VPC,
tpaexec deprovision will attempt to remove it, but will leave any
pre-existing VPC alone. (Think twice before creating new VPCs, because
AWS has a single-digit default limit on the number of VPCs per account.)
If you need more fine-grained matching, or to specify different VPCs in different regions, you can use the expanded form:
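For example, keyed by region (a sketch; the VPC name and vpc-id are hypothetical placeholders):

```yaml
ec2_vpc:
  eu-west-1:
    Name: Test
    cidr: 10.33.0.0/16
  us-east-1:
    filters:
      vpc-id: vpc-abcd1234
```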
You must specify an AMI to use:
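For example (a sketch; the image name is a hypothetical placeholder — use an AMI available in your region):

```yaml
ec2_ami:
  Name: debian-10-amd64-example
```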
You can add filter specifications for more precise matching:
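For example (a sketch; the image name is hypothetical, and the filter names follow the EC2 DescribeImages API):

```yaml
ec2_ami:
  Name: debian-10-amd64-example
  Owner: self
  filters:
    architecture: x86_64
```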
tpaexec configure will select a suitable image for you based on the distribution you choose.
This platform supports Debian 9 (stretch), Red Hat Enterprise Linux 7, Rocky 8, Ubuntu 16.04 (Xenial), and SUSE Linux Enterprise Server 15.
Every instance must specify its subnet (in CIDR form, or as a subnet-xxx id). You may optionally specify the name and availability zone for each subnet that we create:
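For example (a sketch; the CIDR, availability zone, and name are placeholders):

```yaml
ec2_vpc_subnets:
  eu-west-1:
    10.33.161.64/26:
      az: eu-west-1b
      Name: example1
```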
By default, we create a security group for the cluster. To use one or more existing security groups, set:
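For example (a sketch, assuming per-region group-name filters; the group name is hypothetical):

```yaml
ec2_groups:
  eu-west-1:
    group-name:
      - postgres_group1
```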
If you want to customise the rules in the default security group, set
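For example (a sketch; the source IP range in the second rule is a placeholder):

```yaml
cluster_rules:
  - {proto: tcp, from_port: 22, to_port: 22, cidr_ip: 0.0.0.0/0}
  - {proto: tcp, from_port: 0, to_port: 65535, cidr_ip: 10.33.0.0/24}
```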
This example permits ssh (port 22) from any address, and TCP connections on any port from specific IP ranges. (Note: from_port and to_port define a numeric range of ports, not a source and destination.)
If you set up custom rules or use existing security groups, you must ensure that instances in the cluster are allowed to communicate with each other as required (e.g., allow tcp/5432 for Postgres).
By default, we create internet gateways for every VPC, unless you set:
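For example (a sketch; the setting name is an assumption on our part — check your TPA version's documentation):

```yaml
ec2_instance_reachability: private
```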
For more fine-grained control, you can set:
TPA requires access to an S3 bucket to provision an AWS cluster. This bucket is used to temporarily store files such as SSH host keys, but may also be used for other cluster data (such as backups).
By default, TPA will use an S3 bucket named
for any clusters you provision. (If the bucket does not exist, you will be asked to
confirm that you want TPA to create it for you.)
To use an existing S3 bucket instead, set
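For example, in config.yml (the bucket name is a placeholder):

```yaml
cluster_bucket: name-of-your-bucket
```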
(You can also set
cluster_bucket: auto to accept the default bucket name without
the confirmation prompt.)
TPA will never remove any S3 buckets when you deprovision the cluster. To remove the bucket yourself, run:
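For example, using the AWS CLI (the bucket name is a placeholder; note that --force first deletes every object in the bucket before removing the bucket itself):

```shell
aws s3 rb s3://name-of-your-bucket --force
```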
The IAM user you are using to provision the instances must have read and write access to this bucket. During provisioning, tpaexec will provide instances with read-only access to the cluster_bucket through the instance profile.