## Step 2/4

### AWS

1. Upload your public SSH key (the one noted in the [Workstation](Workstation) section), creating a key pair. Place the key pair name in `envs/your_env_folder/credentials.tf`, assigning it to `aws_key_name`.

`variable "aws_key_name" { default = "YOUR_KEY_NAME" }`

Next you'll create two user accounts: one that will manage the infrastructure, and a bot user that will handle uploading/downloading from s3 on the machines. This way you don't place a key with elevated access on the server itself.

> Each user/bot created is provided an **AWS_ACCESS_KEY** and **AWS_SECRET_ACCESS_KEY** that will be needed in a moment; make note of all of them.

2. Create a User (recommended admin/**AdministratorAccess**) that will manage the overall infrastructure.

- If the User is **given the AdministratorAccess** managed policy, your access key will have all of the below policies by default and does not need further configuration.

- If **not using an AdministratorAccess** User, it **must** have the following **managed** policies:

> **AmazonEC2FullAccess**: Create and Destroy ec2 instances/amis.

> **AmazonVPCFullAccess**: Create and Destroy vpcs.

> **AmazonS3FullAccess**: Upload and Download from S3.

> *TODO: Review if **all** s3 actions are done by the bot or not*

3. Create a User that will be a Bot user that will download/upload to s3.

- Bot user **must** have the following **managed** policy:

> **AmazonS3FullAccess**: Upload and Download from S3.

4. Place the keys from steps 2 and 3 into `envs/your_env_folder/credentials.tf`.

- Step 2 User account

`variable "aws_access_key" { default = "AWS_ACCESS_KEY" }`

`variable "aws_secret_key" { default = "AWS_SECRET_ACCESS_KEY" }`

- Step 3 Bot account

`variable "aws_bot_access_key" { default = "AWS_ACCESS_KEY" }`

`variable "aws_bot_secret_key" { default = "AWS_SECRET_ACCESS_KEY" }`

5. Create an AWS S3 bucket and assign the name to `aws_bucket_name`.

*TODO:* Make notes how/where. Also research if they are region specific.

`variable "aws_bucket_name" { default = "your_bucket_name" }`

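Until the TODO above is filled in: bucket names are globally unique, but each bucket lives in a single region, so create it in the region you intend to deploy to. A sketch with the aws cli (the bucket name and region are placeholders):

```
# Create the bucket in your deployment region (placeholder values).
# For us-east-1, omit --create-bucket-configuration entirely.
aws s3api create-bucket \
  --bucket your_bucket_name \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```
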
6. If you are only testing/creating a temporary infrastructure, or are completely new to AWS, you can continue on to the [Remote backend](#remote-backend-optional) or [Deploy](Deploy) section and come back to this later.

> The below steps assume slight familiarity with AWS and are not required if only testing. What the following steps do is sticky a set of nameservers to a placeholder domain, so you only need to modify any new domain's nameservers once, persisting through `terraform destroy`.

> If you would like to sticky nameservers for multiple domains (aws uses many and changes them whenever a hosted zone is destroyed/created), you have 2 options: one using the aws cli and the other terraform. Both require keeping a single placeholder hosted zone with a reusable delegation set. NOTE: you are charged for each hosted zone in aws every month (I believe it's $0.50 each).

First option using the aws cli (no terraform, less complicated overall)

- Create a "placeholder" [hosted zone](https://console.aws.amazon.com/route53/v2/hostedzones#) (the domain does not need to exist) and place the name in `envs/your_env_folder/vars.tf`, assigning it to `placeholder_hostzone`.

`variable "placeholder_hostzone" { default = "placeholder.com" }`

- Get the Hosted Zone ID (adjust the table to get the full ID).

- With the aws cli tool (AWS CloudShell works), enter the following command, replacing YOUR_ZONE_ID with the Hosted Zone ID:

`aws route53 create-reusable-delegation-set --caller-reference="unique" --hosted-zone-id="YOUR_ZONE_ID"`

- Finally, get the ID created from the command and place it in `envs/your_env_folder/vars.tf`, assigning it to `placeholder_reusable_delegationset_id`.

`variable "placeholder_reusable_delegationset_id" { default = "ID_CREATED_FROM_COMMAND" }`
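The ID lookup and delegation-set creation can also be scripted in one go; a sketch, assuming the aws cli is configured and `placeholder.com` is your placeholder zone:

```
# Fetch the full Hosted Zone ID of the placeholder zone.
ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name "placeholder.com" \
  --query 'HostedZones[0].Id' --output text)

# Strip the "/hostedzone/" prefix and create the reusable delegation set,
# printing only its ID (the value for placeholder_reusable_delegationset_id).
# The caller reference must be unique, hence the timestamp.
aws route53 create-reusable-delegation-set \
  --caller-reference "unique-$(date +%s)" \
  --hosted-zone-id "${ZONE_ID##*/}" \
  --query 'DelegationSet.Id' --output text
```
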
Second option using only terraform (no aws cli)

- Inside `envs/your_env_folder/vars.tf`, enter a "placeholder" domain name into `placeholder_hostzone` and return here after you've created your infrastructure.

`variable "placeholder_hostzone" { default = "placeholder.com" }`

- After the entire infra is created, run `terraform show` and retrieve the ID from the resource `module.main.aws_route53_delegation_set.dset[0]`.

- Inside `envs/your_env_folder/vars.tf`, fill in `placeholder_reusable_delegationset_id`.

`variable "placeholder_reusable_delegationset_id" { default = "ID_FOUND_FROM_TERRAFORM_SHOW" }`

- To finish, run the following two commands to remove both the zone and delegation set from state, so they are not destroyed on `terraform destroy` (the addresses are quoted so the shell doesn't expand the brackets):

`terraform state rm 'module.main.aws_route53_delegation_set.dset[0]'`

`terraform state rm 'module.main.aws_route53_zone.default_host[0]'`
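The state surgery above can be done in sequence; a sketch (resource addresses as given above):

```
# Pull the delegation set ID out of state (the value for
# placeholder_reusable_delegationset_id in vars.tf).
terraform state show 'module.main.aws_route53_delegation_set.dset[0]' | grep ' id '

# Then detach both resources from state so terraform destroy leaves them alone.
terraform state rm 'module.main.aws_route53_delegation_set.dset[0]'
terraform state rm 'module.main.aws_route53_zone.default_host[0]'
```
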
After using either method, a placeholder hosted zone with 4 nameservers will be in route53 that we can point any domain in our registrars to and not worry about changing them again. It's rather convoluted compared to Digital Ocean's `ns1.digitalocean.com...` convention, but it gets the job done.

### Digital Ocean

1. Upload your public SSH key (the one noted in the [Workstation](Workstation) section) to [your account](https://cloud.digitalocean.com/account/security), creating an ssh fingerprint. Place this in `envs/your_env_folder/credentials.tf`, assigning it to `do_ssh_fingerprint`.

`variable "do_ssh_fingerprint" { default = "long_ssh_fingerprint" }`
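If you need the fingerprint locally rather than copying it from the dashboard, OpenSSH can print it; a sketch, assuming your key is at `~/.ssh/id_ed25519.pub` (Digital Ocean displays the MD5 colon-separated format, without the leading `MD5:` label):

```shell
# Print the MD5 fingerprint of your public key; adjust the path to your own key.
# Use the colon-separated hex portion (drop the leading "MD5:" prefix).
ssh-keygen -E md5 -lf ~/.ssh/id_ed25519.pub
```
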
Next you'll create a token and a set of keys. The token will manage the infrastructure, creating droplets/vpcs/dns, and the keys are for Digital Ocean's s3-compatible object storage (Spaces) that will handle uploading/downloading files on the machines.

> Spaces will provide a **SPACES_ACCESS_KEY** and **SPACES_SECRET_KEY**, while the digital ocean api token is a singular token value.

2. Go to the [API/token](https://cloud.digitalocean.com/account/api/tokens) section and click `Generate New Token` with Read & Write access; this token will manage the overall infrastructure. Place it in `envs/your_env_folder/credentials.tf`, assigning it to `do_token`.

`variable "do_token" { default = "YOUR_DIGITAL_OCEAN_TOKEN" }`

3. Create a [Space](https://cloud.digitalocean.com/spaces) (s3 object storage) that will be used to download/upload to s3. Note the name and the region (e.g. `nyc3`) it is created in; this is important, as we don't want the object storage in a different region than the one we intend to launch our machines in. If a set of keys is not initially given, click `Manage Keys` then `Generate New Key`. Place these keys, the name, and the region in `envs/your_env_folder/credentials.tf`, assigning them to `do_spaces_name`, `do_spaces_region`, `do_spaces_access_key`, and `do_spaces_secret_key` respectively.

`variable "do_spaces_name" { default = "SPACES_NAME" }`

`variable "do_spaces_region" { default = "SPACES_REGION" }`

`variable "do_spaces_access_key" { default = "SPACES_ACCESS_KEY" }`

`variable "do_spaces_secret_key" { default = "SPACES_SECRET_KEY" }`

<br>

#### Remote backend (optional)

Now that you've configured your cloud providers and their s3 object storage, we can optionally store terraform's `terraform.tfstate` state file remotely instead of on our local workstation. This keeps the state file in the s3 bucket, allowing cross-team collaboration and access from another machine (that has s3 access).

TODO: Explain remote backend for both AWS and Digital Ocean.
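As a starting point until the TODO is filled in, a minimal sketch of an s3 backend block (bucket names, key paths, and the Spaces endpoint are placeholders; Spaces reuses terraform's s3 backend with a custom endpoint):

```hcl
# Sketch only — backend blocks cannot reference variables, so the values
# below are hardcoded placeholders. Run `terraform init` after adding.

# AWS S3 backend:
terraform {
  backend "s3" {
    bucket = "your_bucket_name"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Digital Ocean Spaces (s3-compatible) backend — note the custom endpoint.
# The region value is ignored by Spaces but required by the backend:
#
# terraform {
#   backend "s3" {
#     bucket                      = "SPACES_NAME"
#     key                         = "terraform.tfstate"
#     region                      = "us-east-1"
#     endpoint                    = "https://nyc3.digitaloceanspaces.com"
#     skip_credentials_validation = true
#     skip_metadata_api_check     = true
#   }
# }
```

Credentials for the backend come from the usual AWS environment variables (`AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`), set to your Spaces keys when targeting Digital Ocean.
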
<br>

Now that your [workstation](workstation) is set up and you've [configured your cloud provider](cloud-provider), it's time to [configure an external domain](domain).

<br>

---

- 1/4 ~~[Setup workstation](workstation)~~
- 2/4 ~~[Configure a cloud provider](cloud-provider)~~ <- Current page
- 3/4 [Configure an external domain](domain)
- 4/4 [Adjust settings and deploy](deploy)