## Step 2/4

### AWS

1. Upload your public ssh key, creating a key pair (the one noted in the [Workstation](Workstation) section). Place the key pair name in `envs/your_env_folder/credentials.tf`, assigning it to `aws_key_name`.

`variable "aws_key_name" { default = "YOUR_KEY_NAME" }`

Next you'll create 2 user accounts: one that will manage the infrastructure, and a bot user that will handle uploading/downloading from s3 on the machines. This is so you don't place a key with elevated access on the server itself.

2. Each user/bot you create is issued an **AWS_ACCESS_KEY** and **AWS_SECRET_ACCESS_KEY**; these are needed in a moment, so make note of all of them.

3. Create a User (recommend admin/**AdministratorAccess**) that will manage the overall infrastructure.

4. Create a Bot user with only the **AmazonS3FullAccess** policy; it will be used to Download/Upload to s3 (a cli sketch for this follows after step 7).

5. If the User is **given the AdministratorAccess** managed policy, its access key will have all of the below policies by default and you can **skip to step 7**.

6. If **not using an AdministratorAccess** User, it **must** have the following **managed** policies:

**AmazonEC2FullAccess**: Create and Destroy ec2 instances/amis.

**AmazonVPCFullAccess**: Create and Destroy vpcs.

**AmazonS3FullAccess**: Upload and Download from S3.

- *TODO: Review whether **all** s3 actions are done by the bot or not*

7. Bot user **must** have the following **managed** policy:

**AmazonS3FullAccess**: Upload and Download from S3.
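
The console is the simplest way to do steps 3 and 4; if you prefer the aws cli, a rough sketch for the bot user looks like the following (the user name is an arbitrary example, and the admin user is analogous with `AdministratorAccess`):

```bash
# Create the bot user and attach the managed S3 policy
aws iam create-user --user-name s3-bot
aws iam attach-user-policy \
  --user-name s3-bot \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate its key pair; record the AccessKeyId and SecretAccessKey from the output
aws iam create-access-key --user-name s3-bot
```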

8. Place the keys from steps 3 and 4 into `envs/your_env_folder/credentials.tf`

Step 3 User account:

`variable "aws_access_key" { default = "AWS_ACCESS_KEY" }`

`variable "aws_secret_key" { default = "AWS_SECRET_ACCESS_KEY" }`

Step 4 Bot account:

`variable "aws_bot_access_key" { default = "AWS_ACCESS_KEY" }`

`variable "aws_bot_secret_key" { default = "AWS_SECRET_ACCESS_KEY" }`

9. TODO: Make notes on how/where to create an AWS bucket and assign its name to `aws_bucket_name`. Also research whether buckets are region specific. (A hedged cli sketch follows below.)

`variable "aws_bucket_name" { default = "your_bucket_name" }`

10. If you are only testing/creating a temporary infrastructure, or are completely new to AWS, you can continue on to the [Remote backend](#remote-backend-optional) or [Deploy](Deploy) section and come back to this later.

> The steps below assume a slight familiarity with AWS and are not required if only testing. What the following steps do is sticky a set of nameservers to a placeholder domain, so you only need to modify any new domain's nameservers once, persisting through `terraform destroy`.

> If you would like to sticky nameservers for multiple domains (aws uses many and changes them whenever a hosted zone is destroyed/created) you have 2 options: one using the aws cli and the other terraform. Both require keeping a single placeholder hosted zone with a reusable delegation set. NOTE: you are charged for each hosted zone in aws every month (I believe it's $0.50 each).
First option, using the aws cli (no terraform, less complicated overall):

- Create a "placeholder" [hosted zone](https://console.aws.amazon.com/route53/v2/hostedzones#) (the domain does not need to exist) and place the name in `envs/your_env_folder/vars.tf`, assigning it to `placeholder_hostzone`.

`variable "placeholder_hostzone" { default = "placeholder.com" }`

- Get the Hosted Zone ID (adjust the console table to show the full ID).

- With the aws cli tool (AWS CloudShell works), enter the following command, replacing YOUR_ZONE_ID with the Hosted Zone ID:

`aws route53 create-reusable-delegation-set --caller-reference="unique" --hosted-zone-id="YOUR_ZONE_ID"`

- Get the ID created by the command and place it in `envs/your_env_folder/vars.tf`, assigning it to `placeholder_reusable_delegationset_id`.

`variable "placeholder_reusable_delegationset_id" { default = "ID_CREATED_FROM_COMMAND" }`
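
If you'd rather do the lookups from the terminal as well, something along these lines should work (a sketch; it leans on the cli's `--query` option and the placeholder domain name):

```bash
# Look up the placeholder zone's ID (it may come back with a /hostedzone/ prefix)
aws route53 list-hosted-zones-by-name \
  --dns-name placeholder.com \
  --query 'HostedZones[0].Id' --output text

# Create the reusable delegation set and print only its ID for vars.tf
aws route53 create-reusable-delegation-set \
  --caller-reference "unique" \
  --hosted-zone-id "YOUR_ZONE_ID" \
  --query 'DelegationSet.Id' --output text
```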
Second option, using only terraform (no aws cli):

- Inside `envs/your_env_folder/vars.tf`, enter a "placeholder" domain name into `placeholder_hostzone` and return here after you've created your infrastructure.

`variable "placeholder_hostzone" { default = "placeholder.com" }`

- After the entire infra is created, you have a placeholder hosted zone with a reusable delegation set that we want to keep even after `terraform destroy`; to keep them, they must be removed from terraform's state. Run `terraform show` and retrieve the ID from the resource `module.main.aws_route53_delegation_set.dset[0]` (a narrower way to pull just that ID is sketched below).

- Inside `envs/your_env_folder/vars.tf`, fill in `placeholder_reusable_delegationset_id`.

`variable "placeholder_reusable_delegationset_id" { default = "ID_FOUND_FROM_TERRAFORM_SHOW" }`

- Then run the following two commands to remove both the zone and the delegation set from state, so they are not destroyed on `terraform destroy`.

`terraform state rm module.main.aws_route53_delegation_set.dset[0]`

`terraform state rm module.main.aws_route53_zone.default_host[0]`
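
To avoid scrolling through the full `terraform show` output, something like this narrows it down (a sketch; the resource addresses are the ones named above):

```bash
# List only the route53 resources terraform is tracking
terraform state list | grep route53

# Show just the delegation set; its "id" attribute is the value for vars.tf
terraform state show 'module.main.aws_route53_delegation_set.dset[0]'
```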

After using either method, a placeholder hosted zone with 4 nameservers will be in route53 that we can point any domain in our registrars to, and not worry about changing them again. It's rather convoluted compared to digital ocean's `ns1.digitalocean.com...` convention, but it gets the job done.

<br>

---

<br>
### Digital Ocean

1. Upload your public ssh key (the one noted in the [Workstation](Workstation) section) to [your account](https://cloud.digitalocean.com/account/security), creating a ssh fingerprint. Place this in `envs/your_env_folder/credentials.tf`, assigning it to `do_ssh_fingerprint`.

`variable "do_ssh_fingerprint" { default = "long_ssh_fingerprint" }`

Next you'll create a token and a set of keys. The token will manage the infrastructure (creating droplets/vpcs/dns), and the keys are for digital ocean's s3-compatible object storage (Spaces) that will handle uploading/downloading files on the machines.

2. Spaces will provide a **SPACES_ACCESS_KEY** and **SPACES_SECRET_KEY**, both needed later, in contrast to the singular digital ocean api token.

3. Go to the [API/token](https://cloud.digitalocean.com/account/api/tokens) section and click `Generate New Token`, giving it Read & Write access; this token will manage the overall infrastructure. Place it in `envs/your_env_folder/credentials.tf`, assigning it to `do_token`.

`variable "do_token" { default = "YOUR_DIGITAL_OCEAN_TOKEN" }`

4. Create a [Space](https://cloud.digitalocean.com/spaces) (s3 object storage) that will be used to Download/Upload to s3. Note the name and the region (e.g. `nyc3`) it is created in; this is important, as we don't want the object storage in a different region than the one we intend to launch our machines in. If a set of keys is not given initially, click `Manage Keys` then `Generate New Key`. Place these keys, the name, and the region in `envs/your_env_folder/credentials.tf`, assigning them to `do_spaces_name`, `do_spaces_region`, `do_spaces_access_key` and `do_spaces_secret_key` respectively.

`variable "do_spaces_name" { default = "SPACES_NAME" }`

`variable "do_spaces_region" { default = "SPACES_REGION" }`

`variable "do_spaces_access_key" { default = "SPACES_ACCESS_KEY" }`

`variable "do_spaces_secret_key" { default = "SPACES_SECRET_KEY" }`

<br>

---

<br>

#### Remote backend (optional)

Now that you've configured your cloud providers and their s3 object storage, we can optionally store terraform's `terraform.tfstate` state file remotely instead of on our local workstation. This allows the state file to be "backed up", enables cross-team collaboration, and even lets you access it from another machine (with spaces access).

TODO: Explain remote backend for both AWS and Digital Ocean. (A hedged sketch follows below.)
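
Until that write-up exists, here is a rough sketch of what an s3-style backend file tends to look like; every value (bucket, key, region, the Spaces endpoint) is a placeholder, and the exact option names vary between terraform versions, so check the s3 backend documentation before relying on it:

```bash
# Sketch only: write a backend.tf next to your other env files, then run `terraform init`
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "your_bucket_name"   # AWS: the bucket from step 9; DO: the Space name
    key    = "terraform.tfstate"  # path of the state object inside the bucket/Space
    region = "us-east-2"          # AWS region for the bucket

    # Digital Ocean Spaces only: point the s3 backend at the Spaces endpoint and
    # relax the AWS-specific checks (uncomment and adjust the endpoint/region)
    # endpoint                    = "https://nyc3.digitaloceanspaces.com"
    # skip_credentials_validation = true
    # skip_metadata_api_check     = true
  }
}
EOF

# Credentials are usually passed via environment variables rather than committed to the repo
export AWS_ACCESS_KEY_ID="ACCESS_KEY"       # bot key (AWS) or SPACES_ACCESS_KEY (DO)
export AWS_SECRET_ACCESS_KEY="SECRET_KEY"   # bot secret (AWS) or SPACES_SECRET_KEY (DO)
terraform init
```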

<br>

---

Now that your [workstation](workstation) is set up and you've [configured your cloud provider](cloud-provider), it's time to [configure an external domain](domain).
\ No newline at end of file |