Saturday, May 11, 2024

How to install LocalStack with Docker and play with Terraform


First things first:

# What is LocalStack?

[LocalStack](https://www.localstack.cloud) is a cloud service emulator that runs in a single container on your laptop or in your CI environment. With LocalStack, you can run your AWS applications or Lambdas entirely on your local machine without connecting to a remote cloud provider! Whether you are testing complex CDK applications or Terraform configurations, or just beginning to learn about AWS services, LocalStack helps speed up and simplify your testing and development workflow.

LocalStack supports many AWS [services](https://docs.localstack.cloud/user-guide/aws/feature-coverage/).
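
Once the container is running (we start it further down), you can check which services your LocalStack instance actually exposes through its health endpoint. A minimal check, assuming a reasonably recent image that serves `/_localstack/health`:

```bash
# List the services exposed by the running container and their status
# (run this after starting LocalStack as shown below).
$ curl -s http://localhost:4566/_localstack/health
```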

**Note:** Data is not persisted across restarts in **LocalStack Community Edition**; persistence is available in the **Pro edition**.
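
If you do need data to survive restarts, the Pro edition can persist state to a mounted volume. The sketch below is an assumption based on the documented `PERSISTENCE` flag and the `/var/lib/localstack` state directory; you also have to supply your Pro credentials (see the LocalStack docs):

```bash
# Pro edition only: keep LocalStack state in ./volume on the host.
# Pro credentials must also be supplied (see the LocalStack docs).
$ docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 \
    -e PERSISTENCE=1 \
    -v "$(pwd)/volume:/var/lib/localstack" \
    localstack/localstack
```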

Let's create our test folder:

```bash
$ mkdir -p terraform-test
$ cd terraform-test
``` 



## Recommended Tools

**Create a virtual environment**
```bash
$ python -m venv venv
```

**Activate venv**
```bash
$ source venv/bin/activate
```

**Install awslocal and tflocal**
```bash
$ pip install awscli-local
$ pip install terraform-local
```
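
Both wrappers only forward to the underlying tools, so this assumes the `aws` CLI and the `terraform` binary are already installed. A quick sanity check:

```bash
# awslocal and tflocal are thin wrappers around the aws CLI and terraform,
# so these simply print the versions of the underlying tools.
$ awslocal --version
$ tflocal --version
```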

**Start LocalStack with Docker**
```bash
$ docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
```
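
This runs LocalStack in the foreground with the logs attached to your terminal. If you prefer to keep that terminal free, a detached variant (same image and ports, only `-d` and `--name` added) works too:

```bash
# Run LocalStack in the background and follow its logs separately
$ docker run -d --name localstack -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
$ docker logs -f localstack

# Stop and remove the container when you are done
$ docker rm -f localstack
```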

Create a provider.tf file and configure the LocalStack endpoints:

```hcl
provider "aws" {
  access_key                  = "fake-access-key"
  secret_key                  = "fake-secret-key"
  region                      = "us-east-1"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true  

  endpoints {
    apigateway     = "http://localhost:4566"
    apigatewayv2   = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    es             = "http://localhost:4566"
    elasticache    = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    keyspaces      = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    rds            = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    route53        = "http://localhost:4566"
    s3             = "http://localhost:4566"
    s3api          = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
    events         = "http://localhost:4566"
    scheduler      = "http://localhost:4566"
    opensearch     = "http://localhost:4566"
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.47.0"
    }
  }
}
```
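
As a side note, the dummy credentials don't have to live in provider.tf; the standard AWS environment variables work just as well, since LocalStack accepts any non-empty values:

```bash
# Standard AWS CLI/SDK environment variables; LocalStack does not validate them
$ export AWS_ACCESS_KEY_ID="test"
$ export AWS_SECRET_ACCESS_KEY="test"
$ export AWS_DEFAULT_REGION="us-east-1"
```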

Now let's create a bucket. Create a bucket.tf file:

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "my-test-bucket"

  tags = {
    Name        = "Bucket for test"
    Project     = "Test Project"
    Environment = "Test Environment"
  }
}
```

Now initialize the working directory with tflocal:
```bash
$ tflocal init
```
output:
```bash
Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.47.0"...
- Installing hashicorp/aws v5.47.0...
- Installed hashicorp/aws v5.47.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```

Validate the configuration:
```bash
$ tflocal validate
```

output:
```bash
Success! The configuration is valid.
```

Review the execution plan:
```bash
$ tflocal plan
```

output:
```bash
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.bucket will be created
  + resource "aws_s3_bucket" "bucket" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = "my-test-bucket"
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Test Environment"
          + "Name"        = "Bucket for test"
          + "Project"     = "Test Project"
        }
      + tags_all                    = {
          + "Environment" = "Test Environment"
          + "Name"        = "Bucket for test"
          + "Project"     = "Test Project"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Warning: Invalid Attribute Combination
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on provider.tf line 1, in provider "aws":
│    1: provider "aws" {
│ 
│ Only one of the following attributes should be set: "endpoints[0].s3", "endpoints[0].s3api"
│ 
│ This will be an error in a future release.
╵
╷
│ Warning: AWS account ID not found for provider
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on provider.tf line 1, in provider "aws":
│    1: provider "aws" {
│ 
│ See https://registry.terraform.io/providers/hashicorp/aws/latest/docs#skip_requesting_account_id for implications.
╵

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply"
now.

```
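
Both warnings are expected: the provider block sets both the `s3` and `s3api` endpoints (only one of them is needed), and we explicitly told the provider to skip requesting the account ID, so they are safe to ignore for this test. Also, as the note at the end of the output says, you can save the plan and apply exactly that plan; tflocal passes these arguments straight to terraform:

```bash
# Save the plan to a file, then apply exactly that plan (no interactive prompt)
$ tflocal plan -out=tfplan
$ tflocal apply tfplan
```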

Apply the changes and confirm with `yes` when prompted:
```bash
$ tflocal apply
```

output:
```bash
...
 Enter a value: yes

aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 0s [id=my-test-bucket]
╷
│ Warning: Invalid Attribute Combination
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on provider.tf line 1, in provider "aws":
│    1: provider "aws" {
│ 
│ Only one of the following attributes should be set: "endpoints[0].s3", "endpoints[0].s3api"
│ 
│ This will be an error in a future release.
╵
╷
│ Warning: AWS account ID not found for provider
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on provider.tf line 1, in provider "aws":
│    1: provider "aws" {
│ 
│ See https://registry.terraform.io/providers/hashicorp/aws/latest/docs#skip_requesting_account_id for implications.
╵

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

``` 
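
Terraform now has a local state file, and tflocal forwards the usual inspection subcommands, so you can list or show what was recorded:

```bash
# List the resources tracked in the local Terraform state
$ tflocal state list

# Show the attributes recorded for the bucket
$ tflocal state show aws_s3_bucket.bucket
```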

**Verify that the new bucket has been created**
```bash
$ awslocal s3 ls
```
output:
```bash
2022-02-11 11:39:31 my-test-bucket
```

**Upload a test file**
* Create a test file
```bash
$ touch file-test.txt
```
* Upload the test file
```bash
$ awslocal s3 cp file-test.txt s3://my-test-bucket/
```
output:
```bash
upload: ./file-test.txt to s3://my-test-bucket/file-test.txt
```

* Check the file in the bucket
```bash
$ awslocal s3 ls s3://my-test-bucket/
```
output:
```bash
2022-02-11 11:40:11           0 file-test.txt
```

## Tip
You can view the files in the bucket directly from your browser; open the following URL:
[http://localhost:4566/my-test-bucket/file-test.txt](http://localhost:4566/my-test-bucket/file-test.txt)
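
The same path-style URL works from the command line, and you can also generate a pre-signed URL just like against real S3 (host and port assume the default LocalStack setup):

```bash
# Fetch the object through LocalStack's path-style URL
$ curl http://localhost:4566/my-test-bucket/file-test.txt

# Or create a pre-signed URL for it
$ awslocal s3 presign s3://my-test-bucket/file-test.txt
```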

**Note:** Check the list of services supported by **LocalStack Community Edition** before planning your tests; for example, you can't create a Network Load Balancer 😔.
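
When you are done experimenting, you can tear everything down the same way it was created; tflocal forwards the destroy command to terraform:

```bash
# Empty the bucket first (it contains file-test.txt), then destroy the resources
$ awslocal s3 rm s3://my-test-bucket --recursive
$ tflocal destroy
```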
