

terraform-aws-arc-s3


Overview

SourceFuse AWS Reference Architecture (ARC) Terraform module for managing Amazon S3 buckets and their configuration.

Features

  • Manages S3 Buckets: Handles the creation, deletion, and maintenance of Amazon S3 (Simple Storage Service) buckets, which are containers for storing data in the cloud.
  • Supports Lifecycle Rules: Enables the setup and management of lifecycle rules that automate the transition of data between different storage classes and the deletion of objects after a specified period.
  • Configurable Bucket Policies and Access Controls: Allows for the configuration of bucket policies and access control lists (ACLs) to define permissions and manage access to the data stored in S3 buckets, ensuring data security and compliance.
  • Supports CORS and Website Configurations: Provides support for Cross-Origin Resource Sharing (CORS) configurations to manage cross-origin requests to the bucket's resources, and allows for configuring the bucket to host static websites, including setting index and error documents.
  • Cross-Region Replication: Facilitates the automatic, asynchronous copying of objects across different AWS regions to enhance data availability, disaster recovery, and data compliance requirements.
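
As an illustration of the last point, cross-region replication is driven entirely by the replication_config input (its full schema appears under Inputs below). The following is a minimal sketch, assuming a pre-existing, versioned destination bucket; the bucket names, rule id, and destination ARN are placeholders, not values from this module:

module "s3_replicated" {
  source  = "sourcefuse/arc-s3/aws"
  version = "0.0.1"

  name              = "example-source-bucket" # placeholder bucket name
  enable_versioning = true                    # replication requires versioning on the source bucket

  replication_config = {
    enable = true
    rules = [
      {
        id = "replicate-all" # placeholder rule id
        destinations = [
          {
            bucket        = "arn:aws:s3:::example-destination-bucket" # placeholder destination ARN
            storage_class = "STANDARD"
          }
        ]
      }
    ]
  }
}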

Introduction

SourceFuse's AWS Reference Architecture (ARC) Terraform module simplifies the creation, configuration, and management of Amazon S3 buckets by providing a set of predefined settings and options. The module supports advanced features such as bucket policies, access control lists (ACLs), lifecycle rules, and versioning. It also includes support for configuring Cross-Origin Resource Sharing (CORS) and cross-region replication for enhanced data availability and resilience. By leveraging this module, users can ensure consistent, secure, and efficient management of their S3 resources within an infrastructure-as-code (IaC) framework.

Usage

To see a full example, check out the main.tf file in the example folder.

module "s3" {
  source      = "sourcefuse/arc-s3/aws"
  version     = "0.0.1"

  name             = var.name
  acl              = var.acl
  lifecycle_config = local.lifecycle_config
  tags             = module.tags.tags
}
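
The example above references local.lifecycle_config. What that local contains is up to you; the following is a hedged sketch that matches the lifecycle_config input schema documented under Inputs below, where the rule id, day counts, and storage class are illustrative only:

locals {
  lifecycle_config = {
    enabled = true
    rules = [
      {
        id = "archive-then-expire" # illustrative rule id
        transition = {
          date          = null
          days          = 30 # move objects to STANDARD_IA after 30 days
          storage_class = "STANDARD_IA"
        }
        expiration = {
          days = 365 # expire objects after one year
        }
      }
    ]
  }
}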

Requirements

Name Version
terraform ~> 1.3, < 2.0.0
aws ~> 5.0
random ~> 3.0
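
In the configuration that calls this module, these constraints can be pinned in the usual way; a minimal sketch:

terraform {
  required_version = "~> 1.3, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}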

Providers

No providers.

Modules

Name Source Version
bucket ./modules/bucket n/a
replication ./modules/replication n/a

Resources

No resources.

Inputs

Name Description Type Default Required
acl Please note ACL is deprecated by AWS in favor of bucket policies.
Defaults to "private" for backwards compatibility; it is recommended to set object_ownership to "BucketOwnerEnforced" instead.
string "private" no
availability_zone_id The ID of the availability zone. string "" no
bucket_logging_data (optional) Bucket logging data
object({
enable = optional(bool, false)
target_bucket = optional(string, null)
target_prefix = optional(string, null)
})
{
"enable": false,
"target_bucket": null,
"target_prefix": null
}
no
bucket_policy_doc (optional) S3 bucket Policy doc string null no
cors_configuration List of S3 bucket CORS configurations
list(object({
id = optional(string)
allowed_headers = optional(list(string))
allowed_methods = optional(list(string))
allowed_origins = optional(list(string))
expose_headers = optional(list(string))
max_age_seconds = optional(number)
}))
[] no
create_bucket (optional) Whether to create bucket bool true no
create_s3_directory_bucket Control the creation of the S3 directory bucket. Set to true to create the bucket, false to skip. bool false no
enable_versioning Whether to enable versioning for the bucket bool true no
event_notification_details (optional) S3 event notification details
object({
enabled = bool
lambda_list = optional(list(object({
lambda_function_arn = string
events = optional(list(string), ["s3:ObjectCreated:*"])
filter_prefix = string
filter_suffix = string
})), [])

queue_list = optional(list(object({
queue_arn = string
events = optional(list(string), ["s3:ObjectCreated:*"])
})), [])

topic_list = optional(list(object({
topic_arn = string
events = optional(list(string), ["s3:ObjectCreated:*"])
})), [])

})
{
"enabled": false
}
no
force_destroy (Optional, Default:false) Boolean that indicates all objects (including any locked objects) should be deleted from the bucket when the bucket is destroyed so that the bucket can be destroyed without error. These objects are not recoverable. This only deletes objects when the bucket is destroyed, not when setting this parameter to true. Once this parameter is set to true, there must be a successful terraform apply run before a destroy is required to update this value in the resource state. Without a successful terraform apply after this parameter is set, this flag will have no effect. If setting this field in the same operation that would require replacing the bucket or destroying the bucket, this flag will not work. Additionally when importing a bucket, a successful terraform apply is required to set this value in state before it will take effect on a destroy operation. bool false no
lifecycle_config (optional) S3 Lifecycle configuration
object({
enabled = bool

expected_bucket_owner = optional(string, null)

rules = list(object({
id = string

expiration = optional(object({
date = optional(string, null)
days = optional(string, null)
expired_object_delete_marker = optional(bool, false)
}), null)
transition = optional(object({
date = string
days = number
storage_class = string
}), null)
noncurrent_version_expiration = optional(object({
newer_noncurrent_versions = number
noncurrent_days = number
}), null)
noncurrent_version_transition = optional(object({
newer_noncurrent_versions = number
noncurrent_days = number
storage_class = string
}), null)

filter = optional(object({
object_size_greater_than = string
object_size_less_than = string
prefix = string
tags = map(string)
}), null)


}))

})
{
"enabled": false,
"rules": []
}
no
name Bucket name. If provided, the bucket will be created with this name instead of generating the name from the context string n/a yes
object_lock_config (optional) Object Lock configuration
object({
mode = optional(string, "COMPLIANCE")
days = optional(number, 30)
})
{
"days": 30,
"mode": "COMPLIANCE"
}
no
object_lock_enabled (Optional, Forces new resource) Indicates whether this bucket has an Object Lock configuration enabled. Valid values are true or false. This argument is not supported in all regions or partitions. string false no
object_ownership (Optional) Object ownership. Valid values: BucketOwnerPreferred, ObjectWriter or BucketOwnerEnforced
BucketOwnerPreferred - Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL.
ObjectWriter - Uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL.
BucketOwnerEnforced - Bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket.
string "BucketOwnerPreferred" no
public_access_config (Optional)
block_public_acls - Whether Amazon S3 should block public ACLs for this bucket. Defaults to false. Enabling this setting does not affect existing policies or ACLs. When set to true causes the following behavior:
PUT Bucket acl and PUT Object acl calls will fail if the specified ACL allows public access.
PUT Object calls will fail if the request includes an object ACL.
block_public_policy - Whether Amazon S3 should block public bucket policies for this bucket. Defaults to false. Enabling this setting does not affect the existing bucket policy.
When set to true causes Amazon S3 to:
Reject calls to PUT Bucket policy if the specified bucket policy allows public access.
ignore_public_acls - Whether Amazon S3 should ignore public ACLs for this bucket. Defaults to false. Enabling this setting does not affect the persistence of any existing ACLs and doesn't prevent new public ACLs from being set.
When set to true causes Amazon S3 to:
Ignore all public ACLs on this bucket and any objects that it contains.
restrict_public_buckets - Whether Amazon S3 should restrict public bucket policies for this bucket. Defaults to false. Enabling this setting does not affect previously stored bucket policies, except that public and cross-account access within any public bucket policy, including non-public delegation to specific accounts, is blocked.
When set to true:
Only the bucket owner and AWS Services can access this bucket if it has a public policy.
object({
block_public_acls = optional(bool, true)
block_public_policy = optional(bool, true)
ignore_public_acls = optional(bool, true)
restrict_public_buckets = optional(bool, true)
})
{
"block_public_acls": true,
"block_public_policy": true,
"ignore_public_acls": true,
"restrict_public_buckets": true
}
no
replication_config Replication configuration for S3 bucket
object({
enable = bool
role_name = optional(string, null) // if null , it will create new role

rules = list(object({
id = optional(string, null) // if null "${var.source_bucket_name}-rule-index"
filter = optional(list(object({
prefix = optional(string, null)
tags = optional(map(string), {})
})), [])

delete_marker_replication = optional(string, "Enabled")

source_selection_criteria = optional(object({
replica_modifications = optional(object({
status = optional(string, "Enabled")
}))
kms_key_id = optional(string, null)
sse_kms_encrypted_objects = optional(object({
status = optional(string, "Enabled")
}))
}))


destinations = list(object({
bucket = string
storage_class = optional(string, "STANDARD")
encryption_configuration = optional(object({
replica_kms_key_id = optional(string, null)
}))
}))
}))

})
{
"enable": false,
"role_name": null,
"rules": []
}
no
server_side_encryption_config_data (optional) S3 encryption details
object({
bucket_key_enabled = optional(bool, true)
sse_algorithm = optional(string, "AES256")
kms_master_key_id = optional(string, null)
})
{
"bucket_key_enabled": true,
"kms_master_key_id": null,
"sse_algorithm": "AES256"
}
no
tags Tags to assign to the resources. map(string) {} no
transfer_acceleration_enabled (optional) Whether to enable Transfer Acceleration bool false no
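
To make a couple of the more involved inputs concrete, the sketch below combines cors_configuration with KMS-based server_side_encryption_config_data; the bucket name, allowed origin, and KMS key ARN are placeholders, not values from this module:

module "s3_assets" {
  source  = "sourcefuse/arc-s3/aws"
  version = "0.0.1"

  name = "example-app-assets" # placeholder bucket name

  cors_configuration = [
    {
      allowed_headers = ["*"]
      allowed_methods = ["GET", "HEAD"]
      allowed_origins = ["https://app.example.com"] # placeholder origin
      expose_headers  = ["ETag"]
      max_age_seconds = 3600
    }
  ]

  server_side_encryption_config_data = {
    sse_algorithm     = "aws:kms"
    kms_master_key_id = "arn:aws:kms:us-east-1:111111111111:key/example-key-id" # placeholder key ARN
  }
}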

Outputs

Name Description
bucket_arn Bucket ARN
bucket_id Bucket ID or Name
destination_buckets n/a
role_arn IAM role used for S3 replication
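
These outputs can be consumed by the calling configuration in the usual way; for example, assuming the module block is named "s3" as in the Usage section:

output "app_bucket_arn" {
  description = "ARN of the bucket created by the arc-s3 module"
  value       = module.s3.bucket_arn
}

output "app_bucket_name" {
  description = "Name (ID) of the bucket created by the arc-s3 module"
  value       = module.s3.bucket_id
}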

Development

Prerequisites

Configurations

  • Configure pre-commit hooks
    pre-commit install
    

Versioning

While contributing or committing, specify the type of change in your commit message: major, minor, or patch.

For Example

git commit -m "your commit message #major"
Including #major, #minor, or #patch in the commit message bumps the corresponding part of the version; if none is specified, a patch bump is applied by default.

Tests

  • Tests are available in the test directory
  • Configure the dependencies
    cd test/
    go mod init github.com/sourcefuse/terraform-aws-refarch-<module_name>
    go get github.com/gruntwork-io/terratest/modules/terraform
    
  • Now execute the test
    go test -timeout 30m
    

Authors

This project is authored by:

  • SourceFuse ARC Team