Don't set AWS variables at all if not specified. DRY. Refactoring.

Troy Kinsella 2018-10-26 12:51:51 -07:00
parent 1fc48aadfa
commit 9e3af13449
13 changed files with 85 additions and 203 deletions

CONTRIBUTING.md Deleted file

@@ -1,34 +0,0 @@
## Welcome!

We're so glad you're thinking about contributing to an 18F open source project! If you're unsure about anything, just ask -- or submit the issue or pull request anyway. The worst that can happen is you'll be politely asked to change something. We love all friendly contributions.

We want to ensure a welcoming environment for all of our projects. Our staff follow the [18F Code of Conduct](https://github.com/18F/code-of-conduct/blob/master/code-of-conduct.md) and all contributors should do the same.

We encourage you to read this project's CONTRIBUTING policy (you are here), its [LICENSE](LICENSE.md), and its [README](README.md).

If you have any questions or want to read more, check out the [18F Open Source Policy GitHub repository](https://github.com/18f/open-source-policy), or just [shoot us an email](mailto:18f@gsa.gov).

## Development

Requires [Docker](https://www.docker.com/).

1. Run `cp config.example.json config.json`.
1. Modify `config.json`.
   * See [the instructions for getting your AWS credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html#cli-signup).
   * Exclude the `s3://` prefix/protocol for `bucket`.
1. Run `./test/out </full/path/to/dir>`.
1. Run `./test/in </full/path/to/dir>`.
1. Run `./test/check`.

Every tag and branch created on this repository is automatically built on [Docker Hub](https://hub.docker.com/r/18fgsa/s3-resource-simple/).

## Public domain

This project is in the public domain within the United States, and
copyright and related rights in the work worldwide are waived through
the [CC0 1.0 Universal public domain dedication](https://creativecommons.org/publicdomain/zero/1.0/).

All contributions to this project will be released under the CC0
dedication. By submitting a pull request, you are agreeing to comply
with this waiver of copyright interest.

README.md

@@ -8,24 +8,30 @@ Include the following in your Pipeline YAML file, replacing the values in the angle brackets (`<>`):
 ```yaml
 resource_types:
-- name: <resource type name>
+- name: s3-bucket
   type: docker-image
   source:
     repository: 18fgsa/s3-resource-simple

 resources:
-- name: <resource name>
-  type: <resource type name>
+- name: s3
+  type: s3-bucket
   source:
-    access_key_id: {{aws-access-key}}
-    secret_access_key: {{aws-secret-key}}
-    bucket: {{aws-bucket}}
-    path: [<optional>, use to sync to a specific path of the bucket instead of root of bucket]
-    options: [<optional, see note below>]
+    access_key_id: ((aws_access_key)) # Optional
+    secret_access_key: ((aws_secret_key)) # Optional
+    bucket: my-bucket
+    region: us-east-1 # Optional

 jobs:
-- name: <job name>
+- name: publish-files
   plan:
-  - <some Resource or Task that outputs files>
-  - put: <resource name>
+  - task: Generate some files
+    output_mapping:
+      files: s3-upload
+  - put: s3
+    params:
+      prefix: /my-s3-dir # Optional
+      source_dir: s3-upload
 ```
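A pipeline file assembled from this snippet can be sanity-checked locally with Concourse's `fly` CLI before it is set (the filename here is illustrative):

```sh
# Validates the pipeline YAML without contacting a Concourse target
fly validate-pipeline -c pipeline.yml
```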
 ## AWS Credentials
@@ -34,7 +40,10 @@ The `access_key_id` and `secret_access_key` are optional and if not provided the
 ## Options

-The `options` parameter is synonymous with the options that `aws cli` accepts for `sync`. Please see [S3 Sync Options](http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html#options) and pay special attention to the [Use of Exclude and Include Filters](http://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters).
+The `options` parameter is synonymous with the options that `aws cli` accepts for `sync`.
+Please see [S3 Sync Options](http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html#options)
+and pay special attention to the
+[Use of Exclude and Include Filters](http://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters).

 Given the following directory `test`:
@@ -47,10 +56,19 @@ test
 └── bad.sh
 ```
-we can upload _only_ the `results` subdirectory by using the following `options` in our task configuration:
+We can upload _only_ the `results` subdirectory by using the following `options` in our task configuration:
 ```yaml
 options:
-- "--exclude '*'",
-- "--include 'results/*'"
+- "--exclude '*'"
+- "--include 'results/*'"
 ```
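For context, with these `options` the resource's `out` script ends up invoking roughly the following AWS CLI command (a sketch; the bucket name and prefix are illustrative):

```sh
# Everything is excluded first, then results/* is re-included,
# so only the results subdirectory is uploaded.
aws s3 sync ./test "s3://my-bucket/my-s3-dir" --exclude '*' --include 'results/*'
```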
+### Region
+
+Interacting with some AWS regions (like London) requires AWS Signature Version 4.
+This option lets you explicitly specify the region where your bucket is located
+(when set, the AWS_DEFAULT_REGION environment variable is exported accordingly).
+
+```yaml
+region: eu-west-2
+```
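When a region is supplied, the resource scripts simply export it for the AWS CLI before syncing, roughly equivalent to:

```sh
export AWS_DEFAULT_REGION=eu-west-2  # picked up by every subsequent aws call
```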

assets/check

@@ -1,23 +1,14 @@
 #!/bin/sh
 # http://concourse.ci/implementing-resources.html#resource-check
 set -e

-# parse incoming config data
+source "$(dirname $0)/common.sh"

 payload=`cat`
-bucket=$(echo "$payload" | jq -r '.source.bucket')
-prefix="$(echo "$payload" | jq -r '.source.path // ""')"
+bucket=$(get_bucket)
+prefix="$(echo "$payload" | jq -r '.params.prefix // ""')"

-# export for `aws` cli
-AWS_ACCESS_KEY_ID=$(echo "$payload" | jq -r '.source.access_key_id')
-AWS_SECRET_ACCESS_KEY=$(echo "$payload" | jq -r '.source.secret_access_key')
-
-# Due to precedence rules, must be unset to support AWS IAM Roles.
-if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
-  export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
-  export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
-fi
+export_aws_vars

 # Consider the most recent LastModified timestamp as the most recent version.
 timestamps=$(aws s3api list-objects --bucket $bucket --prefix "$prefix" --query 'Contents[].{LastModified: LastModified}')
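For reference, that `list-objects` query returns a JSON array with one `LastModified` entry per object under the prefix; the values here are illustrative:

```sh
aws s3api list-objects --bucket my-bucket --prefix results/ \
  --query 'Contents[].{LastModified: LastModified}'
# => [ { "LastModified": "2018-10-26T19:51:51.000Z" },
#      { "LastModified": "2018-10-25T08:12:03.000Z" } ]
```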

assets/common.sh Normal file

@@ -0,0 +1,25 @@
get_bucket() {
  local bucket=$(echo "$payload" | jq -r '.source.bucket')
  test -z "$bucket" && { echo "Must supply source.bucket" >&2; exit 1; }
  echo $bucket
}

export_aws_vars() {
  local access_key_id=$(echo "$payload" | jq -r '.source.access_key_id // empty')
  local secret_access_key=$(echo "$payload" | jq -r '.source.secret_access_key // empty')
  local default_region=$(echo "$payload" | jq -r '.source.region // empty')

  # Due to precedence rules, these must stay unset to support AWS IAM roles,
  # so only export them when both credentials were supplied.
  if [ -n "$access_key_id" ] && [ -n "$secret_access_key" ]; then
    export AWS_ACCESS_KEY_ID=$access_key_id
    export AWS_SECRET_ACCESS_KEY=$secret_access_key
  fi

  # Only override the region when one was supplied in source.
  if [ -n "$default_region" ]; then
    export AWS_DEFAULT_REGION=$default_region
  fi
}

emit_version() {
  echo "{\"version\": {}}" >&3
}
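Each resource script consumes these helpers the same way; a minimal sketch of the calling pattern (mirroring `assets/check` above):

```sh
#!/bin/sh
set -e
source "$(dirname $0)/common.sh"  # load get_bucket, export_aws_vars, emit_version

payload=`cat`          # Concourse delivers the resource config on stdin
bucket=$(get_bucket)   # fails fast when source.bucket is missing
export_aws_vars        # exports AWS_* only when credentials were supplied
```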

assets/emit.sh Deleted file

@@ -1,8 +0,0 @@
#!/bin/sh
set -e
# give back a(n empty) version, so that the check passes when using `in`/`out`
echo "{
\"version\": {}
}"

assets/in

@@ -1,6 +1,5 @@
 #!/bin/sh
-# Resource Impl: http://concourse.ci/implementing-resources.html#in:-fetch-a-given-resource
 set -e

 exec 3>&1 # make stdout available as fd 3 for the result
@@ -12,26 +11,18 @@ if [ -z "$dest" ]; then
echo "usage: $0 <path/to/volume>"
exit 1
fi
#######################################
# parse incoming config data
source "$(dirname $0)/common.sh"
payload=`cat`
bucket=$(echo "$payload" | jq -r '.source.bucket')
path=$(echo "$payload" | jq -r '.source.path // ""')
options=$(echo "$payload" | jq -r '.source.options // [] | join(" ")')
bucket=$(get_bucket)
prefix=$(echo "$payload" | jq -r '.params.prefix // ""')
options=$(echo "$payload" | jq -r '.params.options // [] | join(" ")')
# export for `aws` cli
AWS_ACCESS_KEY_ID=$(echo "$payload" | jq -r '.source.access_key_id')
AWS_SECRET_ACCESS_KEY=$(echo "$payload" | jq -r '.source.secret_access_key')
# Due to precedence rules, must be unset to support AWS IAM Roles.
if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
fi
export_aws_vars
echo "Downloading from S3..."
eval aws s3 sync "s3://$bucket/$path" $dest $options
echo "...done."
eval aws s3 sync "s3://$bucket/$prefix" $dest $options
echo "Done."
source "$(dirname $0)/emit.sh" >&3
emit_version

assets/out

@@ -1,6 +1,5 @@
 #!/bin/sh
-# Resource Impl: http://concourse.ci/implementing-resources.html#out:-update-a-resource.
 set -e

 exec 3>&1 # make stdout available as fd 3 for the result
@@ -12,26 +11,21 @@ if [ -z "$source" ]; then
echo "usage: $0 </full/path/to/dir>"
exit 1
fi
#######################################
# parse incoming config data
source "$(dirname $0)/common.sh"
payload=`cat`
bucket=$(echo "$payload" | jq -r '.source.bucket')
path=$(echo "$payload" | jq -r '.source.path // ""')
options=$(echo "$payload" | jq -r '.source.options // [] | join(" ")')
bucket=$(get_bucket)
prefix=$(echo "$payload" | jq -r '.params.prefix // ""')
options=$(echo "$payload" | jq -r '.params.options // [] | join(" ")')
source_dir=$(echo "$payload" | jq -r '.params.source_dir // "."')
test -z "$source_dir" && { echo "Must supply params.source_dir" >&2; exit 1; }
# export for `aws` cli
AWS_ACCESS_KEY_ID=$(echo "$payload" | jq -r '.source.access_key_id')
AWS_SECRET_ACCESS_KEY=$(echo "$payload" | jq -r '.source.secret_access_key')
export_aws_vars
# Due to precedence rules, must be unset to support AWS IAM Roles.
if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
fi
cd $source/$source_dir
echo "Uploading to S3 from '$source_dir'..."
eval aws s3 sync $source/$source_dir "s3://$bucket/$prefix" $options
echo "Done."
echo "Uploading to S3..."
eval aws s3 sync $source "s3://$bucket/$path" $options
echo "...done."
source "$(dirname $0)/emit.sh" >&3
emit_version
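For reference, a payload that exercises all of the `jq` paths above would look like this (values illustrative, matching the README example):

```sh
# Concourse invokes the script as: /opt/resource/out <build dir>
echo '{
  "source": { "bucket": "my-bucket", "region": "us-east-1" },
  "params": { "prefix": "my-s3-dir", "source_dir": "s3-upload" }
}' | /opt/resource/out /tmp/build
```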

config.example.json Deleted file

@@ -1,8 +0,0 @@
{
  "source": {
    "access_key_id": "",
    "secret_access_key": "",
    "bucket": "",
    "options": []
  }
}

test/build.sh Deleted file

@@ -1,5 +0,0 @@
#!/bin/sh
set -e
docker build -t 18fgsa/s3-resource-simple .

test/check Deleted file

@@ -1,13 +0,0 @@
#!/bin/sh
set -e

json=$(cat config.json)

source "$(dirname $0)/build.sh"

echo $json | docker run \
  -i \
  --rm \
  18fgsa/s3-resource-simple \
  /opt/resource/check

test/in Deleted file

@@ -1,21 +0,0 @@
#!/bin/sh
set -e

dest=$1

if [ -z "$dest" ]; then
  echo "usage: $0 </full/path/to/dest>"
  exit 1
fi

json=$(cat config.json)

source "$(dirname $0)/build.sh"

echo $json | docker run \
  -i \
  --rm \
  -v $dest:/tmp/output \
  18fgsa/s3-resource-simple \
  /opt/resource/in /tmp/output

test/out Deleted file

@@ -1,21 +0,0 @@
#!/bin/sh
set -e

source=$1

if [ -z "$source" ]; then
  echo "usage: $0 </full/path/to/dir>"
  exit 1
fi

json=$(cat config.json)

source "$(dirname $0)/build.sh"

echo $json | docker run \
  -i \
  --rm \
  -v $source:/tmp/input \
  18fgsa/s3-resource-simple \
  /opt/resource/out /tmp/input

test/pipeline.yml Deleted file

@@ -1,27 +0,0 @@
# Pipeline that clones this repository, then uploads it to S3. Local usage:
#
#   fly set-pipeline -t lite -n -c test/pipeline.yml -p s3-resource-simple-test -v aws-access-key=<key> -v aws-secret-key=<secret> -v aws-bucket=<bucket>
#
resource_types:
- name: s3-upload
  type: docker-image
  source:
    repository: 18fgsa/s3-resource-simple

resources:
- name: scripts
  type: git
  source:
    uri: https://github.com/18F/s3-resource-simple
    branch: master
- name: s3-bucket
  type: s3-upload
  source:
    access_key_id: {{aws-access-key}}
    secret_access_key: {{aws-secret-key}}
    bucket: {{aws-bucket}}

jobs:
- name: custom-resource-example
  plan:
  - get: scripts
  - put: s3-bucket