
Deploying Microservices on AWS Cloud

This repo contains a simple application that consists of three microservices:

  1. “webapp”: Web application microservice that uses the “greeting” and “name” microservices to generate a greeting for a person.

  2. “greeting”: A microservice that returns a greeting.

  3. “name”: A microservice that returns a person’s name based upon the {id} in the URL.
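The flow between the three services can be sketched with plain shell functions. This is an illustrative mock, not the actual WildFly Swarm services; the id-to-name mapping is assumed from the sample output shown later in this document:

```shell
# Illustrative mock of the three-service flow (not the real services).
greeting() { echo "Hello"; }            # greeting service: returns a greeting

name() {                                # name service: returns a name for {id}
  case "$1" in
    1) echo "Sheldon" ;;                # assumed mapping, for illustration
    *) echo "unknown" ;;
  esac
}

webapp() {                              # webapp: combines greeting and name
  echo "$(greeting) $(name "$1")"
}

webapp 1   # prints: Hello Sheldon
```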

The application is deployed using different AWS compute options. Each deployment model will highlight:

  1. Local dev, test and debug

  2. Deployment options

  3. Deployment pipeline

  4. Logging

  5. Monitoring

Build and Test Services using Maven

Change to the “services” directory: cd services

  1. Greeting service in Terminal 1: mvn -pl greeting wildfly-swarm:run

  2. Name service in Terminal 2: mvn -pl name wildfly-swarm:run

  3. Webapp service in Terminal 3: mvn -pl webapp wildfly-swarm:run

  4. Invoke the application in Terminal 4: curl http://localhost:8080/

Docker

Create Docker Images

mvn -pl <service> package -Pdocker, where <service> is greeting, name, or webapp

Docker images for all the services can be created from the services directory:

mvn package -Pdocker

By default, the Docker image name is arungupta/<service>. The image can be created in your own repo:

mvn package -Pdocker -Ddocker.repo=<repo>

By default, the latest tag is used for the image. A different tag may be specified as:

mvn package -Pdocker -Ddocker.tag=<tag>
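A quick sketch of the image names these defaults produce. The repo and tag values below are the documented defaults; the loop itself is purely illustrative:

```shell
# Illustrative only: compute the image names the -Pdocker profile produces.
repo="arungupta"   # default; override with -Ddocker.repo=<repo>
tag="latest"       # default; override with -Ddocker.tag=<tag>
for service in greeting name webapp; do
  echo "${repo}/${service}:${tag}"
done
```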

Push Docker Images to Registry

docker push <repo>/greeting:<tag>
docker push <repo>/name:<tag>
docker push <repo>/webapp:<tag>

This is a workaround until aws-samples#4 is fixed.

Deployment to Docker Swarm

  1. docker swarm init

  2. cd apps/docker

  3. docker stack deploy --compose-file docker-compose.yaml myapp

  4. Access the application: curl http://localhost:80

    1. Optionally, test the individual service endpoints.

  5. Remove the stack: docker stack rm myapp

Debug

  1. List stack:

    docker stack ls
  2. List services in the stack:

    docker stack services myapp
  3. List containers:

    docker container ls -f "name=myapp"
  4. Get logs for all the containers in the webapp service:

    docker service logs myapp_webapp-service

Amazon ECS and AWS Fargate

This section will explain how to create an Amazon ECS cluster. The three microservices will be deployed on this cluster using both the Fargate and EC2 launch types.

Note
AWS Fargate is only supported in us-east-1 region at this time. The instructions will only work in that region.

Deployment: Create Cluster using AWS Console

This section will explain how to create an ECS cluster using AWS Console.

Deployment: Create Cluster using AWS CloudFormation

This section will explain how to create an ECS cluster using CloudFormation.

The following resources are needed in order to deploy the sample application:

  • Private Application Load Balancer for greeting and name and a public ALB for webapp

  • Security Group that allows the services to talk to each other

    1. Create an ECS cluster with these resources:

      cd apps/ecs/fargate/templates
      aws cloudformation deploy \
        --stack-name MyECSCluster \
        --template-file infrastructure.yaml \
        --region us-east-1 \
        --capabilities CAPABILITY_IAM
    2. View the output from the cluster:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[]' \
        --output text
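The describe-stacks/--query pattern above recurs throughout this document. A small hypothetical helper (not part of the repo) that builds the JMESPath query string for a given OutputKey; note that JMESPath accepts raw string literals in single quotes, equivalent to the backtick literals used elsewhere in this document:

```shell
# Hypothetical helper: build the JMESPath query used repeatedly in this
# document to pull a single output value out of a CloudFormation stack.
stack_output_query() {
  printf "Stacks[].Outputs[?OutputKey=='%s'].[OutputValue]" "$1"
}

# Usage against a deployed stack (requires the AWS CLI):
#   aws cloudformation describe-stacks \
#     --region us-east-1 \
#     --stack-name MyECSCluster \
#     --query "$(stack_output_query VPC)" \
#     --output text
stack_output_query VPC   # prints: Stacks[].Outputs[?OutputKey=='VPC'].[OutputValue]
```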

Deployment: Simple ECS Cluster

This section explains how to create an ECS cluster with no additional resources. The cluster can be created with a private or a public VPC. The CloudFormation templates for the different types are available at https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/ECS/EC2LaunchType/clusters.

This section will create a 3-instance cluster using a public VPC:

curl -O https://raw.githubusercontent.com/awslabs/aws-cloudformation-templates/master/aws/services/ECS/EC2LaunchType/clusters/public-vpc.yml
aws cloudformation deploy \
  --stack-name MyECSCluster \
  --template-file public-vpc.yml \
  --region us-east-1 \
  --capabilities CAPABILITY_IAM

List the cluster using the aws ecs list-clusters command:

{
    "clusterArns": [
        "arn:aws:ecs:us-east-1:091144949931:cluster/MyECSCluster-ECSCluster-197YNE1ZHPSOP"
    ]
}
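The cluster name used by later commands is the last path segment of the ARN. A pure-shell sketch of extracting it, using the sample ARN above:

```shell
# Extract the cluster name from an ECS cluster ARN using parameter expansion.
arn="arn:aws:ecs:us-east-1:091144949931:cluster/MyECSCluster-ECSCluster-197YNE1ZHPSOP"
cluster_name="${arn##*/}"   # strip everything up to and including the last '/'
echo "$cluster_name"        # prints: MyECSCluster-ECSCluster-197YNE1ZHPSOP
```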

Deployment: Deploy Fargate Service using AWS CLI

This section will explain how to deploy Fargate services using AWS CLI.

This section will use a cluster previously created using CloudFormation.

  1. Create target groups

    1. Get VPC id from the CloudFormation stack:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`VPC`].[OutputValue]' \
        --output text
    2. Create target group for greeting, name and webapp:

      aws elbv2 create-target-group \
        --name greeting-target-group \
        --protocol HTTP \
        --port 80 \
        --vpc-id <vpc> \
        --health-check-protocol HTTP \
        --health-check-path /
      aws elbv2 create-target-group \
        --name name-target-group \
        --protocol HTTP \
        --port 80 \
        --vpc-id <vpc> \
        --health-check-protocol HTTP \
        --health-check-path /
      aws elbv2 create-target-group \
        --name webapp-target-group \
        --protocol HTTP \
        --port 80 \
        --vpc-id <vpc> \
        --health-check-protocol HTTP \
        --health-check-path /

      Note down the ARNs for each target group.

  2. Configure target groups and rules for private ALB

    1. Get private ALB:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`ALBPrivate`].[OutputValue]' \
        --output text
    2. Create listener:

      aws elbv2 create-listener \
        --load-balancer-arn <private-alb-arn> \
        --protocol HTTP \
        --port 80 \
        --default-actions Type=forward,TargetGroupArn=<name-target-group-arn>
    3. Note down the listener ARN from previous step. Add rules:

      aws elbv2 create-rule \
        --listener-arn <listener-arn> \
        --conditions Field=path-pattern,Values='/greeting-service' \
        --priority 10 \
        --actions Type=forward,TargetGroupArn=<greeting-target-group-arn>
      aws elbv2 create-rule \
        --listener-arn <listener-arn> \
        --conditions Field=path-pattern,Values='/name-service' \
        --priority 20 \
        --actions Type=forward,TargetGroupArn=<name-target-group-arn>
  3. Configure target groups and rules for public ALB

    1. Get the public ALB:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`ALBPublic`].[OutputValue]' \
        --output text
    2. Create listener:

      aws elbv2 create-listener \
        --load-balancer-arn <public-alb-arn> \
        --protocol HTTP \
        --port 80 \
        --default-actions Type=forward,TargetGroupArn=<webapp-target-group-arn>
  4. Register task definitions

    1. Get the private ALB CNAME:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`ALBPrivateCNAME`].[OutputValue]' \
        --output text
    2. Update this value in webapp.json.

    3. Register task definitions:

      aws ecs register-task-definition --cli-input-json file://greeting.json
      aws ecs register-task-definition --cli-input-json file://name.json
      aws ecs register-task-definition --cli-input-json file://webapp.json
    4. List task definitions:

      aws ecs list-task-definitions

      Note the task name and the version number from each ARN. These will be used for creating the services later.

  5. Create services

    1. Get the cluster name:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`ECSCluster`].[OutputValue]' \
        --output text
    2. Get private subnet:

      aws cloudformation \
        describe-stacks \
        --region us-east-1 \
        --stack-name MyECSCluster \
        --query 'Stacks[].Outputs[?OutputKey==`PrivateSubnet1`].[OutputValue]' \
        --output text
    3. Replace the <cluster-name>, <greeting-task-name-and-version> and <subnet> in the following command and create the greeting service:

      aws ecs create-service \
        --cluster <cluster-name> \
        --service-name greeting-service \
        --task-definition <greeting-task-name-and-version> \
        --desired-count 1 \
        --launch-type "FARGATE" \
        --network-configuration "awsvpcConfiguration={subnets=[<subnet>]}"
    4. Replace the <cluster-name>, <name-task-name-and-version> and <subnet> in the following command and create the name service:

      aws ecs create-service \
        --cluster <cluster-name> \
        --service-name name-service \
        --task-definition <name-task-name-and-version> \
        --desired-count 1 \
        --launch-type "FARGATE" \
        --network-configuration "awsvpcConfiguration={subnets=[<subnet>]}"
    5. Replace the <cluster-name>, <webapp-task-name-and-version> and <subnet> in the following command and create the webapp service:

      aws ecs create-service \
        --cluster <cluster-name> \
        --service-name webapp-service \
        --task-definition <webapp-task-name-and-version> \
        --desired-count 1 \
        --launch-type "FARGATE" \
        --network-configuration "awsvpcConfiguration={subnets=[<subnet>]}"
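The listener rules configured above implement path-based routing on the private ALB: a matching path-pattern rule forwards to that service's target group, and the listener's default action forwards to the name target group. A minimal shell sketch of that routing decision, using the target-group names created in step 1:

```shell
# Sketch of the private ALB's routing logic: path-pattern rules first,
# the listener's default action (name target group) as the fallback.
route() {
  case "$1" in
    /greeting-service*) echo "greeting-target-group" ;;  # listener rule
    /name-service*)     echo "name-target-group" ;;      # listener rule
    *)                  echo "name-target-group" ;;      # default action
  esac
}

route /greeting-service   # prints: greeting-target-group
```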

Deployment: Deploy Tasks and Service using ECS CLI

This section will explain how to create an ECS cluster using a CloudFormation template. The tasks are then deployed using ECS CLI and Docker Compose definitions.

Pre-requisites

  1. Install ECS CLI.

  2. Install Perl.

Deploy the application

  1. Run the CloudFormation template to create the AWS resources:

    Launch the template in the N. Virginia (us-east-1) region using the Deploy to AWS button.
  2. Run the following command to capture the output from the CloudFormation template as key/value pairs in the file ecs-cluster.props. These will be used to set up environment variables that are used subsequently.

    aws cloudformation describe-stacks \
      --stack-name aws-microservices-deploy-options-ecscli \
      --query 'Stacks[0].Outputs' \
      --output=text | \
      perl -lpe 's/\s+/=/g' | \
      tee ecs-cluster.props
  3. Set up the environment variables using this file:

    set -o allexport
    source ecs-cluster.props
    set +o allexport
  4. Configure ECS CLI:

    ecs-cli configure --cluster $ECSCluster --region us-east-1 --default-launch-type FARGATE
  5. Create the task definition parameters for each of the services:

    ecs-params-create.sh greeting
    ecs-params-create.sh name
    ecs-params-create.sh webapp
  6. Bring the greeting service up:

    ecs-cli compose --verbose \
      --file greeting-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_greeting.yaml \
      --project-name greeting \
      service up \
      --target-group-arn $GreetingTargetGroupArn \
      --container-name greeting-service \
      --container-port 8081
  7. Bring the name service up:

    ecs-cli compose --verbose \
      --file name-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_name.yaml  \
      --project-name name \
      service up \
      --target-group-arn $NameTargetGroupArn \
      --container-name name-service \
      --container-port 8082
  8. Bring the webapp service up:

    ecs-cli compose --verbose \
      --file webapp-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_webapp.yaml \
      --project-name webapp \
      service up \
      --target-group-arn $WebappTargetGroupArn \
      --container-name webapp-service \
      --container-port 8080

    Docker Compose supports environment variable substitution. The webapp-docker-compose.yaml uses $PrivateALBCName to refer to the private Application Load Balancer for greeting and name service.

  9. Check the healthy status of different services:

    aws elbv2 describe-target-health \
      --target-group-arn $GreetingTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
    aws elbv2 describe-target-health \
      --target-group-arn $NameTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
    aws elbv2 describe-target-health \
      --target-group-arn $WebappTargetGroupArn \
      --query 'TargetHealthDescriptions[0].TargetHealth.State' \
      --output text
  10. Once all the services are in a healthy state, get a response from the webapp service:

    curl http://"$ALBPublicCNAME"
    Hello Sheldon

Tear down the resources

ecs-cli compose --verbose \
      --file greeting-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_greeting.yaml \
      --project-name greeting \
      service down
ecs-cli compose --verbose \
      --file name-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_name.yaml \
      --project-name name \
      service down
ecs-cli compose --verbose \
      --file webapp-docker-compose.yaml \
      --task-role-arn $ECSRole \
      --ecs-params ecs-params_webapp.yaml \
      --project-name webapp \
      service down
aws cloudformation delete-stack --region us-east-1 --stack-name aws-microservices-deploy-options-ecscli

Deployment: Create Cluster and Deploy Fargate Tasks using CloudFormation

This section creates an ECS cluster and deploys Fargate tasks to the cluster:

Launch the template in the N. Virginia (us-east-1) region using the Deploy to AWS button.

Retrieve the public endpoint to test your application deployment:

aws cloudformation \
    describe-stacks \
    --region us-east-1 \
    --stack-name aws-compute-options-fargate \
    --query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
    --output text

Use this command to test:

curl http://<public_endpoint>

Deployment: Create Cluster and Deploy EC2 Tasks using CloudFormation

This section creates an ECS cluster and deploys EC2 tasks to the cluster:

Launch the template in the N. Virginia (us-east-1) region using the Deploy to AWS button.

Retrieve the public endpoint to test your application deployment:

aws cloudformation \
    describe-stacks \
    --region us-east-1 \
    --stack-name aws-compute-options-ecs \
    --query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
    --output text

Use this command to test:

curl http://<public_endpoint>

Deployment Pipeline: EC2-mode with AWS CodePipeline

Deployment Pipeline: Fargate with AWS CodePipeline

This section will explain how to deploy a Fargate task via CodePipeline.

  1. Create a fork of the GitHub repository that contains the Amazon ECS Sample App.

  2. Clone the forked repository to your local machine:

    git clone https://github.com/<your_github_username>/ecs-demo-php-simple-app
  3. Create the CloudFormation stack:

Launch the template in the N. Virginia (us-east-1) region using the Deploy to AWS button.

The CloudFormation template requires the following input parameters:

  1. Cluster Configuration

    1. Launch Type: Select Fargate.

  2. GitHub Configuration

    1. Repo: The repository name for the sample service. This has to be the forked repo.

    2. Branch: The branch of the repository to deploy continuously, e.g. master.

    3. User: Your GitHub username.

    4. Personal Access Token: A token for the user specified above. Use https://github.com/settings/tokens to create a new token. See Creating a personal access token for the command line for more details.

The CloudFormation stack has the following outputs:

  • ServiceUrl: The URL of the sample service that is being deployed.

  • PipelineUrl: A deep link for the pipeline in the AWS Management Console.

Once the stack has been provisioned, click the PipelineUrl link. This will open the CodePipeline console. Clicking on the pipeline will display a diagram that looks like this:

Fargate Pipeline

Now that a deployment pipeline has been established, you can modify files in the repository you cloned earlier and push your changes to GitHub, which will cause the following actions to occur:

  • The latest changes will be pulled from GitHub.

  • A new Docker image will be created and pushed to ECR.

  • A new revision of the task definition will be created using the latest version of the Docker image.

  • The service definition will be updated with the latest version of the task definition.

  • ECS will deploy a new version of the Fargate task.

Cleaning up the example resources

To remove all the resources created by the example, do the following:

  1. Delete the main CloudFormation stack, which deletes the sub-stacks and resources.

  2. Manually delete the resources which may contain content:

    1. S3 Bucket: ArtifactBucket

    2. ECR Repository: Repository

Monitoring: AWS X-Ray

Monitoring: Prometheus and Grafana

Kubernetes

Deployment: Create Cluster using kops

  1. Install kops

    brew update && brew install kops
  2. Create an S3 bucket and setup KOPS_STATE_STORE:

    aws s3 mb s3://kubernetes-aws-io
    export KOPS_STATE_STORE=s3://kubernetes-aws-io
  3. Define an environment variable for the Availability Zones for the cluster:

    export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"
  4. Create cluster:

    kops create cluster \
      --name=cluster.k8s.local \
      --zones=$AWS_AVAILABILITY_ZONES \
      --yes

By default, this creates a cluster with a single master and two workers spread across the AZs.
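The awk invocation in step 3 relies on a classic idiom: assigning $1=$1 forces awk to rebuild the record using the output field separator, turning the whitespace-separated zone list into the comma-separated form kops expects. A standalone demo with illustrative zone names:

```shell
# Demo of the awk trick from step 3: rebuild the record with OFS=","
# so "a b c" becomes "a,b,c" (zone names below are illustrative).
zones="us-east-1a us-east-1b us-east-1c"
AWS_AVAILABILITY_ZONES="$(echo "$zones" | awk -v OFS="," '$1=$1')"
echo "$AWS_AVAILABILITY_ZONES"   # prints: us-east-1a,us-east-1b,us-east-1c
```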

Deployment: Standalone Manifests

Make sure kubectl CLI is installed and configured for the Kubernetes cluster.

  1. Apply the manifests: kubectl apply -f apps/k8s/standalone/manifest.yml

  2. Access the application: curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  3. Delete the application: kubectl delete -f apps/k8s/standalone/manifest.yml

Deployment: Helm

Make sure kubectl CLI is installed and configured for the Kubernetes cluster. Also, make sure Helm is installed on that Kubernetes cluster.

  1. Install the Helm CLI: brew install kubernetes-helm

  2. Install Helm in the Kubernetes cluster: helm init

  3. Install the Helm chart: helm install --name myapp apps/k8s/helm/myapp

    1. By default, the latest tag for an image is used. Alternatively, a different tag for the image can be used:

      helm install --name myapp apps/k8s/helm/myapp --set "docker.tag=<tag>"
  4. Access the application:

    curl http://$(kubectl get svc/myapp-webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  5. Delete the Helm chart: helm delete --purge myapp

Deployment: Ksonnet

Make sure kubectl CLI is installed and configured for the Kubernetes cluster.

  1. Install ksonnet from homebrew tap: brew install ksonnet/tap/ks

  2. Change into the ksonnet subdirectory: cd apps/k8s/ksonnet/myapp

  3. Add the environment: ks env add default

  4. Deploy the manifests: ks apply default

  5. Access the application: curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  6. Delete the application: ks delete default

Deployment: Kubepack

Make sure the kubectl and kubepack CLIs are installed and configured for the Kubernetes cluster. To install the kubepack CLI, run:

wget -O pack https://github.com/kubepack/pack/releases/download/0.1.0/pack-darwin-amd64 \
  && chmod +x pack \
  && sudo mv pack /usr/local/bin/
  1. Move to package root directory: cd apps/k8s/kubepack

  2. Pull dependent packages: pack dep -f .

  3. Combine the manifests for this package, its dependencies, and any potential patches into the final manifests: run pack up -f . (this creates a manifests/output folder with an installer script and the final manifests).

  4. Install package: ./manifests/output/install.sh

  5. Access the application: curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

  6. Delete the application: kubectl delete -R -f manifests/output/github.com/aws-samples/aws-microservices-deploy-options/apps/k8s/kubepack

Deployment Pipeline: AWS CodePipeline

This section explains how to setup a deployment pipeline using AWS CodePipeline.

CloudFormation templates for different regions are listed at https://github.com/aws-samples/aws-kube-codesuite. us-west-2 is listed below.

Launch the template in the Oregon (us-west-2) region using the Deploy to AWS button.

Deployment Pipeline: Jenkins

Create a deployment pipeline using Jenkins X.

  1. Install Jenkins X CLI:

    brew tap jenkins-x/jx
    brew install jx
  2. Create the Kubernetes cluster:

    jx create cluster aws

    This will create a Kubernetes cluster on AWS using kops. This cluster will have RBAC enabled. It will also have insecure registries enabled. These are needed by the pipeline to store Docker images.

  3. Clone the repo:

    git clone https://github.com/arun-gupta/docker-kubernetes-hello-world
  4. Import the project in Jenkins X:

    jx import

    This will generate a Dockerfile and Helm charts, if they don’t already exist. It also creates a Jenkinsfile with the different build stages identified. Finally, it triggers a Jenkins build and deploys the application to a staging environment by default.

  5. View Jenkins console using jx console. Select the user, project and branch to see the deployment pipeline.

  6. Get the staging URL using jx get apps and view the output from the application in a browser window.

  7. Now change the message displayed by HelloHandler and push to the GitHub repo. Make sure to change the corresponding test as well, otherwise the pipeline will fail. Wait for the deployment to complete and then refresh the browser page to see the updated output.

Deployment Pipeline: Spinnaker

Monitoring: AWS X-Ray

  1. The arungupta/xray:us-west-2 Docker image is already available on Docker Hub. Optionally, you may build the image:

    cd config/xray
    docker build -t arungupta/xray:latest .
    docker image push arungupta/xray:us-west-2
  2. Deploy the DaemonSet: kubectl apply -f xray-daemonset.yaml

  3. Deploy the application using Helm charts

  4. Access the application

  5. Open the X-Ray console and watch the service map and traces. This is tracked as #60.

Monitoring: Prometheus and Grafana

AWS Lambda

Deployment: Package Lambda Functions

cd services; mvn clean package -Plambda

Deployment: Deploy using Serverless Application Model

Serverless Application Model (SAM) defines a standard application model for serverless applications. It extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

sam is the AWS CLI tool for managing serverless applications written with SAM. Install the SAM CLI:

npm install -g aws-sam-local

The complete installation steps for the SAM CLI are at https://github.com/awslabs/aws-sam-local#installation.

  1. Serverless applications are stored as deployment packages in an S3 bucket. Create an S3 bucket:

    aws s3api create-bucket \
      --bucket aws-microservices-deploy-options \
      --region us-west-2 \
      --create-bucket-configuration LocationConstraint=us-west-2

    Make sure to use a bucket name that is unique.

  2. Package the SAM application. This uploads the deployment package to the specified S3 bucket and generates a new file with the code location:

    cd apps/lambda
    sam package \
      --template-file sam.yaml \
      --s3-bucket <s3-bucket> \
      --output-template-file \
      sam.transformed.yaml
  3. Create the resources:

    sam deploy \
      --template-file sam.transformed.yaml \
      --stack-name aws-microservices-deploy-options-lambda \
      --capabilities CAPABILITY_IAM
  4. Test the application:

    1. Greeting endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='GreetingApiEndpoint'].[OutputValue]" \
        --output text`
    2. Name endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='NamesApiEndpoint'].[OutputValue]" \
        --output text`
    3. Webapp endpoint:

      curl `aws cloudformation \
        describe-stacks \
        --stack-name aws-microservices-deploy-options-lambda \
        --query "Stacks[].Outputs[?OutputKey=='WebappApiEndpoint'].[OutputValue]" \
        --output text`/1

Deployment: Test SAM Local

In Mac

  1. sam local start-api --template sam.yaml --env-vars test/env-mac.json

  2. Greeting endpoint: curl http://127.0.0.1:3000/resources/greeting

  3. Name endpoint:

  4. Webapp endpoint: curl http://127.0.0.1:3000/

In Windows

  1. sam local start-api --template sam.yaml --env-vars test/env-win.json

  2. Test the URLs above in a browser.

Deployment Pipeline: AWS CodePipeline

This section will explain how to deploy Lambda + API Gateway via CodePipeline.

  1. cd apps/lambda

  2. aws cloudformation deploy --template-file pipeline.yaml --stack-name aws-compute-options-lambda-pipeline --capabilities CAPABILITY_IAM

  3. git remote add codecommit $(aws cloudformation describe-stacks --stack-name aws-compute-options-lambda-pipeline --query "Stacks[].Outputs[?OutputKey=='RepositoryHttpUrl'].OutputValue" --output text)

  4. Set up your Git credentials by following the documentation. This is required to push the code into the CodeCommit repo created by the CloudFormation stack. Once the Git credentials are set up, use the following command to push the code and trigger the pipeline:

    git push codecommit master
  5. Get the URL to view the deployment pipeline:

    aws cloudformation \
          describe-stacks \
          --stack-name aws-compute-options-lambda-pipeline \
          --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
          --output text

    The deployment pipeline in the AWS console looks like this:

    Lambda Pipeline

Deployment: Composition using AWS Step Functions

Monitoring: AWS X-Ray

Deployment: Remove the stack

  1. aws cloudformation delete-stack --stack-name aws-microservices-deploy-options-lambda

License

This library is licensed under the Amazon Software License.
