Securing a micro-service application on Kubernetes

In this tutorial, you will learn how to secure a micro-service application with Aporeto. We’ll use a slightly modified version of Google’s Hipster Shop as our micro-service application. The following diagram illustrates its architecture.

[Figure: Architecture of the hipster shop]

Once you complete Setting up your cluster and Deploying the hipster shop application, you can complete any one of sections I-IV. However, we recommend completing these sections in sequence. Doing so will show how you can develop your network policies in a development environment and move them to your production environment, matching the life-cycle progression of a microservices application.

As you move through the sections of this tutorial, the requirements increase. Sections III and IV require apoctl. Section IV requires a second Kubernetes cluster.

Setting up your cluster

Prerequisites

Required

Optional

  • A second Kubernetes cluster
    • Minimum: 2 vCPUs per worker node
    • Minimum: 3 vCPUs total in the cluster
  • apoctl

Installing Aporeto

You can use any of the following methods to install Aporeto on your Kubernetes cluster(s).

  • Aporeto web interface quickstart: The easiest method because it does not require apoctl. To access this quickstart, click the Rocket icon in the top right and select Secure a Kubernetes cluster. When the wizard asks you to choose a Trust Model, select Kubernetes.

NOTE

The quickstart wizard generates the YAML needed to install the Aporeto components in your Kubernetes cluster through kubectl and Helm commands. Ensure you have the Helm CLI installed and follow the prerequisite steps in the first section of the quickstart wizard.
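
For example, you can confirm the Helm CLI is available before the wizard's commands need it:

    helm version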

IMPORTANT

Use cluster1 as the namespace for your first cluster. If you plan to complete section IV, use cluster2 as the namespace of your second cluster.

Deploying the hipster shop application

  1. Create the hipster-dev namespace in your Kubernetes cluster.

    kubectl create namespace hipster-dev
    
  2. Navigate to the hipster-dev namespace in the Aporeto web interface; the Aporeto operator should have created this namespace automatically. We will return here once the pods are deployed.

    [Figure: the hipster-dev namespace in the Aporeto web interface]

  3. Returning to your terminal prompt, use the following command to deploy the hipster shop in the hipster-dev namespace in your Kubernetes cluster.

    kubectl create -f \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/hipster-dev.yaml
    

    TIP

    If you are deploying in AWS/EKS, we recommend using nlb as the load balancer type. This recommendation applies throughout the tutorial to any service exposed as type LoadBalancer in AWS/EKS.
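
    For reference, a minimal sketch of such a Service manifest, assuming the demo's frontend selector and target port (the annotation is the same one applied with kubectl patch in a later step):

       apiVersion: v1
       kind: Service
       metadata:
         name: frontend-external
         annotations:
           # Same annotation applied via kubectl patch later in this section
           service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
       spec:
         type: LoadBalancer
         selector:
           app: frontend      # assumption: matches the demo's frontend pods
         ports:
           - port: 80
             targetPort: 8080 # assumption: the demo frontend listens on 8080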

  4. Check the status of the hipster-dev pods and services.

    watch kubectl get pods,svc -n hipster-dev
    

    NOTE

    The above command uses watch, which is not installed by default on macOS. While we recommend installing it, you can also omit the watch portion of the command and rerun it until the pods reach the necessary status.
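
    If you use Homebrew, for example:

       brew install watch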

  5. This command should return something like the following.

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/adservice-7ffcb6fdd4-846vl               1/1     Running   0          16m
    pod/cartservice-64fcb99689-5phc4             1/1     Running   2          21m
    pod/checkoutservice-89f9dcf5d-k5jpb          1/1     Running   0          21m
    pod/currencyservice-75c9dff8-bfx8c           1/1     Running   0          21m
    pod/emailservice-79cf797588-nht76            1/1     Running   0          21m
    pod/fake-attacker-758d7c6698-q66nr           1/1     Running   0          21m
    pod/frontend-79d9db89d9-z7l4s                1/1     Running   0          21m
    pod/loadgenerator-59f7f959dd-lnlbp           1/1     Running   0          21m
    pod/paymentservice-6c48cbf74d-tlhsc          1/1     Running   0          21m
    pod/productcatalogservice-656d6c65b6-qrhsb   1/1     Running   0          21m
    pod/recommendationservice-7c9b6b7796-d9bfl   1/1     Running   0          21m
    pod/redis-cart-598c9b7695-m2zsg              1/1     Running   0          21m
    pod/shippingservice-85d48cd7bb-b778m         1/1     Running   0          21m
    
    NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    service/adservice               ClusterIP      10.35.244.232   <none>        9555/TCP       21m
    service/cartservice             ClusterIP      10.35.249.117   <none>        7070/TCP       21m
    service/checkoutservice         ClusterIP      10.35.248.8     <none>        5050/TCP       21m
    service/currencyservice         ClusterIP      10.35.252.112   <none>        7000/TCP       21m
    service/emailservice            ClusterIP      10.35.250.68    <none>        5000/TCP       21m
    service/frontend                ClusterIP      10.35.241.88    <none>        80/TCP         21m
    service/frontend-external       LoadBalancer   10.35.247.62    34.94.76.6    80:30052/TCP   21m
    service/paymentservice          ClusterIP      10.35.241.63    <none>        50051/TCP      21m
    service/productcatalogservice   ClusterIP      10.35.249.32    <none>        3550/TCP       21m
    service/recommendationservice   ClusterIP      10.35.251.226   <none>        8080/TCP       21m
    service/redis-cart              ClusterIP      10.35.252.161   <none>        6379/TCP       21m
    service/shippingservice         ClusterIP      10.35.255.35    <none>        50051/TCP      21m
    
  6. Once all of the pods achieve a STATUS of Running, press CTRL+C to exit watch.

    NOTE

    If you deployed in AWS/EKS, you can change the load balancer type for the frontend-external service using kubectl patch:

       kubectl patch svc frontend-external -p \
       '{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-type":"nlb"}}}' \
       -n hipster-dev
    

  7. Confirm you are able to access the application from an external source such as your laptop browser. The frontend-external service is exposed as a load balancer by default. If you are using a managed Kubernetes service like EKS, GKE, or AKS, the load balancer should be created automatically. In the above example, the hipster shop is accessible at the following IP address: http://34.94.76.6.
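
    If you prefer the command line, you can read the address with a jsonpath query like the following (the field is .ip on GKE/AKS and typically .hostname on AWS/EKS):

    kubectl get svc frontend-external -n hipster-dev \
     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'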

  8. Go shopping in the hipster store and make a fake purchase. Review the application flows that this generates in the Aporeto web interface.

  9. From the Aporeto web interface, navigate to the hipster-dev namespace.

  10. Select Platform.

  11. Copy the following expression to your clipboard.

    $namespace then env then app
    
  12. Paste it into the grouping expression box to better organize the objects that represent the pods in the corresponding Kubernetes namespace.

    NOTE

    You have just entered an ordering of identity value properties present in each of the processing units that represent the pods in the Kubernetes cluster. Aporeto allows free-form grouping based on identity values.

  13. Locate the processing units that were created, each representing the pods in the hipster shop application.

  14. Select a pod and expand the drop-down to view its identity properties.

  15. Take note of the project=companystore, app=, and env= identity values in the User Metadata section. We will use these identity values to secure the application in the next sections.

    NOTE

    • Dotted green lines indicate connections allowed by the default allow behavior when no policy is defined.
    • Solid green lines indicate successful communication with a policy defined.
    • Solid red lines indicate blocked communication.

  16. Observe the fake-attacker pod periodically connecting to the pods that are part of the micro-service application. We will secure the application against this attacker in the next section!

I. Encrypting pod-to-pod communications and restricting external connections

Overview

In this section, we show how you can secure a microservices application without in-depth knowledge of its inner workings. The pod label project=companystore automatically becomes a tag in Aporeto. We use this tag to:

  • Allow all of the pods that are a part of the application to communicate with each other.
  • Encrypt pod-to-pod communications.
  • Restrict pod communications outside of the cluster to the minimum necessary.
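
You can confirm this label is present on the pods before applying any policy; for example:

    kubectl get pods -n hipster-dev -L project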

Importing the external network and network policy definition

We provide a predefined YAML file containing the external networks and network policies. You can use either of the following methods to import it.

Using apoctl

If you have apoctl installed, you can use the following command to import the YAML file.

cat <<EOF | apoctl api import -n $APOCTL_NAMESPACE/cluster1/hipster-dev -f -
APIVersion: 0
data:
  externalnetworks:
    - associatedTags:
        - 'ext:network=dns'
      description: all dns
      entries:
        - 0.0.0.0/0
      name: dns
      ports:
        - '53'
      protocols:
        - udp
        - tcp
    - associatedTags:
        - 'ext:network=any'
      description: ' any ip'
      entries:
        - 0.0.0.0/0
      name: internet
      protocols:
        - tcp
        - udp
    - associatedTags:
        - 'ext:network=metadata'
      description: cloud metadata
      entries:
        - 169.254.169.254
      name: metadata
      ports:
        - '80'
        - '443'
      protocols:
        - tcp
  networkaccesspolicies:
    - description: allow outbound cloud metadata
      logsEnabled: true
      name: cloud metadata
      object:
        - - 'ext:network=metadata'
      subject:
        - - project=companystore
    - description: ring fence policy
      encryptionEnabled: true
      logsEnabled: true
      name: company store
      object:
        - - project=companystore
      subject:
        - - project=companystore
    - description: allow dns
      name: dns
      object:
        - - 'ext:network=dns'
      subject:
        - - '\$identity=processingunit'
    - description: hipstershop
      logsEnabled: true
      name: frontend-inbound
      object:
        - - app=frontend
      subject:
        - - 'ext:network=any'
    - description: hipstershop
      logsEnabled: true
      name: outbound-allow
      object:
        - - 'ext:network=any'
      subject:
        - - app=emailservice
identities:
  - externalnetwork
  - networkaccesspolicy
label: Free Trial
EOF
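
To confirm the import, you can list the policies back with the same export subcommand used later in this tutorial:

apoctl api -n $APOCTL_NAMESPACE/cluster1/hipster-dev \
export networkaccesspolicy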

Skip to Reviewing the results.

Using the Aporeto web interface

  1. Use the following command to download the ringfence.yaml file.

    wget \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/ringfence.yaml
    
  2. In the hipster-dev namespace in the Aporeto web interface, expand Data Management and select Import/Export.

  3. Drag and drop the ringfence.yaml file into the Import window.

  4. Select Import at the bottom to apply the configuration file.

Reviewing the results

  1. In the hipster-dev namespace in the Aporeto web interface, expand Network Authorization and select External Networks to review the external networks you just created.

  2. Expand Network Authorization and select Policies to review the policies. Expand a policy to understand it better. Click the Edit button to see how network policies can be created from the web interface. Select Cancel to exit.

    IMPORTANT

    Do not modify any existing policies until you have finished the tutorial. If you modify any policies, repeat Importing the external network and network policy definition.

  3. Go shopping in the hipster shop and make a fake purchase.

  4. Select Platform. You may notice some red lines to Somewhere. These lines represent unauthorized data exfiltration from your application, blocked by the network policy we just applied. Notice that the connections from the fake attacker (an external source) have turned red, indicating the connections are blocked.

  5. Click on any green line. Observe the allowed communication flows under Access and associated policy under Policies. Notice the lock icon on the green flows indicating that Aporeto has enabled mutual TLS encryption between the pods in the application.

Congratulations!

  • You have secured the hipster shop.
  • You’ve blocked the attacking pod.
  • No IP addresses were used to secure the application.
  • The security applied is based on the cryptographically signed identity.

II. Restricting pod-to-pod traffic

Overview

In this section, we show you how to adopt a stronger security posture, sometimes referred to as zero trust. We will no longer assume that all pods within the hipster shop application can be trusted. Instead, we restrict pod-to-pod communications to the minimum necessary.

By blocking unnecessary communications between pods, we can minimize the blast radius of a compromised pod. For example, if an attacker gains access to the frontend pod, they will be unable to reach the PaymentService pod.

Each pod has a label defining its role, using the syntax app=<role>. Our policies use these labels to block unnecessary pod traffic.
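
These labels are visible on the pods themselves; assuming the demo manifests set app and env as pod labels (which is how they surface as Aporeto tags), you can list them with:

    kubectl get pods -n hipster-dev -L app,env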

At this stage, our hipster shop application is still under development. All of the pods have the label env=dev. In the next section, we will deploy the hipster shop application into production. The pods in the production hipster shop application will have the label env=prod. We will apply a policy in this section that uses these tags to prevent the pods in the development application from communicating with the pods in the production application.

Importing the external network and network policy definition

We provide a predefined YAML file containing the external networks and network policies. You can use either of the following methods to import it.

Using apoctl

If you have apoctl installed, you can use the following command to import the YAML file.

cat <<EOF | apoctl api import -n $APOCTL_NAMESPACE/cluster1/hipster-dev -f -
APIVersion: 0
data:
  externalnetworks:
    - associatedTags:
        - 'ext:network=dns'
      description: all dns
      entries:
        - 0.0.0.0/0
      name: dns
      ports:
        - '53'
      protocols:
        - udp
        - tcp
    - associatedTags:
        - 'ext:network=any'
      description: ' any ip'
      entries:
        - 0.0.0.0/0
      name: internet
      protocols:
        - tcp
        - udp
    - associatedTags:
        - 'ext:network=metadata'
      description: cloud metadata
      entries:
        - 169.254.169.254
      name: metadata
      ports:
        - '80'
        - '443'
      protocols:
        - tcp
  networkaccesspolicies:
    - description: allow outbound cloud metadata
      logsEnabled: true
      name: cloud-metadata
      object:
        - - 'ext:network=metadata'
      subject:
        - - project=companystore
    - description: ring fence policy
      disabled: true
      encryptionEnabled: true
      logsEnabled: true
      name: company-store
      object:
        - - project=companystore
      subject:
        - - project=companystore
    - description: allow dns
      name: dns
      object:
        - - 'ext:network=dns'
      subject:
        - - '\$identity=processingunit'
    - description: hipstershop
      logsEnabled: true
      name: frontend-inbound
      object:
        - - app=frontend
      subject:
        - - 'ext:network=any'
    - description: hipstershop
      logsEnabled: true
      name: outbound-allow
      object:
        - - 'ext:network=any'
      subject:
        - - app=emailservice
    - description: hipstershop
      encryptionEnabled: true
      logsEnabled: true
      name: cartservice
      object:
        - - app=redis-cart
      subject:
        - - app=cartservice
    - description: hipstershop
      encryptionEnabled: true
      logsEnabled: true
      name: checkoutservice
      object:
        - - app=emailservice
        - - app=paymentservice
        - - app=shippingservice
        - - app=currencyservice
        - - app=productcatalogservice
        - - app=cartservice
      subject:
        - - app=checkoutservice
    - description: hipstershop
      logsEnabled: true
      name: frontend
      object:
        - - app=adservice
        - - app=checkoutservice
        - - app=shippingservice
        - - app=currencyservice
        - - app=productcatalogservice
        - - app=recommendationservice
        - - app=cartservice
      subject:
        - - app=frontend
    - description: hipstershop
      logsEnabled: true
      name: load-generator
      object:
        - - app=frontend
      subject:
        - - app=loadgenerator
    - description: hipstershop
      encryptionEnabled: true
      logsEnabled: true
      name: recommendationservice
      object:
        - - app=productcatalogservice
      subject:
        - - app=recommendationservice
    - action: Reject
      description: env separation
      logsEnabled: true
      name: deny-dev-to-prod
      object:
        - - env=prod
      subject:
        - - env=dev
    - action: Reject
      description: env separation
      logsEnabled: true
      name: deny-prod-to-dev
      object:
        - - env=dev
      subject:
        - - env=prod
identities:
  - externalnetwork
  - networkaccesspolicy
label: Free Trial
EOF

Skip to Reviewing the results.

Using the Aporeto web interface

  1. Use the following command to download the pod-to-pod.yaml file.

    wget \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/pod-to-pod.yaml
    
  2. In the hipster-dev namespace in the Aporeto web interface, expand Data Management and select Import/Export.

  3. Drag and drop the pod-to-pod.yaml file into the Import window.

  4. Select Import at the bottom to apply the configuration file.

Reviewing the results

  1. In the hipster-dev namespace in the Aporeto web interface, expand Network Authorization and select Policies.

  2. Review the new network policies, observing how they restrict pod-to-pod communications to the minimum necessary and separate the dev and prod environments.

  3. Go shopping in the hipster shop and make a fake purchase on your secured application.

  4. Select Platform. You will notice some red lines to Somewhere. These lines represent unauthorized data exfiltration from your application, blocked by the network policy we just applied. Notice the connections from the fake attacker have turned red, indicating the connections are blocked.

  5. Click on any green line. Observe the allowed communication flows under Access and associated policy under Policies. Notice the lock icon on the green flows indicating that Aporeto has enabled mutual TLS encryption between the pods in the application.

Congratulations!

  • You have further secured the Hipster Shop with more granular network policies for a zero trust posture.
  • You’ve blocked the attacking pod.
  • No IP addresses were used to secure the application.
  • The security applied is based on the cryptographically signed identity.

III. Applying network policies as custom resource definitions

Overview

Aporeto creates custom resource definitions (CRDs) in Kubernetes. While you can create, read, update, and delete Aporeto network policy objects through Aporeto, you can alternatively manipulate these objects through the Kubernetes API. This can provide a smoother integration with your continuous integration and deployment pipelines.
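
You can list the Aporeto resource types by API group; for example:

    kubectl api-resources --api-group=api.aporeto.io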

In this section of the tutorial we will export the Aporeto YAML we created in the previous section for our development instance of the hipster shop application, transform it into Kubernetes objects, and use kubectl to apply it to a production instance of the same hipster shop application in the same cluster.

Prerequisite

This section requires apoctl to be installed.

Reviewing the Aporeto custom resource definitions

  1. Use the following command to retrieve a list of the Aporeto CRDs.

    kubectl get crds | grep aporeto
    
  2. It should return something like the following.

    externalnetworks.api.aporeto.io                2019-06-30T05:43:28Z
    httpresourcespecs.api.aporeto.io               2019-06-30T05:43:28Z
    namespacemappingpolicies.api.aporeto.io        2019-06-30T05:43:28Z
    namespaces.api.aporeto.io                      2019-06-30T05:43:28Z
    networkaccesspolicies.api.aporeto.io           2019-06-30T05:43:28Z
    podinjectorselectors.k8s.aporeto.io            2019-06-30T05:43:28Z
    servicedependencies.api.aporeto.io             2019-06-30T05:43:28Z
    servicemappings.k8s.aporeto.io                 2019-06-30T05:43:28Z
    services.api.aporeto.io                        2019-06-30T05:43:28Z
    tokenscopepolicies.api.aporeto.io              2019-06-30T05:43:28Z
    
  3. Use the following command to retrieve an Aporeto network policy CRD.

    kubectl describe crds/networkaccesspolicies.api.aporeto.io
    
  4. It should return something like the following.

    Name:         networkaccesspolicies.api.aporeto.io
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  apiextensions.k8s.io/v1beta1
    Kind:         CustomResourceDefinition
    Metadata:
     Creation Timestamp:  2019-06-30T05:43:28Z
     Generation:          1
     Resource Version:    16092
     Self Link:
    /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/networkaccesspolicies.api.aporeto.io
     UID:                 fc6217d2-9af9-11e9-9a35-42010aa80027
    Spec:
     Additional Printer Columns:
       JSON Path:    .spec.action
       Description:  List of CIDRs or domain name.
       Name:         action
       Type:         string
    ...
    

Converting an Aporeto network policy to a Kubernetes CRD

  1. Create the hipster-prod namespace in Kubernetes.

    kubectl create namespace hipster-prod
    
  2. Convert the pod-to-pod.yaml developed in hipster-dev into a Kubernetes CRD and apply it to a production instance, which will run in the hipster-prod namespace.

    apoctl api import \
    --url https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/pod-to-pod.yaml \
    --to-k8s-crd \
    | sed -e 's/aporeto.io\/v1alpha1/api.aporeto.io\/v1beta1/g' \
    | kubectl create -f - -n hipster-prod
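
    If you prefer to inspect the generated manifests before applying them, you can write the conversion to a file first (same flags, without the kubectl pipe):

    apoctl api import \
    --url https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/pod-to-pod.yaml \
    --to-k8s-crd > pod-to-pod-crd.yaml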
    
  3. Use the following commands to explore the network policies and external networks you just created.

    kubectl get networkaccesspolicies.api.aporeto.io \
     -n hipster-prod
    
    kubectl describe networkaccesspolicies.api.aporeto.io/frontend \
     -n hipster-prod
    
    kubectl get externalnetworks.api.aporeto.io \
     -n hipster-prod
    
    kubectl describe externalnetworks.api.aporeto.io/dns \
     -n hipster-prod
    

Deploying the hipster shop application

  1. Use the following command to deploy the hipster shop application into the hipster-prod namespace.

    kubectl create -f \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/hipster-prod.yaml
    
  2. Check the status of the hipster shop deployment.

    kubectl get pods,svc -n hipster-prod
    
  3. Ensure that all pods achieve a STATUS of Running and copy the EXTERNAL-IP of the frontend-external service.

  4. Paste the EXTERNAL-IP of the frontend-external service into a browser and confirm that you can place an order.

Reviewing the results

  1. Navigate to the hipster-prod namespace in the Aporeto web interface.

  2. Expand Network Authorization and select External Networks to review the external networks you just created.

  3. Expand Network Authorization and select Policies to review the policies.

  4. Select Platform. You will notice some red lines to Somewhere. These lines represent unauthorized data exfiltration from your application, blocked by the network policy we just applied. Notice the connections from the fake attacker have turned red, indicating the connections are blocked.

  5. Click on any green line. Observe the allowed communication flows under Access and associated policy under Policies. Notice the lock icon on the green flows indicating that Aporeto has enabled mutual TLS encryption between the pods in the application.

Congratulations!

  • You have secured your production application using the policies created in your development environment.
  • The attacking pod is also blocked.
  • The identity-based policy model has been carried over and into the Kubernetes cluster using CRDs.

IV. Securing cross-cluster applications

Overview

In this section, we secure a production instance of hipster shop application that’s split across two Kubernetes clusters. The following diagram shows its split architecture.

[Figures: hipster shop service architecture split across cluster1 and cluster2]

We will export the Aporeto YAML we created in the previous sections for our development instance of the hipster shop application, and import the YAML into a different namespace in the Aporeto platform to secure the split cluster production instance of the same hipster shop application.

Prerequisites

This section requires a second Kubernetes cluster named cluster2 with Aporeto installed.

Preparing the clusters

  1. To save resources, delete the hipster-prod namespace; we won’t need it any longer.

    kubectl delete namespace hipster-prod
    
  2. Create a hipster-multi namespace in both clusters. Assuming you have multiple clusters defined in your kubeconfig, you can use the following commands.

    kubectl config get-contexts
    
    kubectl config use-context {{$CLUSTER1}}
    kubectl create namespace hipster-multi
    
    kubectl config use-context {{$CLUSTER2}}
    kubectl create namespace hipster-multi
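
    The {{$CLUSTER1}} and {{$CLUSTER2}} placeholders stand for your own context names. A sketch of capturing them, assuming your contexts are listed in that order:

    export CLUSTER1=$(kubectl config get-contexts -o name | sed -n 1p)
    export CLUSTER2=$(kubectl config get-contexts -o name | sed -n 2p)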
    

Importing the external networks and network policies

In this section, we’ll export the policies from the hipster-dev namespace and import them into the hipster-multi namespace on each of the clusters.

TIP

If you did not complete the previous sections, run the commands below and skip to Deploying the split application.

apoctl api import \
  -n $APOCTL_NAMESPACE/cluster1/hipster-multi \
  --url https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/pod-to-pod.yaml
apoctl api import \
  -n $APOCTL_NAMESPACE/cluster2/hipster-multi \
  --url https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/pod-to-pod.yaml

  1. Export the external network definitions from the cluster1/hipster-dev namespace.

    apoctl api -n $APOCTL_NAMESPACE/cluster1/hipster-dev \
    export externalnetworks > hipster_ext_net.yaml
    
  2. Export the network policy definitions from the cluster1/hipster-dev namespace.

    apoctl api -n $APOCTL_NAMESPACE/cluster1/hipster-dev \
    export networkaccesspolicy > hipster_netpol.yaml
    
  3. Import the exported external network definition into the cluster1/hipster-multi namespace and the cluster2/hipster-multi namespace.

    apoctl api -n $APOCTL_NAMESPACE/cluster1/hipster-multi \
    import -f hipster_ext_net.yaml
    apoctl api -n $APOCTL_NAMESPACE/cluster2/hipster-multi \
    import -f hipster_ext_net.yaml
    
  4. Import the exported network policy into the cluster1/hipster-multi namespace and the cluster2/hipster-multi namespace.

    apoctl api -n $APOCTL_NAMESPACE/cluster1/hipster-multi \
    import -f hipster_netpol.yaml
    apoctl api -n $APOCTL_NAMESPACE/cluster2/hipster-multi \
    import -f hipster_netpol.yaml
    

Deploying the split application

  1. With your kubectl context set to cluster2, issue the following command to apply our svc-cluster-2.yaml file.

    kubectl create -f \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/svc-cluster-2.yaml
    
  2. It should return the following.

    service/cartservice created
    service/frontend created
    service/frontend-external created
    service/productcatalogservice created
    service/recommendationservice created
    service/redis-cart created
    
  3. Use the following command to review the services you just deployed in cluster2.

    kubectl get svc -n hipster-multi
    
  4. It should return something like the following.

    NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)            AGE
    cartservice             LoadBalancer   10.15.253.38    104.154.186.47   7070:30251/TCP     2m37s
    frontend                ClusterIP      10.15.250.183   <none>           80/TCP             2m37s
    frontend-external       LoadBalancer   10.15.249.187   34.68.150.252    80:30920/TCP       2m37s
    productcatalogservice   LoadBalancer   10.15.250.157   35.184.250.205   3550:32420/TCP     2m36s
    recommendationservice   ClusterIP      10.15.240.154   <none>           8080/TCP           2m36s
    redis-cart              ClusterIP      10.15.255.215   <none>           6379/TCP           2m36s
    

    IMPORTANT

    Ensure that the EXTERNAL-IP fields of the LoadBalancer services have populated before proceeding; this may take a few minutes. If these fields do not populate, you can set the required environment variables manually instead.
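
    You can watch for the addresses to populate with:

       watch kubectl get svc -n hipster-multi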

  5. Run the command below to automatically set environment variables that will be used in later steps.

    export $(kubectl get svc -n hipster-multi \
          -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}_SVC={.status.loadBalancer.ingress[*].hostname}{.status.loadBalancer.ingress[*].ip}{"\n"}{end}' \
          | sed 's/frontend-/frontend/g' \
          | awk -F'[=]' '{ print toupper($1)"="$2 }')
    
  6. Alternatively, you can set the required environment variables manually. The values shown below are examples that match the sample kubectl get svc output above.

    export CARTSERVICE_SVC=104.154.186.47
    export FRONTENDEXTERNAL_SVC=34.68.150.252
    export PRODUCTCATALOGSERVICE_SVC=35.184.250.205
    
  7. Switch your kubeconfig context to cluster1, as shown below.

    kubectl config use-context {{$CLUSTER1}}
    
  8. Issue the following command to apply our svc-cluster-1.yaml file to cluster1.

    kubectl create -f \
    https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/svc-cluster-1.yaml
    
  9. It should return something like the following.

    service/emailservice created
    service/checkoutservice created
    service/paymentservice created
    service/currencyservice created
    service/shippingservice created
    service/adservice created
    
  10. Use the following command to review the services you just deployed in cluster1.

    kubectl get svc -n hipster-multi
    
  11. It should return something like the following.

    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)           AGE
    adservice         LoadBalancer   10.35.251.219   34.94.90.88    9555:31847/TCP    3m45s
    checkoutservice   LoadBalancer   10.35.250.45    34.94.68.198   5050:31845/TCP    3m46s
    currencyservice   LoadBalancer   10.35.250.72    34.94.108.84   7000:32157/TCP    3m46s
    emailservice      ClusterIP      10.35.245.81    <none>         5000/TCP          3m47s
    paymentservice    ClusterIP      10.35.243.196   <none>         50051/TCP         3m46s
    shippingservice   LoadBalancer   10.35.241.119   34.94.50.52    50051:30924/TCP   3m46s
    

    IMPORTANT

    Ensure that the EXTERNAL-IP fields of the LoadBalancer services have populated before proceeding; this may take a few minutes. If these fields do not populate, you can set the required environment variables manually instead.

  12. Run the command below to automatically set environment variables that will be used in later steps.

    export $(kubectl get svc -n hipster-multi \
          -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}_SVC={.status.loadBalancer.ingress[*].hostname}{.status.loadBalancer.ingress[*].ip}{"\n"}{end}' \
          | sed 's/frontend-/frontend/g' \
          | awk -F'[=]' '{ print toupper($1)"="$2 }')
    
  13. Alternatively, you can set the required environment variables manually. The values shown below are examples that match the sample kubectl get svc output above.

    export ADSERVICE_SVC=34.94.90.88
    export CHECKOUTSERVICE_SVC=34.94.68.198
    export CURRENCYSERVICE_SVC=34.94.108.84
    export SHIPPINGSERVICE_SVC=34.94.50.52
    
  14. Confirm the necessary environment variables have been set. Manually set them if they are not.

    env | grep _SVC=
    
  15. It should return something like the following.

    CARTSERVICE_SVC=104.154.186.47
    FRONTENDEXTERNAL_SVC=34.68.150.252
    PRODUCTCATALOGSERVICE_SVC=35.184.250.205
    ADSERVICE_SVC=34.94.90.88
    CHECKOUTSERVICE_SVC=34.94.68.198
    CURRENCYSERVICE_SVC=34.94.108.84
    SHIPPINGSERVICE_SVC=34.94.50.52
    
  16. Apply the cluster2 service variables (set in steps 5 and 6) to the cluster1 deployment file using the command below:

    wget -O- -q https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/deployment-cluster-1.yaml \
    | sed \
    -e "s/{{PRODUCTCATALOG_SERVICE}}/$PRODUCTCATALOGSERVICE_SVC/g" \
    -e "s/{{CART_SERVICE}}/$CARTSERVICE_SVC/g" \
    | kubectl create -f - -n hipster-multi
    
  17. Switch contexts to cluster2.

    kubectl config use-context {{$CLUSTER2}}
    
  18. Apply the cluster1 service variables (set in steps 12 and 13) to the cluster2 deployment file using the command below:

    wget -O- -q https://raw.githubusercontent.com/aporeto-inc/microservices-demo/master/release/deployment-cluster-2.yaml \
    | sed \
    -e "s/{{CURRENCY_SERVICE}}/$CURRENCYSERVICE_SVC/g" \
    -e "s/{{SHIPPING_SERVICE}}/$SHIPPINGSERVICE_SVC/g" \
    -e "s/{{CHECKOUT_SERVICE}}/$CHECKOUTSERVICE_SVC/g" \
    -e "s/{{AD_SERVICE}}/$ADSERVICE_SVC/g" \
    | kubectl create -f - -n hipster-multi
    
  19. At this point the application should be fully deployed and accessible! Access frontend-external from a browser and ensure you can browse the hipster shop. Recall that its address was saved as an environment variable.

    env | grep FRONTEND
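
    On macOS, for example, you could then open it directly (use xdg-open on Linux):

    open "http://$FRONTENDEXTERNAL_SVC"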
    

Go shopping in the hipster shop

  1. Provided you have available resources in your cluster, scale out the product catalog deployment on cluster2.

    kubectl scale --replicas=3 deployment/productcatalogservice \
     -n hipster-multi
    
  2. In the Aporeto web interface, navigate to the parent namespace of cluster1 and cluster2 and select Platform.

  3. Copy and paste the following string into the Enter a filter box.

    namespace matches hipster-multi
    

You should see a view of the split application running across two clusters.

TIP

If you don’t see flows, either access the application again or change the reported flows window to the last five minutes. Move the groups around, and use two fingers to zoom in and out to create a comfortable view.

Congratulations!

  • You have secured your production, multi-cluster application instance with the same identity-based policy used to secure the single cluster development instance.
  • Cross-cluster communication between the pods is encrypted and secured.
  • The attacking pod is also blocked.
  • The network policy did not have to be updated as you scaled the productcatalogservice micro-service.