Network security is a pillar of a secure compute platform, especially on multi-tenant clusters. In this blog, we will look into constraining the network access of OpenFaaS functions and Kubernetes pods to isolate them from each other as well as from external services.
Pod Networking with EKS
First, let us understand the network of our cluster and how pods gain network access to each other and the internet. Amazon EKS clusters are deployed in an Amazon Virtual Private Cloud (VPC), which provides a virtual network for the AWS resources deployed into it (e.g. the EC2 instances used as Kubernetes nodes). These resources, EC2 instance nodes in our case, communicate through the VPC using a virtual network interface called an elastic network interface (ENI). An ENI can be configured with a primary private IP and a set of secondary private IPs from the range of valid IPs in the VPC, as well as a public IP, a MAC address, security groups, and various flags.
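If you would like to inspect these interfaces on your own cluster, the AWS CLI can list the ENIs attached to a worker node together with their private IPs. A quick sketch; the instance ID below is a placeholder for one of your node instance IDs.
# List the private IPs (primary and secondary) of the ENIs attached to a node
# (replace i-0123456789abcdef0 with one of your worker node instance IDs)
$ aws ec2 describe-network-interfaces \
    --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
    --query "NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress"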
A pod is a set of containers deployed on an EC2 instance node, so it eventually needs to gain network connectivity through the ENI on that node. Pod networking connectivity and communication is enabled through the Amazon VPC CNI plugin, which builds on CNCF's Container Network Interface (CNI) project. CNI is a specification for configuring container networking; a CNI plugin implements it and determines how containers gain network connectivity through the host they are running on. Network plugins on Kubernetes are configured through the kubelet, and on an EKS cluster the kubelet on each node is configured to use the Amazon VPC CNI network plugin. When a pod is created, the kubelet invokes the network plugin, which creates a linked pair of virtual ethernet (veth) interfaces, one in the pod's network namespace and one in the host's default namespace, and assigns the pod's interface a private IP from the ENI's available secondary private IPs. It then configures the necessary network rules (e.g. route table entries) that allow traffic to flow from the virtual ethernet interface to the ENI and into the VPC, essentially turning a pod into a virtual EC2 instance running on an EC2 instance.
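You can observe this on a live cluster: pod IPs reported by Kubernetes come straight from the VPC's address range, and each pod shows up on its node as a host-side veth interface plus a host route. Roughly, assuming you have a shell on a worker node, and noting that the VPC CNI typically prefixes the host-side interfaces with "eni":
# Pod IPs are VPC private IPs taken from the node ENIs' secondary IPs
$ kubectl get pods --all-namespaces -o wide
# On a worker node: one veth interface and one host route per pod
# (host-side interfaces created by the VPC CNI are typically prefixed with "eni")
$ ip addr | grep eni
$ ip route | grep eni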
Kubernetes Network Policies and Calico
Now that we understand how pods communicate, we need to constrain their network access to mitigate malicious activity over the network. We can do this at the cluster level using network policies. Network policies define rules that govern how groups of pods are allowed to communicate with different endpoints on the cluster. A policy is deployed in a namespace and governs the network communication of a particular selection of pods.
The following NetworkPolicy is deployed in the secure-namespace namespace and applies to the pods labeled role=client.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: secure-compute-network-policy
  namespace: secure-namespace
spec:
  podSelector:
    matchLabels:
      role: client
...
A policy can contain Ingress rules that govern incoming pod traffic, Egress rules that govern outgoing pod traffic, or both.
...
  policyTypes:
  - Ingress
  - Egress
...
Rules can be defined across a whole namespace by using the namespaceSelector, across a set of pods inside a namespace by using the namespaceSelector in conjunction with the podSelector, or across a particular IP CIDR range by using the ipBlock selector. For example, the following policy allows incoming traffic from all pods in namespaces labeled deploy=local-nginx-service and from pods labeled role=secure-service in the namespace the NetworkPolicy is deployed in (i.e. secure-namespace). As for outgoing traffic, it allows traffic to the static IP address 8.8.8.8 and to pods labeled role=nginx-gateway in namespaces labeled deploy=local-nginx-service.
...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          deploy: local-nginx-service
    - podSelector:
        matchLabels:
          role: secure-service
  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32
    - namespaceSelector:
        matchLabels:
          deploy: local-nginx-service
      podSelector:
        matchLabels:
          role: nginx-gateway
Network policies are enforced through network plugins and network policy providers that support and implement NetworkPolicy resources. EKS clusters use the Amazon VPC CNI network plugin, which integrates well with Project Calico, a network policy engine that provides support for enforcing NetworkPolicy resources. Calico is deployed as a Kubernetes DaemonSet that runs a Calico pod on each node; this pod intercepts all pod traffic and filters it according to the deployed network policies.
AWS Security Groups
While AWS provides security groups, they cannot be used as an alternative to network policies; they can only complement them. Security groups define networking rules (e.g. ingress, egress) at the instance level (i.e. EC2 instances) and are unaware of the cluster deployed on top of those instances. As such, security groups can filter the type of traffic (e.g. SSH, HTTP) that can reach the nodes, and the source or destination IPs and ports the nodes can communicate with, but they cannot regulate pod-to-pod network communication. Network policies, on the other hand, define networking rules at pod granularity, controlling the network traffic of the Kubernetes cluster without any knowledge of the security groups configured for the cluster nodes. While both security groups and network policies can help us achieve network security, we will only be looking into network policies in this blog.
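If you are curious which security groups your worker nodes (and therefore their ENIs) are using, a query along these lines works. The tag filter and cluster name are illustrative and should be adjusted to match how your nodes are tagged.
# Show the security groups attached to the cluster's worker node instances
# (the tag filter and "my-cluster" are illustrative; adjust them to your setup)
$ aws ec2 describe-instances \
    --filters "Name=tag:eks:cluster-name,Values=my-cluster" \
    --query "Reservations[].Instances[].SecurityGroups[].{Name:GroupName,ID:GroupId}"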
Putting it all together: Demo my Network
0. Let us continue from our setup in [Part 3](link to part 3)
1. First, let us deploy the Calico DaemonSet that will provide network policy support on our cluster.
# Deploy Calico DaemonSet
$ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.6/config/v1.6/calico.yaml
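Before moving on, it is worth confirming that the DaemonSet is up and that a calico-node pod is running on every node (the exact labels may vary slightly across Calico versions):
# Verify the Calico DaemonSet is running one pod per node
$ kubectl get daemonset calico-node -n kube-system
$ kubectl get pods -n kube-system -l k8s-app=calico-node -o wide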
2. And deploy an internal nginx service.
# Create nginx service configuration
$ cat <<EOF | tee nginx-svc.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: nginx
  labels:
    role: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
# Deploy nginx service
$ kubectl apply -f nginx-svc.yaml
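The service should now be reachable inside the cluster at my-nginx.nginx. A quick way to confirm the namespace label, service, and pods before wiring them into a function:
# Confirm the nginx namespace, service, and pods are up
$ kubectl get namespace nginx --show-labels
$ kubectl get svc,pods -n nginx -o wide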
3. We will create and invoke a function, net-demo, to demonstrate internal network access to our nginx service and external network access to www.google.com.
# Create a new function
$ faas-cli new net-demo --lang python3
# Update function handler code
$ cat << EOF | tee net-demo/handler.py
import requests

def handle(req):
    if req == "nginx":
        r = requests.get("http://my-nginx.nginx:80")
        print(r.content)
    elif req == "google":
        r = requests.get("http://www.google.com")
        print(r.content)
    return
EOF
# Set dependencies
$ cat << EOF | tee net-demo/requirements.txt
requests
EOF
# Point container to docker registry
$ sed -i -e "s;image: net-demo:latest;image: $DOCKER_USER/net-demo:latest;" net-demo.yml
# Append gVisor profile
$ cat <<EOF | tee -a net-demo.yml
    annotations:
      com.openfaas.profile: gvisor
EOF
$ faas-cli up -f net-demo.yml
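faas-cli up builds, pushes, and deploys the function; you can check that it is live with faas-cli list, and view its logs while invoking it:
# Verify the function is deployed and ready
$ faas-cli list
# View the function's logs while you invoke it
$ faas-cli logs net-demo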
4. Let us verify the network connectivity of our function with two simple requests.
$ curl http://127.0.0.1:8080/function/net-demo -d "nginx"
$ curl http://127.0.0.1:8080/function/net-demo -d "google"
5. Now we will demonstrate pod-to-pod communication by creating a function, fcomm-demo, that can communicate with the OpenFaaS gateway and trigger other functions.
# Create a new function
$ faas-cli new fcomm-demo --lang python3
# Update function handler code
$ cat << EOF | tee fcomm-demo/handler.py
import requests
import json

def handle(req):
    json_req = json.loads(req)
    function = json_req["function"]
    f_data = json_req["data"]
    r = requests.post("http://gateway.openfaas:8080/function/" + function, data=f_data)
    print(r.content)
    return
EOF
# Set dependencies
$ cat << EOF | tee fcomm-demo/requirements.txt
requests
EOF
# Point container to docker registry
$ sed -i -e "s;image: fcomm-demo:latest;image: $DOCKER_USER/fcomm-demo:latest;" fcomm-demo.yml
# Append gVisor profile
$ cat <<EOF | tee -a fcomm-demo.yml
    annotations:
      com.openfaas.profile: gvisor
EOF
# Deploy function
$ faas-cli up -f fcomm-demo.yml
6. Let us test the fcomm-demo function by verifying we can access the net-demo function through it.
$ curl http://127.0.0.1:8080/function/fcomm-demo -d '{ "function": "net-demo", "data": "nginx" }'
7. At this point, our functions have boundless network access. Let us apply a network policy to change that. We want to allow ingress traffic only from the OpenFaaS gateway and our nginx service, and egress traffic only to our nginx service and the kube-dns pod that resolves internal hostnames. Since the kube-dns pod is in the kube-system namespace, and Kubernetes network policies are label-based, we will need to label the kube-system namespace first.
# Label kube-system namespace
$ kubectl label namespace kube-system role=kube-system
# Create network policy
$ cat <<EOF | tee np_openfaas.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: openfaas-fn
  name: openfaas-fn-constrict
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: openfaas-system
    - namespaceSelector:
        matchLabels:
          role: nginx
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          role: nginx
    - namespaceSelector:
        matchLabels:
          role: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
EOF
# Apply network policy
$ kubectl apply -f np_openfaas.yaml
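To double-check what was applied, and which namespaces carry the labels the policy selects on, the following should do:
# Inspect the applied policy and the namespace labels it matches against
$ kubectl describe networkpolicy openfaas-fn-constrict -n openfaas-fn
$ kubectl get namespaces --show-labels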
8. If everything is configured correctly, our pods should only be able to receive requests through the OpenFaaS gateway and send requests to the nginx service. Let us check.
# Nginx access should succeed
$ curl http://127.0.0.1:8080/function/net-demo -d "nginx"
# Google & function-function communication should hang and fail
$ curl http://127.0.0.1:8080/function/net-demo -d "google"
$ curl http://127.0.0.1:8080/function/fcomm-demo -d '{ "function": "net-demo", "data": "nginx" }'
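Note that the blocked requests will simply hang until they time out, so it can be convenient to cap the wait on the client side, for example:
# Bound the wait for a request that is expected to be blocked
$ curl --max-time 15 http://127.0.0.1:8080/function/net-demo -d "google"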
Super! Now you have seen how to control and contain the outgoing and incoming network traffic in your Kubernetes cluster.
That's a wrap!
In this blog series, we explored the importance of secure data computations, the secure container runtime technologies and how to use them to build a secure compute platform, the use of serverless functions for application development, and the benefits of applying network policies to enhance the platform’s security.
It does not end here! We recommend you also explore the state-of-the-art technologies provided by AWS for confidential computing. AWS Fargate enables running custom container images through EKS in an isolated setting, while AWS Lambda functions are serverless functions that execute user-uploaded code and even container images. Both technologies are powered by Amazon Firecracker microVMs. For more security, including hardware isolation and cryptographic attestation, check out AWS Nitro Enclaves.
Good luck and thank you for reading!
References
- Amazon VPC: https://aws.amazon.com/vpc/
- Amazon ENI: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
- Amazon VPC CNI: https://github.com/aws/amazon-vpc-cni-k8s
- Pod Networking: https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html
- Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/
- Deploying Calico on EKS: https://docs.aws.amazon.com/eks/latest/userguide/calico.html
- Amazon Security Groups: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html