# openssh and sshuttle
This example shows how to tunnel into a Kubernetes cluster using an SSH service running in a pod on the server, and `sshuttle` on the client/workstation. This method works like any other SSH connection to a regular machine, making it easy to get started for users who are accustomed to using SSH. Meanwhile, `sshuttle` automatically handles DNS and routing into the cluster from a workstation, making it easy to access in-cluster resources while `sshuttle` runs in the background.
The configuration provided here is meant to be an example that can be copied and edited separately, in particular around populating `authorized_keys` for users/machines that need tunnel access, and/or customizing the ingress method for reaching the service; this example includes a `LoadBalancer` service for cloud clusters.
Within the cluster, we are just running a stock build of `openssh-server` provided here. The only modification is to also install `python3`, as required by `sshuttle`. The container shouldn't require any special privileges beyond being reachable from outside the cluster. An example `LoadBalancer` Service is included to provide this access in cloud environments.
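For illustration, that `LoadBalancer` Service might look roughly like the sketch below. The definitions in `sshd.yaml` are authoritative; the labels and ports here are assumptions.

```yaml
# Rough sketch only - see sshd.yaml for the real definition.
apiVersion: v1
kind: Service
metadata:
  name: sshd-lb
spec:
  type: LoadBalancer
  selector:
    app: sshd          # assumed label on the sshd pod
  ports:
    - name: ssh
      port: 22         # port exposed by the cloud LoadBalancer
      targetPort: 2222 # assumed container port; linuxserver/openssh-server listens on 2222 by default
```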
Users meanwhile must have `ssh` and `sshuttle` installed, the latter of which will handle tunnelling traffic from the local workstation over an SSH connection to the cluster.
There are two aspects of setting up access:

- The `sshd` service needs to be deployed into the cluster, and access from outside the cluster needs to be enabled using e.g. a `LoadBalancer` service.
- Users' public keys need to be added to the `authorized_keys` file.
End users who wish to use the tunnel service should provide the content of their `~/.ssh/id_rsa.pub` public key. The content should be a single line like `ssh-rsa AAAAA[...]Twk= nick@computer`. They should NOT provide their private key, which is multiple lines long. Key formats other than `rsa` (e.g. `ed25519`) should work too but haven't been tested.
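For example, the line to share can be printed with the following (assuming the default RSA key location):

```sh
cat ~/.ssh/id_rsa.pub
```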
If the public key file does not exist yet, users can create a new key pair using `ssh-keygen`. Each distinct machine (laptop, desktop, ...) should have a separate key pair to allow easy revocation as machines are decommissioned and/or reformatted.
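A minimal sketch of generating a key pair, assuming an RSA key as in the example above:

```sh
# Creates ~/.ssh/id_rsa (private, keep secret) and ~/.ssh/id_rsa.pub (public, share this)
ssh-keygen -t rsa -b 4096 -C "nick@computer"
```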
All public keys to be granted access to the tunnel should be included in the `authorized_keys` entry of the `sshd` ConfigMap. You should keep a copy of your ConfigMap safe somewhere, e.g. in git, so that the server can quickly be brought back in the event of e.g. a new cluster being created. The keys themselves are not especially sensitive, but it wouldn't hurt to keep them relatively private.
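As a rough sketch (the actual ConfigMap name and layout are defined in `sshd.yaml`; the keys below are placeholders), the entry might look like:

```yaml
# Illustrative only - match the structure used by sshd.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sshd
data:
  authorized_keys: |
    ssh-rsa AAAAA[...]Twk= nick@computer
    ssh-ed25519 AAAAC[...]abc= nick@laptop
```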
Note that if you edit the `sshd` ConfigMap after the `sshd-0` pod has already started, the change will NOT take effect automatically. This is a standard caveat with ConfigMaps. You will need to restart the pod to pick up the config changes using e.g. `kubectl delete pod sshd-0`. This will briefly disconnect any active sessions.
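Since the `sshd-0` name suggests the pod is managed by a StatefulSet (an assumption; check `sshd.yaml`), either of the following should pick up the new ConfigMap contents:

```sh
# Delete the pod so its controller recreates it with the updated ConfigMap...
kubectl delete pod sshd-0
# ...or, assuming a StatefulSet named "sshd", restart it in one step:
kubectl rollout restart statefulset sshd
```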
Populate the `authorized_keys` entry in the `sshd` ConfigMap. Each key should be entered one per line, as is standard for `authorized_keys`. Per above, editing this later requires restarting the `sshd-0` pod.

Then deploy the `sshd` pod/service in the cluster:

```sh
kubectl apply -f sshd.yaml
```
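A quick sanity check that everything came up (resource names assumed from the examples in this document):

```sh
kubectl get pod sshd-0
kubectl get service sshd-lb
```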
The `sshd-lb` Service should automatically configure a LoadBalancer endpoint where the service can be reached. This endpoint should be provided to any users that want to connect to the service:

```sh
kubectl get service sshd-lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
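For convenience, the hostname can be captured into a shell variable matching the `LB_HOSTNAME` placeholder used below:

```sh
LB_HOSTNAME=$(kubectl get service sshd-lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "$LB_HOSTNAME"
```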
Note that if you want more isolation between users, you can also run multiple independent `sshd` instances, each with their own LoadBalancers. The example configuration provided here assumes a single service for the cluster that's being shared across multiple users.
Once the service is running, users can connect and start accessing services inside the cluster over SSH using `sshuttle`.
Test that the K8s service is reachable using regular `ssh`. `LB_HOSTNAME` is from the `sshd-lb` Service as described above:

```sh
ssh -v tunnel@LB_HOSTNAME
```

If this doesn't work:

- Check that the `sshd-0` pod is running and that the `sshd-lb` service is routing to it.
- Check that your public key is present in the `sshd` ConfigMap's `authorized_keys` file, and that the `sshd-0` pod has been deleted/restarted so that any ConfigMap changes have taken effect (check with `kubectl exec sshd-0 -- cat /config/.ssh/authorized_keys`).
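Optionally, an entry in `~/.ssh/config` can make the endpoint easier to reference; the `k8s-tunnel` alias here is just an example:

```
# ~/.ssh/config
Host k8s-tunnel
    HostName <LB_HOSTNAME from the sshd-lb Service>
    User tunnel
```

With this in place, `ssh -v k8s-tunnel` is equivalent to the command above.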
Install `sshuttle`. This tool handles tunnelling DNS and network traffic over an SSH connection to the K8s cluster.
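`sshuttle` is available from most package managers and from PyPI; for example, any one of the following should work depending on the platform:

```sh
pip install sshuttle        # any platform with Python 3
brew install sshuttle       # macOS (Homebrew)
sudo apt install sshuttle   # Debian/Ubuntu
```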
Connect to the SSH endpoint using `sshuttle`. Note that this example command routes ALL internal/LAN IPs via the K8s cluster. The list of tunnelled subnets may be scoped down to only match the Pod/Service IP ranges being used by the cluster, but this can vary on a per-cluster basis. See also usage docs for more options.

```sh
sshuttle --dns -NHr tunnel@LB_HOSTNAME 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
```
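If the cluster's Pod and Service CIDRs are known, the same command can be scoped down to just those ranges. The CIDRs below are placeholders and vary per cluster:

```sh
# Example Pod CIDR (10.244.0.0/16) and Service CIDR (10.96.0.0/12) - substitute your cluster's actual ranges
sshuttle --dns -NHr tunnel@LB_HOSTNAME 10.244.0.0/16 10.96.0.0/12
```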
With `sshuttle` running in the background, test the connection via e.g. `curl -v <SERVICE>.<NAMESPACE>.svc.cluster.local` for a service that's running in your cluster. For example, `curl -vk https://kubernetes.default.svc.cluster.local` should return a 401 Unauthorized error, as this is a default Kubernetes admin endpoint which expects additional credentials.
Without `sshuttle` running:

```console
$ curl -k https://kubernetes.default.svc.cluster.local
curl: (6) Could not resolve host: kubernetes.default.svc.cluster.local
```
With `sshuttle` running:

```console
$ curl -k https://kubernetes.default.svc.cluster.local
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
```
To stop the tunnel, just hit Ctrl+C on the `sshuttle` process.
At the moment the openssh server is configured using some 3rd party init scripts provided via the `linuxserver/openssh-server` image. We could switch to a fully stock image based on `debian` or `alpine` that just installs `openssh-server` and `python`, but that does start getting into NIH territory unless there's a specific reason to do it.
We could someday use something like this to restart the `sshd` process automatically if/when the `authorized_keys` list in the ConfigMap is edited.
As with any publicly accessible SSH endpoint, it would make sense to set up `fail2ban` or similar. However, the `fail2ban` container would need access to firewall rules, likely on the host node. In the meantime, the pod is only configured to allow pubkey access, not passwords.
We could also restrict access of the tunnel pod within the cluster, using e.g. `NetworkPolicy` rules. However, this requires a network fabric that supports `NetworkPolicy`, and creates a large maintenance burden in ensuring that the NetworkPolicies are kept up to date. For now, the tunnel pod provides access to anything in the cluster.
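For illustration, such a policy might look roughly like the following. It is not part of the provided configuration, and the labels/namespace are placeholders:

```yaml
# Hypothetical example only - NOT included in this example's configuration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sshd-egress
spec:
  podSelector:
    matchLabels:
      app: sshd            # assumed label on the sshd pod
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: some-allowed-namespace
```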
Originally, we were intending to use Wireguard or OpenVPN, since they would theoretically be more flexible and more performant. However, these both presented several issues when trying to use them as a K8s tunnel, both in terms of getting them working across a variety of K8s cluster environments and in terms of the friction involved in getting users connected:

- Both need `privileged` access in order to manipulate routing/firewall rules on the host machine.
- Wireguard can instead use the `wireguard-go` or `boringtun` userspace implementations, which still require a `privileged` container to configure the network interfaces/routing at the host.

In comparison, the SSH-based tunnel uses TCP and only requires setting up `authorized_keys` to grant access, while also using a stock `openssh` server in the pod and a stock `ssh` client at the user's workstation.