~tcarrio/clusta

Configuration, documentation, code, and more around my at-home cluster project.

#clusta

This project houses the configuration, documentation, code, and more for my at-home cluster project. It takes a deeper look into areas such as elastic computing, IoT, MaaS, and various popular projects that can play a part as a running service, a networking layer, and more.

#table_of_contents

  1. hardware
  2. server_provisioning
  3. container_orchestration
  4. network_storage
  5. services_and_tasks
  6. references

#hardware

To start off, a list of the hardware involved in the project:

server_array:

  • 10x Intel NUCs with 64-bit N3700 Pentiums, Headless HDMI, Gigabit ethernet, and 128 GB Hynix m.2 SSDs
  • 5x Nvidia Jetson TK1s with Tegra K1 SoC: 32-bit ARM CPU and 192-core GK20a GPU (SM 3.2)
  • 1x Netgear Prosafe 16 Port Gigabit Switch GS116
  • 1x Mean Well LRS-350-12 12V Power Supply
  • 2x Mini Fuse Blocks
  • Cat6 ethernet wiring between all devices to switch with single exposed RJ45 port on rack

nas:

  • Supermicro motherboard
  • 1x Intel Celeron G1610T @ 2.30GHz
  • 2x 8GB DDR3 ECC RAM
  • 2x Gigabit ethernet controllers
  • 4x 1TB WD Red HDDs (hot-swappable)

#server_provisioning

An important focus for easier management of each of the servers is to use a system built around provisioning hardware. An example of this, and one that I plan to investigate first, is Canonical's Metal-as-a-Service[0]. This automates the process of network booting a device, installing an operating system, and configuring the system. It provides this capability across the network, making it possible for a device to be readied for any purpose. From the brief introduction, it supports provisioning of Windows, Ubuntu, and CentOS. Furthermore, it integrates well with Canonical's Juju software[1], which provides the means of installing and configuring software onto provisioned MaaS devices.
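
As a rough sketch of how MaaS and Juju could be wired together, the snippet below writes a Juju clouds.yaml for a home MaaS region. The cloud name, endpoint IP, and port are my assumptions for illustration, not anything defined in this repo:

```shell
# Sketch only: define a MaaS region as a Juju cloud.
# "maas-home" and the endpoint address are placeholder assumptions.
cat > maas-home-cloud.yaml <<'EOF'
clouds:
  maas-home:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.1.2:5240/MAAS
EOF

# With a running MaaS region controller and an API key, the cloud can
# then be registered and bootstrapped (shown for illustration only):
# juju add-cloud --client maas-home ./maas-home-cloud.yaml
# juju add-credential maas-home
# juju bootstrap maas-home
```

From there, `juju deploy` would hand workloads to machines that MaaS has provisioned.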

#container_orchestration

Not sure what a cluster project would be without at least some manner of container orchestration. Whether it's messing around with Kubernetes[2], OKD[3], or Nomad[4], I'm hoping to automate provisioning and configuration of the platform with Juju or a similar product. I'm not terribly attached to the Kubernetes model, and I love some of HashiCorp's projects, so I may lean that direction first. There are success stories[5] of people setting up Nomad on Raspberry Pi clusters who found it simpler to stand up than Kubernetes.
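
For a feel of the Nomad model, here is a minimal service job; the job name, datacenter, and image are placeholders of my own, not anything from this project:

```shell
# Sketch only: a minimal Nomad Docker service job.
# "hello", "dc1", and the traefik/whoami image are placeholder assumptions.
cat > hello.nomad <<'EOF'
job "hello" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 1

    network {
      port "http" {
        to = 80
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "traefik/whoami"
        ports = ["http"]
      }
    }
  }
}
EOF

# On a cluster with a Nomad server and a Docker-capable client:
# nomad job run hello.nomad
# nomad job status hello
```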

#network_storage

Currently my server setup is a CentOS server with Docker containers for any services. The disks are configured in RAID 5, but the logical volume is not in any way shared across the network. This is something I would like to change in the cluster implementation. The current server would be replaced with a system focused on being a NAS or another network-available storage resource. I don't necessarily intend to add another storage device for redundancy and implement a distributed solution like Ceph[6], but it's an interesting topic I might research further. As of now, I would like to set up the server with a hypervisor such as Proxmox[7], running FreeNAS[8] as the primary role of the server.
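
If the NAS ends up exporting its volume over plain NFS, the export entry could look like the following; the path and subnet here are my assumptions for illustration:

```shell
# Sketch only: an NFS export sharing NAS storage with the cluster subnet.
# The /srv/cluster path and 192.168.1.0/24 subnet are placeholder assumptions.
cat > exports.example <<'EOF'
# /etc/exports: share /srv/cluster read-write with the cluster subnet
/srv/cluster 192.168.1.0/24(rw,sync,no_subtree_check)
EOF

# Applied on the NAS host with something like:
# sudo cp exports.example /etc/exports && sudo exportfs -ra
```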

#services_and_tasks

Honestly, Nomad can manage just about everything. Beyond containers, you can also manage services[9] and virtual machines[10] with similar task definitions.
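
To illustrate, the same job structure can drive a plain process with Nomad's `exec` driver instead of a container; the job name and command below are placeholders of my own:

```shell
# Sketch only: a Nomad job using the "exec" driver to run a plain process.
# "clock" and the shell loop are placeholder assumptions.
cat > svc.nomad <<'EOF'
job "clock" {
  datacenters = ["dc1"]

  group "util" {
    task "date-loop" {
      driver = "exec"

      config {
        command = "/bin/sh"
        args    = ["-c", "while true; do date; sleep 60; done"]
      }
    }
  }
}
EOF

# Nomad also ships a "qemu" task driver, so a VM image can be scheduled
# with the same job shape by swapping the driver and config stanza.
# nomad job run svc.nomad
```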

#references

For further reference on the topics of this project, see the links below:

nucs:

jetsons:

nas_server:

virtualization:

networking:

container_orchestration:

maas:

general_links: