~ols/talks

b5ee92b404ff3d645eed573c00e2e99b8a538391 — Oliver Leaver-Smith 9 months ago fbed311 master
Update slides and notes links
1 files changed, 37 insertions(+), 20 deletions(-)

M talks/you-did-what/slides.md
M talks/you-did-what/slides.md => talks/you-did-what/slides.md +37 -20
@@ 1,4 1,4 @@
footer: Oliver Leaver-Smith // SBG TechEdge // 2019-08-22 // ols.wtf // @heyitsols
footer: Oliver Leaver-Smith // ols.wtf // @heyitsols
build-lists: true
slidenumbers: false



@@ 8,14 8,18 @@ slidenumbers: false

---

![filtered](/Users/ole09/Desktop/Screenshot\ 2019-08-06\ at\ 13.21.20.png)
![filtered](/Users/ole09/Desktop/Screenshot\ 2019-08-06\ at\ 13.20.20.png)
[.hide-footer]

![](/Users/ole09/Desktop/Screenshot\ 2019-08-06\ at\ 13.21.20.png)
![](/Users/ole09/Desktop/Screenshot\ 2019-08-06\ at\ 13.20.20.png)

^ In Core, we look after many business-critical applications such as customer onboarding, login, deposits, withdrawals, and safer gambling tools. I'm going to tell you the tale of a band of brave knights who boldly went where many sensibly did not tread, and learned why choosing the wrong tool for the job can sometimes be the right thing to do.

---

![filtered 150%](https://i.ytimg.com/vi/1naSDm8dSVc/maxresdefault.jpg)
[.hide-footer]

![150%](https://i.ytimg.com/vi/1naSDm8dSVc/maxresdefault.jpg)

^ Once upon a time, in the kingdom of SBG, lived a small tribe called Core. Squad after squad after squad of engineers, all working on their applications, all deploying their apps in mostly the same way: with help from a powerful wizard called Jenkins and, for the purposes of continuing the metaphor, a chef called Ruby. The estate was Virtual Machines as far as the eye could see. Here is how we deployed a change.



@@ 69,7 73,7 @@ ExecStart = /usr/local/bin/node src/index.js --port=8000
Restart = always
WorkingDirectory = /local/app/code/current/app
Environment = NODE_ENV=production
```
``` 

---



@@ 103,25 107,33 @@ From `ngctl` for managing Nagios downtime and acknowledgements, to `fdctl` for m

---

[.hide-footer]

^ It was a little rough around the edges, but it worked well

![filtered](https://cdn.shopify.com/s/files/1/1021/8649/products/volcanicturqoise-1_1024x1024.jpeg)
![](https://cdn.shopify.com/s/files/1/1021/8649/products/volcanicturqoise-1_1024x1024.jpeg)

---

![filtered](https://3yecy51kdipx3blyi37oute1-wpengine.netdna-ssl.com/wp-content/uploads/2019/01/bg-clouds.jpg)
[.hide-footer]

![](https://3yecy51kdipx3blyi37oute1-wpengine.netdna-ssl.com/wp-content/uploads/2019/01/bg-clouds.jpg)

^ Then one day King CTO and his court of Heads of Tech decreed that henceforth we would be working to a cloud-native, container-first approach

---

![filtered](https://i.stack.imgur.com/SCiLk.png)
[.hide-footer]

![](https://i.stack.imgur.com/SCiLk.png)

^ Now the good people of Core weren't used to this approach, this way of working, and so they were scared. How could they symlink a local file on disk if there is no disk? How could they log on to a box to read logs if there is no box to log on to? Side note: we have a centralised logging solution, but it's nice to be able to see the logs for just one server on the console like our ancestors did before us.

---

![filtered](https://tr2.cbsistatic.com/hub/i/r/2019/02/09/2cd5793a-9ab8-4151-bedd-4a3452239fe8/resize/1200x/e978e620209611826dad93a1f6e2f9aa/datacenter.jpg)
[.hide-footer]

![](https://tr2.cbsistatic.com/hub/i/r/2019/02/09/2cd5793a-9ab8-4151-bedd-4a3452239fe8/resize/1200x/e978e620209611826dad93a1f6e2f9aa/datacenter.jpg)

^ We'd heard rumours of a magical Kubernetes cluster being built by a crack team north of the Core Tribe, but no one was sure whether it was ready to handle the amount of traffic we wanted to throw at it



@@ 133,7 145,9 @@ From `ngctl` for managing Nagios downtime and acknowledgements, to `fdctl` for m

---

![filtered](https://thomlom.dev/static/4387ca6998348faf1e9767f958b216d2/4aca8/cover.jpg)
[.hide-footer]

![](https://thomlom.dev/static/4387ca6998348faf1e9767f958b216d2/4aca8/cover.jpg)

^ We needed a way for developers to write their applications with a cloud-native approach, without having to then unpick all that work to get the application live. And we were under no illusions: all this hard work would be turned off when we had the opportunity to move to Kubernetes



@@ 151,11 165,11 @@ From `ngctl` for managing Nagios downtime and acknowledgements, to `fdctl` for m

---

# `docker` of course
# Docker of course

Not swarm, mesos, or anything like that
Not Swarm, Mesos, or anything like that

Just docker
Just Docker

We didn't need anything fancy



@@ 203,11 217,13 @@ What had we created?

---

[.hide-footer]

# More importantly

What do we call it?

![filtered](https://media1.giphy.com/media/3owzW5c1tPq63MPmWk/giphy.gif)
![](https://media1.giphy.com/media/3owzW5c1tPq63MPmWk/giphy.gif)

^ It started out as the tactical container platform, but that is boring. A few names were bandied around, including the terms "artisanal container orchestration" and "if Kelham Island did Docker". But given that we in Core were using this as a stepping stone to Kubernetes, we settled on...



@@ 259,7 275,7 @@ The contents of `/local/app/etc/` is now `/opt/app.env`

# Dare I ask about the ctl script...?

`dockerctl` of course! This runs different docker commands based on the control command it receives (stop, start, restart, status, etc.)
`dockerctl` of course! This runs different `docker` commands based on the control command it receives (stop, start, restart, status, etc.)
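
A minimal sketch of what such a wrapper might look like (the container name, image, and port mapping here are illustrative assumptions, not the real script; the `--env-file` path follows the `/opt/app.env` convention mentioned above):

```
#!/usr/bin/env bash
# Hypothetical dockerctl-style wrapper: maps control commands to docker commands.
APP=myapp                                   # assumed container name
IMAGE=registry.example.com/myapp:latest     # assumed image

case "$1" in
  start)
    docker run -d --name "$APP" --env-file /opt/app.env -p 8000:8000 "$IMAGE"
    ;;
  stop)
    docker stop "$APP" && docker rm "$APP"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    docker ps --filter "name=$APP"
    ;;
  *)
    echo "usage: $0 {start|stop|restart|status}" >&2
    exit 1
    ;;
esac
```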

---



@@ 278,8 294,8 @@ Nothing. Has. Changed
	* environment variables vs. config files
	* `stdout` vs. log files
* Ops are thinking cloud-first
	* Alternative log aggregation platforms
	* New and interesting ways of monitoring application performance
	* alternative log aggregation platforms
	* new and interesting ways of monitoring application performance
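
As a rough illustration of that shift (the config and log file names here are assumptions, not our actual layout):

```
# Before: config and logs lived on the VM's disk
cat /local/app/etc/app.conf
tail -f /local/app/log/app.log

# After: config is injected as environment variables from /opt/app.env,
# and the application writes to stdout, which Docker captures
docker run -d --name app --env-file /opt/app.env registry.example.com/app:latest
docker logs -f app
```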

---



@@ 287,12 303,12 @@ Nothing. Has. Changed

## New login service

* Built on our tactical docker platform
* Built on Corbenetes
* Running in production
* We're so close to running public traffic through Kubernetes, I can smell it
* With *minimal* resource needed from the squad

^ The team managing the shared Kubernetes platform is ready for us to migrate our applications over. The developers in that squad have had to do *zero* work to support this. Minimal resource are things like "can you add an elastic-index field to the application logs please?"
^ Along with four other applications, our new login service is now running in production on Corbenetes. The team managing the shared Kubernetes platform is ready for us to migrate our applications over. The developers in that squad have had to do *zero* work to support this. Minimal resource means things like "can you add an elastic-index field to the application logs please?"

---



@@ 304,6 320,7 @@ Nothing. Has. Changed

# [fit] Questions? [^1]

Slides/notes at https://ols.wtf/talks/you-did-what
* Slides at https://ols.wtf/talks/you-did-what/slides
* Notes at https://ols.wtf/talks/you-did-what/notes

[^1]: Not including statements, boasting, or requests to fix your problem