---
published: true
layout: post
title: "Forging redplanet (Day 3): IMGUI 2D Canvas"
date: "2020-09-09"
tags:
    - "redplanet"
    - "forge"
    - "tau"
    - "imgui"
    - "linalg"
category: "devlog"
---
Day three! What a day! I finally managed to move past my writer's block - if you
can call it that - and make good progress on the canvas layout for Forge's IMGUI,
as I'd been wanting to. It's not a whole ton of work, but to paraphrase a good
friend of mine: when it comes to personal projects, the "initial inertia" is
the hardest bit. So I'll pat myself on the back and take this small win.

<!-- Excerpt -->

Rather than starting with a wall of text, I figured it would be much more
interesting to see the layout in action. Apologies in advance for the jittery
mouse movement; some evenings I work best off-desk, so I was stuck with the
trackpad for navigation!

<video src="https://files.rycwo.xyz/borann_d9173fd.mp4" muted controls></video>

# Canvas components

Although it may not be obvious, the node graph pretty much demonstrates all of
the IMGUI framework's basic systems working together. The dummy nodes, the rects
with the coral outlines, are positioned using `set_next_gui_position()` as
demonstrated in [Day 2][day-2]. The canvas layout itself is special in that it
does not dictate the precise position of the elements. Instead, it manages a
transformation matrix that transforms any elements drawn within its scope. The
matrix components are manipulated by mouse input.
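
To make that a little more concrete, the usage ends up looking something like
the sketch below. Only `set_next_gui_position()` is the real call from Day 2;
the canvas scope and node-drawing functions are stand-in names (with guessed
signatures) purely for illustration.

```c
/* Illustrative sketch only -- begin_gui_canvas(), end_gui_canvas(), and
 * gui_dummy_node() are stand-in names, and the signatures are guesses. */
begin_gui_canvas("node-graph"); /* canvas starts managing a pan/zoom transform */
for (int i = 0; i < node_count; ++i) {
    /* Node positions are given in canvas space; the canvas transform
     * maps them to screen space when the elements are drawn. */
    set_next_gui_position(nodes[i].x, nodes[i].y);
    gui_dummy_node(nodes[i].name);
}
end_gui_canvas();
```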

I introduced the `begin_gui_transform()` and `begin_gui_clip()` functions in
order to set the active transformation matrix and clip rectangle respectively.
Both of these functions simply push data into a buffer which is then taken into
account in the shaders used to draw the GUI elements. Forge's IMGUI supports any
number of shaders to allow for complex GUI rendering if ever necessary, and it
is up to the developer to ensure the shader respects the transform/clip buffers.
More on the GUI shaders another day.
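
In spirit the implementation is not much more than appending to arrays. Below
is a rough sketch of what I mean; the struct, buffer sizes, and function
signatures are simplified stand-ins rather than the real Forge code.

```c
/* Simplified sketch -- not the real Forge buffer layout. Each call just
 * records the active transform/clip; elements drawn afterwards reference
 * the latest entry when the GUI shaders run. */
#include <string.h>

#define MAX_TRANSFORMS 256
#define MAX_CLIPS 256

typedef struct {
    float transforms[MAX_TRANSFORMS][9]; /* flattened 3x3 matrices */
    int transform_count;
    float clips[MAX_CLIPS][4];           /* x, y, width, height */
    int clip_count;
} gui_draw_state;

void begin_gui_transform(gui_draw_state *state, const float matrix[9]) {
    memcpy(state->transforms[state->transform_count++], matrix,
           sizeof(float) * 9);
}

void begin_gui_clip(gui_draw_state *state, float x, float y, float w, float h) {
    float *clip = state->clips[state->clip_count++];
    clip[0] = x;
    clip[1] = y;
    clip[2] = w;
    clip[3] = h;
}
```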

The implementation of the dots on the grid was something I mulled over for
quite a while. In case you're not already aware, I tend to overthink solutions
to simple problems. It gets even worse when it's for a personal project I care
about! Ultimately the decision came down to whether I should render all the dots
on a single rect via a custom fragment shader, or whether I should push a
handful of GUI elements using the existing circle primitive. Bearing in mind that a
custom shader would mean an additional draw call just for the grid, and that almost
all of the fragments would be transparent, I opted to just push each dot as a
separate GUI element. Thankfully, buffer memory both on the CPU and the GPU is
pre-allocated in a [slab-like][slab-alloc] manner on IMGUI initialization so we
can feel confident in the rapid creation of many GUI elements. Once again, I
will defer any lengthier discussion of IMGUI's memory allocation patterns to
another time.
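
The grid itself then reduces to a plain double loop over the visible region,
something like this (`gui_circle()` is a stand-in name for the circle
primitive call):

```c
/* Sketch: push one circle element per grid dot inside the visible canvas
 * region. gui_circle() is a stand-in name for the circle primitive. */
for (float y = first_visible_y; y < view_max_y; y += grid_spacing) {
    for (float x = first_visible_x; x < view_max_x; x += grid_spacing) {
        set_next_gui_position(x, y);
        gui_circle(dot_radius, dot_color);
    }
}
```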

# Pan/zoom transformation

Surprisingly, the implementation I struggled with most was the zoom behavior of
the layout. The pan was trivial; it was the zoom specifically that caught me off
guard. My mistake was in trying to approach the solution solely by thinking of
the elements being transformed within the canvas. While mathematically it does
boil down to doing just that, it helped immensely to frame the problem as a 2D
camera problem. With this in mind, a couple of points became clear:

- Scaling should be done about a pivot centered on the canvas container.
- Scaling towards the mouse position is a common behavior. We need to translate
  the view as we are scaling so that at some maximum scale the mouse position
  is at the center of the view.

It then became trivial to build a suitable transformation matrix $$C$$.

$$C = S_p S S_p^{-1} T$$

Where $$S_p$$ is the scale pivot, and $$S$$ and $$T$$ are the scale and
translation respectively. The key insight here is that unlike a regular object
transformation, we want to first translate, **then** scale, so that the view
behaves like a camera zooming in/out of objects that have already been moved in
space.
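
For reference, assembling that matrix is only a handful of multiplies. The
sketch below uses a throwaway 3x3 matrix type in a column-vector convention;
none of these types or helpers are the actual linalg code used by Forge,
they're just here to spell out the order of operations.

```c
/* Throwaway 3x3 matrix sketch (column-vector convention) spelling out
 * C = S_p * S * S_p^{-1} * T. None of this is the real linalg code. */
typedef struct { float m[3][3]; } mat3;

static mat3 mat3_identity(void) {
    mat3 r = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    return r;
}

static mat3 mat3_translation(float x, float y) {
    mat3 r = mat3_identity();
    r.m[0][2] = x;
    r.m[1][2] = y;
    return r;
}

static mat3 mat3_scale(float s) {
    mat3 r = mat3_identity();
    r.m[0][0] = s;
    r.m[1][1] = s;
    return r;
}

static mat3 mat3_mul(mat3 a, mat3 b) {
    mat3 r = {{{0}}};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

/* Rightmost factor applies first: pan (T), then scale about the pivot. */
static mat3 canvas_transform(float pivot_x, float pivot_y,
                             float zoom, float pan_x, float pan_y) {
    mat3 s_p     = mat3_translation(pivot_x, pivot_y);
    mat3 s       = mat3_scale(zoom);
    mat3 s_p_inv = mat3_translation(-pivot_x, -pivot_y);
    mat3 t       = mat3_translation(pan_x, pan_y);
    return mat3_mul(mat3_mul(mat3_mul(s_p, s), s_p_inv), t);
}
```

With the pivot fixed to the center of the canvas container, zooming toward the
mouse is then just a matter of nudging the pan toward the cursor as the scale
changes.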

# What next?

Maybe the solution was pretty obvious; in any case, it works well and I am
happy. There are some other things, however, that are still bugging me hard.
You may have noticed the outlines on the rects are looking a bit ugly; they're
missing a certain pixel-perfect crispness to them. The dots on the grid,
meanwhile, are _supposed_ to be beautiful anti-aliased circles. It is pretty
clear I will be knee-deep in shader programming for the next few days. I have my
sights set on nailing the shaders for the primitives so I will not have to visit
them again for a long while.

[day-2]: {{ site.baseurl }}{% post_url 2020-09-06-forging-redplanet-day-002-imgui-intro %}#api-examples

[slab-alloc]: https://en.wikipedia.org/wiki/Slab_allocation