Debug Gizmos (part 2)

This blog post is a direct continuation of the Debug Gizmos (part 1) article. The previous post covered what debug gizmos are and their common uses within game engines. This post expands on that by explaining the creation process for debug lines, planes, cubes, and spheres, and then showing how other engine systems can interact with the gizmo system to add debug elements to the screen.

Lines

There are two main ways to represent lines:

  • Storing a starting point and a direction

$$r=p_{0}+td$$

Where p0 is the starting position, d is the direction the ray travels in, and t is a scalar parameter (often thought of as time) that scales the direction.

  • Storing a start and end point (technically a line segment)

$$p_{0}=(x_{0},y_{0},z_{0})$$

$$p_{1}=(x_{1},y_{1},z_{1})$$

When it comes to rendering lines, both of these options can be implemented (line segments being much easier, as they have a pre-defined start and end point). However, the decision over which approach to use came down to what I intend to use the lines for.

Using the first definition, the result would be infinite lines that cut through the entire world with no set bounds, which is not what I want. The second definition provides a fixed extent with a high level of control over positioning, which matches the aim for adding lines, so this is the one I went with.

Here is an in-engine screenshot of debug lines in use, depicting an in-game camera's view frustum:

The left-hand side is the view from the debug camera, and the right-hand side is the in-game camera showing the rendered view depicted by the frustum lines.

Implementation Details

In the previous post I wrote about how point rendering has some key nuances to it, and the same nuances apply to line rendering. First off, a specific primitive topology needs to be set in the graphics pipeline: VK_PRIMITIVE_TOPOLOGY_LINE_LIST.

Because of this topology setting, a whole new graphics pipeline is needed when rendering lines (as well as two new shaders). When explaining point rendering I didn't go into any detail as to how the shaders work. As point and line rendering work very similarly in my engine, the explanation of line rendering below should be fairly easy to adapt to see how points are rendered.
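For reference, setting that topology is just one field in the pipeline's input assembly state. This is a minimal sketch using the standard Vulkan structs, rather than my engine's actual pipeline-creation code:

#include <vulkan/vulkan.h>

// Input assembly state for the debug line pipeline:
// every pair of vertices is treated as an independent line segment
VkPipelineInputAssemblyStateCreateInfo inputAssembly{};
inputAssembly.sType                  = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
inputAssembly.topology               = VK_PRIMITIVE_TOPOLOGY_LINE_LIST;
inputAssembly.primitiveRestartEnable = VK_FALSE;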

Vertex Shader

Here is the whole vertex shader for rendering lines. The code is GLSL, and is written to be compiled into .spv files, which Vulkan then processes.

#version 450
#extension GL_KHR_vulkan_glsl : enable

// Descriptor set 0 is the binding for global camera data
layout(set = 0, binding = 0) uniform UniformBufferObject
{    
    mat4 previousViewMatrix;
    mat4 view;
    mat4 previousProjectionMatrix;
    mat4 projection;
    vec4 cameraPosition;
    vec4 cameraViewDirection;
    int frameID;
} cameraUBO;

// Stored as individual floats because a vec3 takes up the alignment of a vec4 in buffer layouts,
// so it makes more sense to just store the separate components
struct lineData
{
    float startPosition_X;
    float startPosition_Y;
    float startPosition_Z;

    float endPosition_X;
    float endPosition_Y;
    float endPosition_Z;

    float colour_R;
    float colour_G;
    float colour_B;
};

// Set 1 is for the line specific model matrix data
layout(std430, set = 1, binding = 0) buffer lineInstanceData
{
    uint validLineCount;
    lineData[] lineInstanceData;
} instanceSSBO;

layout(location = 0) out vec3 fragColor;
layout(location = 0) in vec3 inPosition;

void main() 
{
    // Pull this instance's line data out of the SSBO once, to keep the lookups below readable
    lineData thisLine = instanceSSBO.lineInstanceData[gl_InstanceIndex];

    // Vertex 0 takes the line's start position, vertex 1 takes its end position
    vec3 position = vec3(thisLine.startPosition_X, thisLine.startPosition_Y, thisLine.startPosition_Z);

    if(gl_VertexIndex == 1)
        position = vec3(thisLine.endPosition_X, thisLine.endPosition_Y, thisLine.endPosition_Z);

    vec3 colour   = vec3(thisLine.colour_R, thisLine.colour_G, thisLine.colour_B);
    gl_Position   = cameraUBO.projection * cameraUBO.view * vec4(position, 1.0);
    fragColor     = colour;
}

How it works is that a large SSBO (Shader Storage Buffer Object) is passed into set 1. This buffer contains the data for each line in turn. Then, based on which vertex ID (0 or 1) the shader is processing, the shader grabs either the start or the end position of the line and transforms the vertex to that position. The two base positions passed in are both (0,0,0).
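On the CPU side, this boils down to something like the sketch below. The struct and function names here are illustrative rather than my engine's actual ones; the key points are a tightly packed float layout matching the std430 block above, and a single instanced draw of two vertices per line:

#include <cstdint>
#include <vulkan/vulkan.h>

// CPU-side mirror of the shader's lineData struct (tightly packed floats,
// matching the std430 layout of the SSBO above)
struct DebugLineGPUData
{
    float startPosition[3];
    float endPosition[3];
    float colour[3];
};

// With the line pipeline, descriptor sets, and the two-vertex base buffer already bound,
// and validLineCount plus the line array written into the mapped SSBO,
// the whole batch is drawn with one instanced call:
// two vertices per line (the (0,0,0) base positions) and one instance per line.
void DrawDebugLines(VkCommandBuffer commandBuffer, uint32_t lineCount)
{
    vkCmdDraw(commandBuffer, 2, lineCount, 0, 0);
}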

Fragment Shader

The fragment shaders for rendering these gizmos are really simple, as can be seen from the code snippet below:

#version 450
#extension GL_KHR_vulkan_glsl : enable

layout(location = 0) out vec4 fragColor;
layout(location = 0) in vec3 colour;

void main() 
{
    fragColor = vec4(colour, 1.0);
}

They just pass through the colour that was passed into them.

Planes

Like the lines section, there are multiple ways to represent planes.

  • By storing a normal and distance from origin (mathematical plane definition)

$$Ax+By+Cz+D=0$$

$$normal = (A,B,C)$$

  • By storing the four vertices that bound the plane. (This could technically be three points, but visually it is nicer to view four points)

$$P_{0}=(x_{0}, y_{0}, z_{0})$$

$$P_{1}=(x_{1}, y_{1}, z_{1})$$

$$P_{2}=(x_{2}, y_{2}, z_{2})$$

$$P_{3}=(x_{3}, y_{3}, z_{3})$$

  • The downside of this second approach is that there is no built-in enforcement that the points lie on the same mathematical plane as one another. So, technically, this approach is just rendering two connected triangles.

As with lines, both of these can be rendered in-engine, but again it is the use case that is important. For my debug gizmo purposes I currently only need the second option, as the use case for infinite planes is much more niche, and the aim for debug planes is to highlight an area, not to split the entire world into two halves.
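To make the "two connected triangles" point concrete, a single debug plane effectively reduces to four corner vertices and six indices. The corner values below are just example data; in practice the four points are whatever the caller supplies:

#include <cstdint>

// Example corner points for one debug plane (caller-supplied in practice)
float planeVertices[4][3] = {
    { -1.0f, 0.0f, -1.0f },   // p0
    {  1.0f, 0.0f, -1.0f },   // p1
    {  1.0f, 0.0f,  1.0f },   // p2
    { -1.0f, 0.0f,  1.0f },   // p3
};

// Index buffer stitching the four corners into two triangles
uint32_t planeIndices[6] = {
    0, 1, 2,   // first triangle  (p0 -> p1 -> p2)
    0, 2, 3,   // second triangle (p0 -> p2 -> p3)
};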

Here is an in-engine screenshot of planes being used to depict the near and far planes of a camera’s view frustum (showing all of the frustum planes would be difficult to visually interpret, so only the near and far planes are shown):

There is a visible rendering issue in the above image: the far plane is being rendered in front of the head model when it is actually behind it. This is because the order-independent transparency process does not write depth values into the depth buffer, so when the gizmo rendering pass runs it doesn't have the depth information needed to depth test correctly. This is an issue I will be looking into at some point in the future.

Pre-made shapes

The gizmos covered up to this point have been points, lines, and planes. All of these have mathematical representations that allow them to be stored and modified compactly in-engine. But what if you want to render a gizmo that is not a mathematical concept? This is where the next section comes into play: rendering pre-made shapes as gizmos.

Cubes

Cubes are a very useful gizmo as they can represent numerous things, ranging from 3D points that can be clicked and dragged, to bounding areas. Technically speaking, what I am implementing here are not going to be cubes, as their dimensions are not all going to be identical; they are really 3D cuboids.

They have been added in a very similar way to the planes, but instead of four vertices they have eight. The instance data passed into the vertex shader is the cube's centre and half extents, which allows the vertex shader to translate and scale a unit cube into the right position at the correct scale.
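As a rough sketch of the idea (the names below are illustrative, not my engine's actual ones), the per-instance data and the expansion applied to each unit-cube corner look something like this:

// Illustrative per-instance data for one debug cuboid
struct DebugCubeInstance
{
    float centre[3];       // world-space centre of the cuboid
    float halfExtents[3];  // half the size along each axis
    float colour[3];
};

// The unit cube's corners have components of +-1. Expanding a corner into world space,
// mirroring what the vertex shader does with the instance data:
void ExpandCorner(const float unitCorner[3], const DebugCubeInstance& cube, float outWorldPosition[3])
{
    for (int axis = 0; axis < 3; ++axis)
        outWorldPosition[axis] = cube.centre[axis] + unitCorner[axis] * cube.halfExtents[axis];
}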

Here is a screenshot of a collection of cubes of different colours positioned and coloured using the gizmo system:

Spheres

Similar to cubes, spheres can serve lots of different purposes. These are implemented in exactly the same way as the debug cubes, but with different data in their VBOs and EBOs.

Below is a very similar screenshot to the one in the cube section, but with spheres instead:

A Useful Side Point

A useful application of the gizmo system came up when trying to generate the sphere's vertices. As I was using polar coordinates to create the positions, it was fairly difficult to visualize whether I had done the calculations correctly. To make this easier I hooked into the existing debug gizmo system to render points at all of the vertices.

This is what the final result looked like:

And, yes, it turned out that I had done the calculations incorrectly to begin with.

Another Side Point

For the sphere gizmo rendering, I could have simply hard-coded a bunch of sphere vertices and indices and called it a day. But instead, the solution I went for was to generate the points of a sphere given a number of sub-divisions in the X and Y axes. Something like this:

std::vector<Maths::Vector::Vector3D<float>> GenerateSphere(float radius, unsigned int subDivisionsX, unsigned int subDivisionsY)

This then uses three-dimensional polar (spherical) coordinates to loop around and generate all of the vertices.

Was this simple? No.

But was it worth it for the extra flexibility it gives? Probably.
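For reference, here is a rough sketch of how such a generator could look. It assumes Vector3D<float> can be constructed from three floats, and it is not my engine's exact implementation, just the basic spherical-to-Cartesian loop:

#include <cmath>
#include <vector>

// Sketch only: assumes both sub-division counts are at least 1
std::vector<Maths::Vector::Vector3D<float>> GenerateSphere(float radius, unsigned int subDivisionsX, unsigned int subDivisionsY)
{
    const float pi = 3.14159265358979f;

    std::vector<Maths::Vector::Vector3D<float>> vertices;
    vertices.reserve((subDivisionsY + 1) * (subDivisionsX + 1));

    for (unsigned int y = 0; y <= subDivisionsY; ++y)
    {
        float theta = pi * (float(y) / float(subDivisionsY));           // angle down from the +Y pole

        for (unsigned int x = 0; x <= subDivisionsX; ++x)
        {
            float phi = 2.0f * pi * (float(x) / float(subDivisionsX));  // angle around the Y axis

            // Spherical to Cartesian conversion
            vertices.emplace_back(radius * std::sin(theta) * std::cos(phi),   // X
                                  radius * std::cos(theta),                   // Y
                                  radius * std::sin(theta) * std::sin(phi));  // Z
        }
    }

    return vertices;
}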

External System Integration

It's all well and good having a debug gizmo system that a user can add to and modify at will, but on its own that is not particularly useful in practice. What would be a lot more useful is a way for external systems to call in and request that certain gizmos be created and placed under their control. For example, a collision component may request that a bounding box be created and displayed around a specific volume.

High Level Overview

After a fair amount of consideration and planning, this is the flow that I decided to implement. The example use case throughout this section is displaying a camera's frustum.

  • An external system calls into the gizmo system and requests that its gizmos be created.

  • The external system then requests a weak pointer to hold onto, which acts as its ownership handle for those gizmos.

  • Now the external system can build a list of requested modifications to its ‘owned’ gizmos, and pass that list, plus its ownership pointer, into the gizmo system.

  • And when the external system wishes to have the gizmos removed, it just makes one final removal call. (A hypothetical sketch of what this interface could look like is shown below the list.)
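To make that flow more concrete, here is a purely hypothetical sketch of what such an interface could look like. None of these names come from my engine; they just mirror the four steps above:

#include <memory>
#include <vector>

// Placeholder types purely for illustration
struct GizmoOwnership {};
struct GizmoCreationRequest {};
struct GizmoModification {};

class GizmoSystem
{
public:
    // Steps 1 and 2: create a batch of gizmos and hand back a weak ownership handle
    std::weak_ptr<GizmoOwnership> CreateGizmos(const std::vector<GizmoCreationRequest>& requests);

    // Step 3: apply a list of modifications to the gizmos owned by this handle
    void ModifyGizmos(const std::weak_ptr<GizmoOwnership>& owner,
                      const std::vector<GizmoModification>& modifications);

    // Step 4: remove every gizmo owned by this handle
    void RemoveGizmos(const std::weak_ptr<GizmoOwnership>& owner);
};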

Everything involving gizmo management is handled internally by the gizmo system, which is ideal for these system interactions as it makes the system very hard to use incorrectly.

Conclusion

This blog post has taken a fair while to produce, as life gets in the way sometimes, and I wanted to take more time to make sure this system is as solid as it can be, which will hopefully greatly speed up other areas of production later on.

So, to conclude, this article has covered creating debug lines, planes, cubes, and spheres, and how they are rendered to the screen, with some of the key nuances covered. An overview of an external integration flow has also been shown, which has been designed for ease of use.

The next article is finally going to be covering frustum culling, as I now have a way to visually see if the culling is giving the correct results. So that’s something to look forward to :D

As always, thank you for reading!