2016-08-20

How is a frame rendered in Aether3D?

Aether3D is a component-based game engine supporting modern rendering APIs. There is currently no lighting, but I'm porting my Forward+ implementation to it soon. There are, however, directional and spot light shadows. In this post I will run through the steps to render one frame.

A Scene object contains GameObjects. GameObjects containing a SpriteRendererComponent, MeshRendererComponent or TextRendererComponent are rendered by GameObjects that contain a CameraComponent. Cameras can render into the screen or into a render texture. If a DirectionalLightComponent or SpotLightComponent has its shadow flag enabled, a special camera renders its shadow map.
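The component relationships described above can be sketched with simplified types. The names below are illustrative placeholders, not the actual Aether3D API:

```cpp
#include <memory>
#include <vector>

// Illustrative sketch of the component model described above; these
// type names are stand-ins, not the real Aether3D classes.
struct Component { virtual ~Component() = default; };
struct MeshRendererComponent : Component {};
struct CameraComponent : Component { bool rendersToTexture = false; };

struct GameObject
{
    // Attach a component of type T to this game object.
    template< typename T > void AddComponent()
    {
        components.push_back( std::make_unique< T >() );
    }

    // Return the first component of type T, or nullptr if none is attached.
    template< typename T > T* GetComponent() const
    {
        for (const auto& c : components)
        {
            if (auto* t = dynamic_cast< T* >( c.get() )) return t;
        }
        return nullptr;
    }

    std::vector< std::unique_ptr< Component > > components;
};
```

With this shape, the renderer can ask any game object whether it carries a camera or a renderable component and act accordingly.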


Rendering steps:

1. Scene.Render()


This method first does the housekeeping work needed to render a frame: it resets statistics, begins a new render pass, acquires the next swapchain image (depending on the API) etc.

Then it calculates an axis-aligned bounding box (AABB) for the whole scene, needed for shadow frustum calculation, and updates the transformation matrix hierarchy.
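A scene-wide AABB can be accumulated by taking the component-wise min/max over every renderer's bounds. A minimal sketch, where Vec3 and AABB are simplified stand-ins for the engine's math types:

```cpp
#include <algorithm>

// Simplified stand-ins for the engine's math types.
struct Vec3 { float x, y, z; };

struct AABB
{
    Vec3 min, max;

    // Grow this box so that it also encloses 'other'.
    void Encapsulate( const AABB& other )
    {
        min.x = std::min( min.x, other.min.x );
        min.y = std::min( min.y, other.min.y );
        min.z = std::min( min.z, other.min.z );
        max.x = std::max( max.x, other.max.x );
        max.y = std::max( max.y, other.max.y );
        max.z = std::max( max.z, other.max.z );
    }
};
```

Starting from the first renderer's box and calling Encapsulate for every other renderer yields the scene bounds used for the shadow frustum.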

Then it collects game objects with camera components into a container and sorts it by camera type (render-texture vs. normal), layer etc.
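The camera ordering could look roughly like this. The exact sort keys are an assumption based on the description: render-texture cameras first, then lower layers before higher ones:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-camera sort key: render-texture cameras are rendered
// before screen cameras, and within each group a lower layer goes first.
struct CameraEntry
{
    bool rendersToTexture;
    int layer;
};

void SortCameras( std::vector< CameraEntry >& cameras )
{
    std::sort( cameras.begin(), cameras.end(),
        []( const CameraEntry& a, const CameraEntry& b )
        {
            if (a.rendersToTexture != b.rendersToTexture)
            {
                return a.rendersToTexture; // texture targets come first
            }
            return a.layer < b.layer;
        } );
}
```

Sorting render-texture cameras first guarantees their outputs exist before any screen camera samples them.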

2. Scene.RenderWithCamera()


First, render-texture cameras are looped over and this method is called once for a normal camera and six times for a cube map camera. Then shadows are rendered. Finally, cameras rendering directly into the screen are rendered. If a camera also wants to render depth and normals into a texture (which can be used later in post-processing and lighting effects), that is done after this step.

The camera's clear flag is applied at the beginning of this method (clear color, clear depth, or don't clear).

At this point the skybox is rendered.
Then the camera's frustum is calculated.
Now game objects are looped over, and objects containing a sprite renderer or text renderer are rendered. Mesh renderer objects are collected, sorted by their distance to the camera, then rendered. They reference a Material, which feeds the blending state, culling state, shader and uniforms into the renderer.
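The distance sort of mesh renderers could be sketched as follows. Sorting front-to-back by squared distance is an assumption here; the post doesn't state the sort direction:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical entry for a visible mesh: the index of its game object and
// its squared distance to the camera (squared to avoid a sqrt per object).
struct MeshEntry
{
    std::size_t gameObjectIndex;
    float distanceSquared;
};

void SortByDistance( std::vector< MeshEntry >& meshes )
{
    std::sort( meshes.begin(), meshes.end(),
        []( const MeshEntry& a, const MeshEntry& b )
        {
            return a.distanceSquared < b.distanceSquared;
        } );
}
```

Front-to-back ordering for opaque geometry lets early depth testing reject hidden fragments; transparent geometry would instead need back-to-front ordering.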

3. GfxDevice::Draw()


Everything that's rendered uses a method from the GfxDevice namespace:
void ae3d::GfxDevice::Draw( VertexBuffer& vertexBuffer, int startIndex, int endIndex, Shader& shader, BlendMode blendMode, DepthFunc depthFunc,
                            CullMode cullMode )


This method first calculates a pipeline state object (PSO) hash, and if the hash is not found in the cache, it creates a new PSO.
On the Vulkan and D3D12 renderers, a descriptor set is filled with draw parameters and the actual drawing uses vkCmdDrawIndexed() or DrawIndexedInstanced().
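The PSO caching could be sketched like this. The state fields and the FNV-1a hash combination are assumptions for illustration, not the engine's actual scheme:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical subset of the state that defines a pipeline state object.
struct PSODesc
{
    std::uint32_t shaderId;
    std::uint32_t blendMode;
    std::uint32_t depthFunc;
    std::uint32_t cullMode;
};

struct PSO { /* API-specific pipeline handle would live here */ };

// Combine the state fields into one hash (simple FNV-1a-style mix).
std::uint64_t HashPSO( const PSODesc& d )
{
    std::uint64_t h = 1469598103934665603ULL; // FNV-1a offset basis
    for (std::uint32_t field : { d.shaderId, d.blendMode, d.depthFunc, d.cullMode })
    {
        h = (h ^ field) * 1099511628211ULL; // FNV-1a prime
    }
    return h;
}

std::unordered_map< std::uint64_t, PSO > psoCache;

// Return a cached PSO, creating it on first use (the expensive path).
PSO& GetOrCreatePSO( const PSODesc& desc )
{
    const std::uint64_t hash = HashPSO( desc );
    auto it = psoCache.find( hash );
    if (it == psoCache.end())
    {
        it = psoCache.emplace( hash, PSO{} ).first; // build a new PSO here
    }
    return it->second;
}
```

Repeated draws with identical state then hit the cache and skip the costly pipeline creation.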

Future work

There is room for improvement, as the engine is still in its early stages (v0.6 is under development).

1. PSO objects are expensive to generate, so it would be better to generate them before the main loop.
2. There is no instancing support yet.
3. Too little profiling has been done so far, as the main goal has been to get things working on all APIs (Vulkan, OpenGL, Metal, D3D12).
4. Transparency handling is currently in development.