Wednesday 24 July 2013

Minifig Factory


The new progress today was a minifig factory. It diverged from my usual factory pattern of having a pre-populated database of creatable objects that are requested by ID. Instead, a MinifigParameters object contains a description of each element and which bone it attaches to.

The MinifigParameters object is passed to the factory, which returns a fully constructed and configured minifig including a render component that can be added to the scene.

For ease of use the MinifigFactory creates a Minifig skeleton that describes the physics hierarchy. It creates all of the mesh instances with a MeshRenderComponent and ties everything together based on the MinifigParameters passed in.

Lastly, and as a temporary measure, the MinifigFactory also constructs an example MinifigParameters object that I can use. I'll probably end up with a MinifigParametersDatabase that can retrieve parameters objects that describe specific Minifigs by unique ID, or wrap that process in the MinifigFactory so you can request minifigs by unique ID or by passing in a parameters object.
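To make the moving parts concrete, here's a hypothetical sketch of the shape this takes. The names MinifigParameters and MinifigFactory come from above, but every field and signature here is my own guess:

#include <string>
#include <vector>

struct MinifigElement
{
    std::string MeshName;   // which element mesh to instance, e.g. an LDR part id
    std::string BoneName;   // which skeleton bone the element attaches to
};

struct MinifigParameters
{
    std::vector< MinifigElement > Elements;
};

// Usage might then look like:
//   MinifigPtr minifig = minifigFactory->Create( params );
//   scene->Attach( minifig->GetRenderComponent() );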

Solving the minifig construction task was a good self-contained process. The next step is probably adding the WalkRun controller with camera and then the animation controller after that.

Until next time, stay square!

Sunday 21 July 2013

This kinda worked


Showing off the detail, this kinda came together. There are still a few imperfections in the geometry that I'm going to resolve manually at a later date, but what I've got is suitable for now.


It's 1 AM and the mesh is in a usable state for animation, which is a good place to finish the weekend's work.

The next task is to add a Walk/Run controller to the Input and Camera stacks, and tie them to the minifig locomotion. Once it's moving in-game, it's time to write an animation library. I think the first pass will be FK bones with a single bone per vertex, CPU skinned.
This will be my reference animator; I'll then reproduce it with GPU skinning and switch up if the video hardware is capable.
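As a sketch of what that reference pass might look like - every type and name below is a stand-in I've invented, not the engine's actual API:

#include <cstddef>
#include <vector>

struct Vector3 { float x, y, z; };

// Row-major 3x3 rotation/scale plus a translation; a stand-in for a
// real matrix type.
struct Matrix4
{
    float M[3][3];
    Vector3 T;
    Vector3 TransformPoint( const Vector3 & p ) const
    {
        Vector3 r;
        r.x = M[0][0]*p.x + M[0][1]*p.y + M[0][2]*p.z + T.x;
        r.y = M[1][0]*p.x + M[1][1]*p.y + M[1][2]*p.z + T.y;
        r.z = M[2][0]*p.x + M[2][1]*p.y + M[2][2]*p.z + T.z;
        return r;
    }
};

struct SkinnedVertex
{
    Vector3 ModelPosition;  // rest-pose position
    int     BoneIndex;      // the single bone this vertex follows
};

// With one bone per vertex there is no blending: each posed vertex is
// simply its rest-pose position carried by its bone's world transform.
void SkinOnCPU( const std::vector< SkinnedVertex > & bind,
                const std::vector< Matrix4 > & bonePalette,
                std::vector< Vector3 > & posed )
{
    posed.resize( bind.size() );
    for ( std::size_t i = 0; i < bind.size(); ++i )
        posed[i] = bonePalette[ bind[i].BoneIndex ]
                       .TransformPoint( bind[i].ModelPosition );
}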

I'm considering rigging for two-bones-per-vertex skinning and pre-multiplying the inverse T matrix during mesh cooking, but I think that's going too far for now. I'll only need one bone per vertex for a long time yet, and will still get the animation fidelity I'm after.

With beauty shots like this, it becomes apparent how poor the specular highlights are on my reference shader, so a new plastic shader is in the immediate future too.


Submesh breakdown


Because I had a couple of minutes, I rewrote the LDR importer to divide the mesh into discrete submesh components; each component is now post-processed for normals and smoothing before being recombined.
Sections with a shared triangle edge are treated as the same submesh, and I just walk all of the triangle edges to build each one.
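A sketch of that edge walk - a flood fill over shared edges, assuming indexed triangles over a shared vertex list (all of the names here are illustrative):

#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct Triangle { int V[3]; };

// Assigns a submesh id to each triangle: triangles that share an edge
// (the same pair of vertex indices) flood-fill into the same id.
std::vector<int> BuildSubmeshIds( const std::vector<Triangle> & tris )
{
    typedef std::pair<int, int> Edge;
    std::map< Edge, std::vector<int> > edgeToTris;
    for ( std::size_t t = 0; t < tris.size(); ++t )
    {
        for ( int e = 0; e < 3; ++e )
        {
            int a = tris[t].V[e], b = tris[t].V[(e + 1) % 3];
            if ( a > b ) std::swap( a, b );   // order-independent edge key
            edgeToTris[ Edge( a, b ) ].push_back( (int)t );
        }
    }

    std::vector<int> ids( tris.size(), -1 );
    int nextId = 0;
    for ( std::size_t seed = 0; seed < tris.size(); ++seed )
    {
        if ( ids[seed] != -1 ) continue;      // already part of a submesh
        std::vector<int> stack( 1, (int)seed );
        ids[seed] = nextId;
        while ( !stack.empty() )
        {
            int t = stack.back(); stack.pop_back();
            for ( int e = 0; e < 3; ++e )
            {
                int a = tris[t].V[e], b = tris[t].V[(e + 1) % 3];
                if ( a > b ) std::swap( a, b );
                const std::vector<int> & shared = edgeToTris[ Edge( a, b ) ];
                for ( std::size_t n = 0; n < shared.size(); ++n )
                {
                    if ( ids[ shared[n] ] == -1 )
                    {
                        ids[ shared[n] ] = nextId;
                        stack.push_back( shared[n] );
                    }
                }
            }
        }
        ++nextId;
    }
    return ids;
}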

Here is a composition of the AirTanks (3838) and Hips (3815) meshes with each submesh colourised. There are a couple of tears in the mesh which I have yet to get to the bottom of, but in general it proves the concept and suggests the process is sound.



To achieve this I created a new MeshToolApp. My Application base class looks like this:
class Application
{
public: virtual int Execute() = 0;

public: Application() {}
public: virtual ~Application() {}
};

So the main method looks something like this:

int main( int argc, char ** argv )
{
    Application * MyApp = new MeshToolApplication();

    int Result = 0;
    if ( MyApp )
    {
        Result = MyApp->Execute();
    }

    // Clean up the application we allocated above.
    delete MyApp;
    return Result;
}

There is a little more to it in practice, as I first parse the command line parameters to determine which application to execute, and there is some other init/cleanup, but this demonstrates the pattern very well.
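As a sketch, that dispatch might look something like this - the --meshtool flag and the GameApplication default are purely my own illustration, not the actual code:

#include <cstring>

// Hypothetical dispatch; only MeshToolApplication appears above, the
// flag name and the default application are invented for the example.
Application * CreateApplication( int argc, char ** argv )
{
    for ( int i = 1; i < argc; ++i )
    {
        if ( std::strcmp( argv[i], "--meshtool" ) == 0 )
            return new MeshToolApplication();
    }
    return new GameApplication();
}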

The mesh tool app instantiates the same engine, renderer and input code as the main application, but supplies an orbiting camera so I can spin around the model to view it. It has simple import/load at the moment - once I'm happy enough with the post-process I'll add a GUI to load, convert, view and save meshes. The last step will be creating a MeshConvertApplication that scans the content folder for LDR meshes and serializes out a converted mesh as an automated process. It might render a screen capture of each mesh and save a bitmap for each one too.






C++ Test Framework


I had a couple of questions about my test framework, so I thought I'd come clean and show just how quickly you can get off the ground with a couple of macros in C++.

With all of the code in Pioneer, my goal has been to do the minimum required to get the job done, and my test framework is no exception. I rolled my own test framework as an exercise in understanding what a test framework does, is, and should do. Needlessly reinventing the wheel is core to the development philosophy of Pioneer and a great way to learn about systems.

I decided that I want to be able to tag tests at the end of the source file for the class they test, and that the syntax should be simple. I also wanted them to be easy to compile out so I knew macros were likely to be involved.

/////////////////////////////////
#include <unittest.h>
START_TEST( ExampleTest )
... test code ...
END_TEST


My minimum test for a class is a smoke test that instantiates an object and confirms that it's non-null. This is a sanity test for the constructor that checks for crashes, assertions, exceptions, errors, etc. I prefer to only use tested dependencies - test-friendlies - and any other test on the object would require an instantiated object anyway.

/////////////////////////////////
#include <unittest.h>
START_TEST( World_SmokeTest )

    {
        // First check: the World constructs without error.
        WorldPtr world( new World() );
        CheckNonNULL( world.get() );
    }
    // The scoped pointer destroyed the World when the block closed, so
    // reaching this line means destruction didn't crash either.
    CheckTrue( true );

END_TEST

This smoke test makes two checks: the first is that a World is instantiated without error, and the second is that the object can be destroyed without error. I've used a scoped pointer, so this test just needs to report two successful checks and I'm happy. The default behaviour for my test log is to report the number of passed tests, the number of successful checks and a verbose list of all of the failures.

Within each test I can call on any of these checks, which have so far been enough:
CheckNonNULL( a ) 
CheckNULL( a )
CheckEqual( a, b )
CheckNotEqual( a, b )
CheckTrue( a )
CheckFalse( a )

The START_TEST macro declares a class with the name of the test. It also registers it with a static instance of the test suite. The test suite is the only global static object I use, and it is compiled out in production code. It's the worst way I could configure the test runner, except for all of the other options.
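Since a couple of people asked, here is roughly how that pair of macros can be built. This is a minimal sketch that matches the description above - everything except the two macro names is my own guess at the implementation:

#include <string>
#include <vector>

class TestCase;

// The one global static: every test registers itself here at startup.
class TestSuite
{
public: static TestSuite & Instance() { static TestSuite suite; return suite; }
public: void Register( TestCase * test ) { Tests.push_back( test ); }
public: std::vector< TestCase * > Tests;
};

class TestCase
{
public: TestCase( const char * name ) : Name( name ) { TestSuite::Instance().Register( this ); }
public: virtual ~TestCase() {}
public: virtual void Run() = 0;
public: std::string Name;
};

// START_TEST declares a class named after the test plus an instance of
// it at file scope; constructing that instance performs the
// registration. The Check* macros would then record pass/fail results
// against the suite from inside Run().
#define START_TEST( name )                          \
    class name##_Test : public TestCase             \
    {                                               \
    public: name##_Test() : TestCase( #name ) {}    \
    public: virtual void Run();                     \
    } s_##name##_Test;                              \
    void name##_Test::Run()                         \
    {

#define END_TEST                                    \
    }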

With my START_TEST macro, the test case is registered and becomes part of the global test suite, which makes adding new tests super easy. There is no excuse for not adding a test after writing (most) public methods, and it's easy to write the test before the method too.

However, a few improvements could still be made. As the START_TEST macro declares a class with the test name, all test names have to be globally unique. This hasn't been a problem because I prefix the test name with the name of the class being tested, but it's a weakness in the system nonetheless.

I'd like to be able to declare tests per class. For example the test declaration might be:

START_TEST( World, SmokeTest )
... test code ...
END_TEST

So now I could report the class name of the system under test in the test report along with the test name, and potentially use it for test-coverage metrics: I could count the number of tested classes and the tests per class, and potentially add instrumentation for untested classes so I had a good idea of coverage.
I have an idea that a test-energy graph could be overlaid on a class graph so I could visualise test coverage as a heat map.

Secondly, I might want a test suite to contain subsets of classes: Input/GUI tests, gameplay tests, network code tests, etc. These could be grouped with an ADD_TEST_TO_SUITE macro or with an extra parameter in START_TEST for the suite name. I don't have a reason to run a subset of tests though - I really like having them all run every time, and they are so fast that there is no cost to doing this. As soon as I fall into the practice of running a subset I might suffer from slow-to-execute tests, or from breaking a test that I'm not running.
I've not got a good reason to add test suites yet, but I've got a niggling feeling it's a good idea.


Tuesday 16 July 2013

Mesh Smooooothing


Mesh smoothing, in general terms, is pretty straightforward, but automating it is a little harder.

I'm trying to automate my normal unification rather than manually specifying the orientation for each mesh. The compromise I've come up with is to record the bounds of the mesh, and compare them to the bounds of the unified normal tips. Whichever normal orientation has the larger bounds for its tips must be the mesh that is the right way around.
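Here's a self-contained sketch of that bounds comparison - my own illustration of the heuristic, not the actual tool code:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static void Grow( Vec3 & mn, Vec3 & mx, const Vec3 & p )
{
    if ( p.x < mn.x ) mn.x = p.x;   if ( p.x > mx.x ) mx.x = p.x;
    if ( p.y < mn.y ) mn.y = p.y;   if ( p.y > mx.y ) mx.y = p.y;
    if ( p.z < mn.z ) mn.z = p.z;   if ( p.z > mx.z ) mx.z = p.z;
}

static Vec3 Tip( const Vec3 & p, const Vec3 & n )
{
    Vec3 t = { p.x + n.x, p.y + n.y, p.z + n.z };
    return t;
}

// True when the unified normals face outward: the box around the
// normal tips should then be larger than the box around the mesh.
bool NormalsFaceOutward( const std::vector<Vec3> & positions,
                         const std::vector<Vec3> & normals )
{
    Vec3 mnV = positions[0], mxV = positions[0];
    Vec3 mnT = Tip( positions[0], normals[0] );
    Vec3 mxT = mnT;
    for ( std::size_t i = 1; i < positions.size(); ++i )
    {
        Grow( mnV, mxV, positions[i] );
        Grow( mnT, mxT, Tip( positions[i], normals[i] ) );
    }
    // Compare the summed extents of the two boxes.
    float meshExtent = ( mxV.x - mnV.x ) + ( mxV.y - mnV.y ) + ( mxV.z - mnV.z );
    float tipExtent  = ( mxT.x - mnT.x ) + ( mxT.y - mnT.y ) + ( mxT.z - mnT.z );
    return tipExtent > meshExtent;
}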

My mathematical solution was to determine whether the normals are divergent or convergent, which kinda works for platonic solids but probably not for any of my meshes. I'd love to investigate this further as it feels like fun, but it remains the wrong solution to the problem, so I'll be leaving that stone unturned for a long time.

Checking the bounds should work for non-enclosed meshes, which means I can split my airtank into several submeshes and resolve them individually with reasonable confidence they will all be the same way around when I recombine them.

I've also written some code I really don't like for this. My CalculateFaceNormals(...) method takes one parameter, which it operates on via side effects, and has no return value. Likewise for my UnifyNormals method. They both take a collection of triangles. Now that it's in blog form, it's really apparent that they should be methods of the MeshResource class.

In short I think I really prefer
myMesh->CalculateFaceNormals();
to
CalculateFaceNormals( triangleCollection );

So in the spirit of being a Good Boy Scout I think I'll move those over before I leave the mesh code alone. I've found it reasonably easy to read, even though I've not touched mesh code in forever, and the new importer dropped straight in with no fuss.



As a consumer of the Mesh code, the class you deal with is the MeshFactory. I use a MeshFactory to make MeshInstances for me, and put them in a MeshRenderComponent that I can attach to the scene graph.

The MeshFactory gets MeshResources from a MeshDatabase.
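In code, the consumer flow reads something like this sketch - the class names are from the post, but every signature here is my assumption:

// Illustrative only: CreateInstance and Attach are guessed signatures.
void AddMeshToScene( MeshFactory & meshFactory, SceneGraph & sceneGraph )
{
    MeshInstancePtr instance = meshFactory.CreateInstance( "3838" );
    MeshRenderComponentPtr component( new MeshRenderComponent( instance ) );
    sceneGraph.Attach( component );
}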

All of the new import and post-processing works behind the scenes of even the MeshDatabase. As long as the importer derives from MeshImporter and returns a MeshResource, I can keep developing new loaders and mesh processing that remain seamless to the rest of the application.

By moving my CalculateNormals(), UnifyNormals() and AutoSmoothNormals() code to the MeshResource, I can apply them to MeshResources loaded from any file, or even to procedurally generated meshes. I'm tempted to make them mesh-processing components of the MeshResource because they don't apply to *every* mesh, but for now I think adding them directly to the MeshResource is fine.

I'd *kind* of like to be able to apply them to MeshInstances so I could include them in a mesh view tool and toggle each on and off, but really that's not relevant to the task so it can wait.






Sunday 14 July 2013

Gasping for air


So my LDR mesh importer is far from complete, but it's a good start. If I want to build meshes at runtime suitable for GPU skinning there is still more work to do. I've broken the back of the mesh pipeline, but I do have a couple of bugs to ponder before pushing too far ahead.

The LDR format isn't really designed with run-time performance and games in mind. The models are generally high detail - in many cases higher detail than I need - but not without flaws in the geometry. Model 3838, the classic space Air Tank, is a good example of a well detailed model that diverges from games requirements: it's hollow. Being hollow is great for injection molding but not so great for realtime 3D. I could cap the ends and save on the internal geometry. In fact, a lot of LDR models are brick-accurate in ways like this that are irrelevant to me.
Secondly, the AirTank is clearly several submeshes, and I'd rather have one mesh per element.

The LDR format doesn't specify a winding order - worse, it specifies that winding order is irrelevant. However, I'm using counter-clockwise triangles, so I've had to write a Unify Normals mesh post-processor, which isn't too much work.
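The heart of a unify pass is deciding when two neighbouring triangles disagree. As a sketch - illustrative types and names - a shared edge traversed in the same direction by both triangles means one of them needs rewinding:

#include <algorithm>

struct Triangle { int V[3]; };

// In a consistently wound mesh, an edge shared by two triangles is
// traversed in opposite directions by each. If both triangles traverse
// it in the same direction, their windings disagree.
bool SharedEdgeSameDirection( const Triangle & a, const Triangle & b )
{
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
            if ( a.V[i] == b.V[j] && a.V[(i + 1) % 3] == b.V[(j + 1) % 3] )
                return true;    // same direction: one triangle must flip
    return false;
}

// Flipping a triangle's winding is just swapping two of its indices.
void FlipWinding( Triangle & t ) { std::swap( t.V[1], t.V[2] ); }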

Couple this with the knowledge that the meshes are not contiguous, which means you can't just walk the geometry and unify normals. This is something of a shame, and is demonstrated here.
The AirTank element is recognizably made from three or four non-contiguous sub-meshes:
the crossbar at the top, the mounting harness and the two cylinder tanks.




The closeup demonstrates that the mounting harness contains a mixture of CW and CCW triangles by visualizing the vertex normals as red lines.

The lovely attention to detail is otherwise well appreciated. It's just that in this instance it gets in the way of progress.

My solution is to reduce the model into its submesh components, unify the surface normals per submesh, and then concatenate back into the original mesh before saving out as a serialized bytestream.
My quick-fix, since silicon is free and infinite, is to disable back-face culling and ignore the winding order. The increase in rendering cost is insignificant in the small scenes that I'm using.
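For what it's worth, if the renderer sits on OpenGL that quick-fix really is a one-liner - the post doesn't name the graphics API, so treat this as illustrative:

// Assumes an OpenGL renderer (not confirmed above). With culling off,
// winding order no longer affects visibility.
glDisable( GL_CULL_FACE );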

By not culling back faces, I can move straight on to building a skeleton, rig and skin out of my minifig mesh parts.





Spaced Out

After taking a bunch of time off, I thought I'd write some mesh tools.
This takes a vertex list, generates face normals and then uses the face normals to generate vertex normals.

In short, it's a mesh-auto-smoothinator, which will be a keystone in my content pipeline. The example mesh shows a side-by-side of auto-normals and then a smoothinated mesh with smooth areas and hard edges maintained.
(Also, this is a super-secret clue on what the next feature is.)
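For the curious, the hard-edge-preserving part of such a smoothinator usually comes down to an angle threshold. Here's a self-contained sketch of that idea - my own illustration, not the actual tool code:

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  Add( Vec3 a, Vec3 b )  { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static float Dot( Vec3 a, Vec3 b )  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Normalize( Vec3 v )
{
    float len = std::sqrt( Dot( v, v ) );
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

// Vertex normal for one corner of one face: average in neighbouring
// face normals only when they are within the angle threshold, so
// smooth areas blend while hard edges stay hard.
Vec3 SmoothedNormal( int vertex, int face,
                     const std::vector< std::vector<int> > & facesAtVertex,
                     const std::vector<Vec3> & faceNormals,
                     float cosThreshold )
{
    Vec3 sum = faceNormals[ face ];
    const std::vector<int> & shared = facesAtVertex[ vertex ];
    for ( std::size_t i = 0; i < shared.size(); ++i )
    {
        int other = shared[ i ];
        if ( other != face &&
             Dot( faceNormals[ face ], faceNormals[ other ] ) >= cosThreshold )
            sum = Add( sum, faceNormals[ other ] );
    }
    return Normalize( sum );
}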



The LDR mesh importer loads LDR primitive objects. It throws away the line type data, although I'm seriously considering processing this at a later date, since I'd love to have good wireframes.
The Mesh Import was reasonably quick at one 200-line class. Although there is an obvious extraction refactor to split the code up, I found it manageable enough at its size so have let it be.

The Mesh Post Processing was around 500 lines of code, rammed into a single class with about four or five responsibilities:

  • Quantize the mesh so that similar vertices are identical (sketched below)
  • Build a triangle list from the vertex data
  • Unify normals by rewinding data
  • Calculate face normals
  • Autosmooth to generate vertex normals
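The quantize step is the one worth sketching: snap each position to an epsilon grid so near-identical vertices collapse into one. This is my own illustration of the idea, not the tool's actual code:

#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

// Ordered key so near-identical positions land in the same map slot.
struct GridKey
{
    long x, y, z;
    bool operator<( const GridKey & o ) const
    {
        if ( x != o.x ) return x < o.x;
        if ( y != o.y ) return y < o.y;
        return z < o.z;
    }
};

// Snap each position to an epsilon grid; vertices that quantize to the
// same cell collapse to one output vertex. Returns the old-to-new remap.
std::vector<int> WeldVertices( const std::vector<Vec3> & in,
                               std::vector<Vec3> & out,
                               float epsilon )
{
    std::map< GridKey, int > unique;
    std::vector<int> remap( in.size() );
    for ( std::size_t i = 0; i < in.size(); ++i )
    {
        GridKey key = { (long)std::floor( in[i].x / epsilon + 0.5f ),
                        (long)std::floor( in[i].y / epsilon + 0.5f ),
                        (long)std::floor( in[i].z / epsilon + 0.5f ) };
        std::map< GridKey, int >::iterator it = unique.find( key );
        if ( it == unique.end() )
        {
            it = unique.insert( std::make_pair( key, (int)out.size() ) ).first;
            out.push_back( in[i] );
        }
        remap[i] = it->second;
    }
    return remap;
}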

The system relies on a dozen or so unit tests, although coverage isn't exhaustive.