Blender 2D Animation with Meshes

This is a follow-on from the workflow discussed in the previous post: Preparing 2D Art for Animation.

This is the end result of the process described:

Sprightly Spring Deer

I’m looking to see if there are any advantages to using Blender as a 2D animation tool with meshes over Unity’s sprite- and spline-based animation system. The differences between them in effort, usability, and flexibility are many and subtle, hence the investigation. The two biggest differences for me are:

  1. With the Blender option you are animating in Blender (which I like much more than animating in Unity). The downside is that you have to import the animations into Unity, and they are pretty hard to modify once they are there. This also means it’s harder to adjust them to react to other actors, objects, and scene elements once they are in the game.
  2. With the Blender approach it’s a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use lighting effects on a 2D image in-game is pretty huge for making it look good and for effects or plot devices (think lightning on a dark and stormy night). You can get lighting effects on Sprites in Unity if you swap out the default shader for another shader, or with the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh-and-material based solution (see further below for more on that).
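As a rough illustration of that shader swap (a minimal sketch for the built-in render pipeline, not part of the Blender workflow itself – the component name is mine, but Sprites/Diffuse is the stock lit sprite shader):

using UnityEngine;

// Sketch (built-in render pipeline): swap a SpriteRenderer's default
// unlit material for the stock "Sprites/Diffuse" shader so scene
// lights affect the sprite. LWRP/URP use different lit shaders.
[RequireComponent(typeof(SpriteRenderer))]
public class LitSprite : MonoBehaviour
{
    void Start()
    {
        var sr = GetComponent<SpriteRenderer>();
        sr.material = new Material(Shader.Find("Sprites/Diffuse"));
    }
}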

Comparing Unity Sprites to Blender Meshes in Unity

The images directly below are taken from the Game view in Unity. The one on the left is the sprite-based spline rendering while the one on the right is the mesh-based fbx from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the Mesh on the right – it’s not huge and can be improved with some tweaking (Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material’s Metallic and Smoothness parameters also helped).

Sprite (left) and Mesh (right)
Night Time lighting affects the Blender mesh image but not the Sprite based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.
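As an aside, if you want to apply those import tweaks to every baked texture rather than clicking through each one, an editor script can do it on import. This is only a sketch – the class and the “BakedDeer” folder name are made-up examples:

using UnityEngine;
using UnityEditor;

// Sketch: automatically apply the import tweaks mentioned above
// (Bilinear filtering, Aniso Level 2) to textures in a given folder.
// "BakedDeer" is a hypothetical folder name - use your own path.
public class BakedTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("BakedDeer")) return;
        TextureImporter importer = (TextureImporter)assetImporter;
        importer.filterMode = FilterMode.Bilinear;
        importer.anisoLevel = 2;
    }
}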

As stated above, you can drop an image onto an object in Unity as a material, but it doesn’t light as well and is prone to shadowing. Use the Cutout and not the Transparent Rendering Mode in Unity or you get a shadow on the transparency. The image below shows a material with a Standard shader and an image on a Unity 2D plane mesh; note the shaded square around the outside that marks the image boundary.

Transparency Shader

The image below is the same sprite using a material with a Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default sprite material. The bottom one is a Unity 2D plane with the custom material applied. Both tests look better than the quality of the Blender imported model, can be layered, and react to lighting in game.

So these are the alternatives to the process I’m describing below with Blender, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity’s Animator window: Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a deer kick I had 88 different animation points – that’s a LOT of stupid clicking down through an object hierarchy to add properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right-click and add all properties for an object and then, if you are patient enough, remove the ones you don’t use.

I do like the record feature that adds properties dynamically, but these problems, plus an interface I find finicky and too small, made me look at Blender.

Importing the Images to Blender and Setting up the Workspace

Moving on to working in Blender with images and meshes, the basic process is this:

  1. For every layer in the artwork of our animated character we exported a separate image file on a transparency. Each png file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image, but since all the parts might move I wanted to group them all under empties.
  2. A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
  3. The image is baked into the UV of the mesh.
  4. The components are then parented to an Armature with automatic weights.
  5. The meshes are weight painted to correct the deforms.
  6. Now it’s ready for animation.

The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the ‘x’ Axis so they are visible in the viewport from the “front” view.

All the Deer components Frankenstein’d together into a whole
The visibility of parts is toggled on and off so individual pieces can be worked on.

Making the Meshes

For each piece a mesh is made. I took two approaches here:

  1. Model a plane mesh as closely as I could to the shape of the sprite.
  2. Use a plain rectangular mesh and rely on weight painting to deform correctly (more on that later).

For the modelling approach I started with an image and dragged a plane over it in Edit Mode as a wireframe. The origin of the plane was kept at (0, 0, 0) so all the pieces that were made had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.

The foreleg Mesh

This method was a lot of work, manually placing each vertex on the border of the image. If a vertex is placed a little outside the image you get white space on the final product, and if you don’t come all the way to the edge you lose some of the black line and the smooth finish (the UV mapping ends up slightly out). I also found that if you have to warp the mesh too much for a sharp angle or an awkward placement of the square tiling, you get minor defects along the line during animation.

Vertices placement

After about the fourth component I got a bit sick of manually moving around vertices. So I took another approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas when adding the armature and weight painting. The rear leg below is just one big mesh subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.

Venison

In Solid shading here is a comparison of the rear leg mesh and the front leg mesh.

Solid Mesh Planes

The image below shows both meshes in Render mode (including the armature), and you really can’t tell the difference between them.

Rendered Meshes

The whole mesh ended up looking like this:

Armature and Weight Painting

As you can see above, the armature was added and the Mesh objects were parented to it with automatic weights. Because everything is a flat plane, and some planes are meant to overlap the others (the closest front leg is in front of the torso and the back leg is behind it), parenting the armature with automatic weights meant that the front, middle, and rear meshes would each get an equal measure of weight in parts. This all had to be manually painted.

Here the Torso was weighted across three bones and only the rear one affects the rump (any leg meshes had to be removed from these vertex groups).

Weights had to be carefully graded, otherwise warping of the line would result:

The weight is too strong a transition here.
It causes artifacts like this.
This is the resulting gradient changes in weight to get a correctly deforming line.

The other problem was that random single or lone groups of vertices would be weighted to a bone and not visible until you moved it in pose mode:
A few vertices on the chest were registered to the root bone. These all have to be manually removed.

The other interesting anomaly with the large rectangular plane meshes was that the weighting would sometimes cause improper warping of the mesh, bending it around itself in places, which showed up as black squares.

The foot vertex group covers all these vertices.
Which you cannot tell in edit mode when you select it with “show weights”.
During transform in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It’s because the shin bone weight doesn’t go all the way to the edge.
It looks right in edit mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).

These are pretty quick things to fix really, but it took a while to work out what exactly was happening. It was still faster than individually making all the mesh components by hand to fit the image.

A better workflow would probably be to make simpler, reduced meshes that fit closer to the image without having to slavishly manhandle the vertices around the borders.

The Shading

UV mapping is easy here, but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:

The Transparent Shader in Blender

That’s about it for getting everything set up in Blender. For more info on the animation steps and getting it into Unity see my other post about this. https://www.zuluonezero.net/2021/11/16/exporting-multiple-animations-from-blender-to-unity/

Exporting Multiple Animations from Blender to Unity

This is one of those workflows that is always a bit fiddly to get right, so I’ve documented how to do it here in case I forget! One of the downsides to being a solo developer is that your skillset is always being stretched by the available time, so you can end up getting proficient in one aspect of game building and then, by the time you get back to that phase, you’ve forgotten everything you learned and all the tricks of efficiency and process. Also, in case someone else needs it.

This is what we are aiming for in Unity: an imported mesh with multiple animations that can be called independently.

Blender Workflow for Saving the Animations

Start with a new project. Select everything (the default cube and lamp), then X -> Delete.

In this case I’ve imported an existing fbx of a hand with a supporting armature ready for animation. I won’t go over the modelling or rigging procedure; there is plenty of help with that out there, but if you need it I would recommend the Riven Phoenix courses because they are so dense (these tutorials are not quick-start or tricks videos but deep, deep dives into the process, the reasons behind it, and how stuff works in Blender at a very technical level).

This is how I lay out Blender for animation: a dual-pane front and right view with the Timeline below

Get your animation window set up and make sure the timeline is available at the bottom.

Making a Pose Library

In the Outliner select the Armature and make a Pose Library.
We can use this to set a few basic poses to make the animation process run a little easier.
The poses will be major key frames that we can interpolate between.

It’s not the best workflow, but the tech preview for upcoming Blender versions includes an enhanced workflow for the animation process which looks really exciting – google it.

Make a Pose Library

Add the default pose as the first item.
Go to Pose Mode. Get the model into your default position and save this pose. (Important – this is the pose the model will be exported in by default, so try to make it your idle or standing pose.)

Save several other poses (make sure you save all the bones you want the pose to affect – usually this is all the bones).
You can overwrite poses if you get it wrong.

Also, when a pose is added and a pose marker is created, the keying set is used to determine which bones to key. If any bones are selected, only keyframes for those bones are added; otherwise all bones in the keying set are keyed (this is why I usually have all the bones selected).

I’ve made several poses and saved them

It’s a good idea to set and select the poses a few times for each one to make sure you got it right. I’ve found that sometimes it’s a bit glitchy, or I do something a little bit wrong and it doesn’t save properly (actually it’s probably not glitchy, it’s probably just me).

That Book icon with the question mark is useful when you have all your poses completed. Pose libraries are saved to Actions. They are not generally used as actions, but can be converted to and from them. If you use this icon to “sanitize” the pose library it drops all the poses down to an Action with one pose per frame, so you can go into the NLA Editor, select this track, and scrub through them. This might be useful as a clip in Unity if you want to split it up using the timing editor and make custom animations in Unity (I’ve never tried it).

Making the Animations

Go to the Dope Sheet and switch to the Action Editor view.

Action Editor


Make the animation (i.e. start on the first frame, assign the pose from the library, then Shift + I to save rotation and location; go to the last frame, assign the next pose, and Shift + I to save again).

In the Timeline make sure you are on the beginning frame. Set the pose you want to move from (first keyframe) and save the required parameters.

Shift + I
Insert Location and Rotation
(make sure the Armature is Selected)

Start with the first pose
The Dope Sheet

Move to the next frame at a suitable distance along the timeline and change to your ending pose in the editor. Save the Location and Rotation parameters again (if that’s all that’s changed).

Add the second pose
Saved Pose in the Dope Sheet

Pushing the Animation down the Action Stack

Once you are done hit the “Push Down” button. This is the magic button.

Magic Push Down Button

Next move over to the Nonlinear Animation window.

The NLA Window

Your animations get made as Actions in the Non-Linear Action Editor Window: NlaTrack, NlaTrack.001, etc.

In the NLA Editor you can click the Star next to the NLA Track (rename them to make it friendlier) to scrub through the track. Make sure you got the right animation under the right name etc.

After you hit Push Down for each finished animation it appears as an NLA Track in the NLA Editor

I made a few more animations and hey presto. Each one of those NlaTracks is an animation that we can use in Unity. The PoseLib track is also marked there with orange lines – one for each pose on a frame – which is a good reference track if you need it.

The Animations Stacked up in the NLA ready for Export with the *.fbx

Export from Blender

These are the settings I use to export. It’s safer to manually select only the Armature and the Mesh here.

It’s useful to have Forward as -Z Forward for Unity.

Blender Export Settings

Import Into Unity

This is what it looks like when I import the .fbx into Unity.

The Animation Tab of the Asset (on import)

The animations come out as duplicates but you only need one set. Work out which ones you want and delete the others using the minus button when you import. This bit can be a bit fiddly, and sometimes I’ve had to do the export and import process a couple of times to get it to work. Sometimes what works is to drag and drop all your animation NLA Tracks into one track in the NLA Editor and select it with the star before exporting. Sometimes it works – sometimes not. I’m not sure why.
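If you end up pruning the same duplicates over and over, the clip list can also be trimmed from an editor script. This is only a sketch – the menu path, the asset path, and the assumption that duplicates share a clip name are all mine:

using System.Linq;
using UnityEditor;

// Sketch: keep one animation clip per distinct clip name on an
// imported model and drop the duplicates. The .fbx path here is
// hypothetical - point it at your own imported asset.
public static class PruneDuplicateClips
{
    [MenuItem("Tools/Prune Duplicate Clips")]
    static void Prune()
    {
        var importer = (ModelImporter)AssetImporter.GetAtPath("Assets/Models/Hand.fbx");
        importer.clipAnimations = importer.defaultClipAnimations
            .GroupBy(clip => clip.name)
            .Select(group => group.First())
            .ToArray();
        importer.SaveAndReimport();
    }
}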

After that I drag the model into the scene and add an animator controller. Then you can just drag the animations from the imported model into the Animator window like below and set up transitions as you see fit. Below I’ve made them all come from Any State and added some Triggers so I can play with them in the window for testing.
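For testing outside the Animator window, the same triggers can be fired from a little script on the model. A minimal sketch – the trigger and key names are just examples:

using UnityEngine;

// Sketch for firing Animator triggers while testing.
// "Kick" is an example trigger name - use whatever you named
// your transitions from Any State in the controller.
[RequireComponent(typeof(Animator))]
public class AnimationTester : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.K))
        {
            animator.SetTrigger("Kick");
        }
    }
}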

You can see the result of that testing in the .gif at the top of the article. (Apologies for the quality of that .gif – it seems to have picked up some ghosting artifacts around the fingers – promise it looks awesome on the screen.)

The Animator Controller

There are a few limitations to this workflow that need to be mentioned. Some people like to save their whole .blend file into their Unity Assets so they can make updates on the fly while modelling, etc. That won’t work with this setup. The animations need to be saved down to a *.fbx file so that Unity can find them when the asset is imported. So if you like to have access to your .blend and use animations like this, you need to export the *.fbx and import it again, keeping both the .blend and the .fbx in your asset folders, which can be a bit confusing and messy and makes for a bigger project.

BYE

Blender – Custom Bevels

This week for my game The Gap I’ve been making some house components. The game is set in the suburbs and there will be a lot of house interiors. In what I envision to be the opening scene of the game, your main character wakes in his or her bedroom, alive with excitement and ready to explore the world. None of which has much to do with cornices, but it’s these details that make the game look like a world and add to the immersion.

So here is an example of a cornice that I have made in Blender using the Bevel Modifier.

The Cornice using a Vertex Group and Custom Bevel

This starts as a cube with about a quarter cut out of the form and a reduced height, but any straight edge will do. First use edge select to assign all the relevant edges you want to bevel to a Vertex Group. (Select the edges and in the Object Data panel use the “Assign” button under the Vertex Groups section.) Then add the Bevel Modifier and, as you can see in the image above, set the Limit Method to Vertex Group and choose the Vertex Group you just assigned. Choose Custom Bevel and use either one of the presets or the curve widget at the bottom to create your own bevel shape. There are a couple of nice cornice-style presets in that section to get you started. (It helps to play around with the Amount and Segments settings at the top to improve the look – also try Smooth Shading on the object.)

Below is a window frame I used the same technique on, but with a different profile. I used Ctrl+R to make some loop cuts around the inside of my window frame and used edge select to assign the cuts to a Vertex Group. Then again I used a Bevel Modifier to make a custom set of ridges around the window frame, to look like the fittings of the aluminium framing around the glazing.

Custom Bevel on a Loop Cut to make Window Mouldings

You can see I also added a normal Bevel Modifier to the edge of the window sill to create the outer shape where rain runs away from the window frame.

I did a quick mock-up in Unity to see how they performed under the often difficult lighting environment of the game engine. As you can see below by the shadows on the walls and the spotlight on the floor, it’s not an ideal setup and the cornices are looking pretty crap. At the top of the room on the back wall I have two longer cornice pieces, and a corner piece is visible as well. The two at the back are picking up different light sources, so one looks lighter than the other. They are clearly three separate objects and not one nice clean line. It might help to add some more surface geometry to the longer pieces so they are not just one long face (loop cuts or subdivisions). Alternatively I could measure them out to fit the room exactly and meet in the corners – but then I would have to make multiple lengths instead of a more modular, re-usable approach.

Cornices under lighting conditions in Unity

Character Design

The other thing I did this week that was actually productive was some more character designs. This one is Hippo Boy catching a football, in a couple of different shading styles (still playing with Clip Studio and still enjoying it). Next up I’m thinking about all the ways I could program a rugby-style football game.

Hippo Boy
Pop Style Hippo Boy

Getting a Foot in the Door of Game Design

First of all – sorry about the misleading title – this post is about getting the doors working in the Endless Elevator game that we are currently developing. I thought it was a good pun, and since this post is all about our development process, it wasn’t too bad a fit. The only career advice I’ve got is just to start making games… any games.

You’d think making a door that opens and closes would be a pretty simple thing to do but surprisingly it wasn’t.

I designed and built some 3D models of building components in MagicaVoxel and exported them as .obj files. I use MagicaVoxel because it’s free and really quick and simple to use. When I model components I can be sure that they are all exactly the same scale and that all the different bits (no matter when I made them) will fit together without any hassle. But the models are not optimised for game engines, as they have a reasonably high poly count due to the modelling process within the program. Most of the models come out with heaps of unnecessary triangles that need to be managed first. In most cases I import the object into Blender and use the Decimate modifier in Planar mode (with an angle of about 20 degrees, if you are interested). In the case of the door object it was pretty simple – it is just a door after all – and I didn’t need to optimise it.

Here is what the door looks like in MagicaVoxel:

Notice that the door object sits just forward of the enclosing frame and that, when exporting the object, the center is at the middle of the bottom plane of that frame. The door is off-center because it’s modelled to fit exactly into a doorway that sits in that position within the frame. This saves me a huge amount of time lining everything up in Unity, as the position of the door (or any other object for that matter) is already in the right spot. The problem is that the point of origin for the door is in the wrong spot: it exports a few units behind the door and on the floor! This becomes important when you try to rotate that object (like you would when opening and closing a door) and the pivot point is not where the hinges should be.

To fix this I had to import the .obj file into Blender and reposition the point of origin for the model.

This is what it looks like in Blender when I did this:

To do it I selected the edge that I wanted the door to pivot on when opened.

Then in Edit Mode:
Mesh -> Snap -> Cursor to Selected

In Object Mode:
Object -> Transform -> Origin to 3D Cursor

That puts the cursor at the correct position in the middle of that edge, where the hinges would be, and resets the point of origin (which is where it will pivot when rotated in Unity) to the right spot.

Once everything was imported into Unity and looking good, I set up a prefab for the “Doorway” object with the door as a child object. The doorway object has a bunch of colliders to stop the player walking through the walls, and a big sphere collider where the door is to trigger the open/close function when the player walks into it.

This is what the doorway looks like in Unity:

Next I scripted up a few options for opening the door. I’ll post the script at the end of this article, but basically there were three ways of opening the door that I wanted to test. (Actually I tried about six ways but whittled it down to the simplest methods – and just as an aside, there is an actual hinge joint in Unity if you ever need it.)

This is how the script looks in the editor:

Notice the slider at the bottom to control how far I want the door to open. It’s really handy to have this when playing in the editor and getting your settings right. If you want to know more about using it see this post

The three tick boxes are for testing the three different ways of opening the door.

Snappy was a quick, simple transform of the rotation from closed to open with no in-betweening. It looks a bit unnatural as the door magically goes from closed to open, but it’s not too bad, and most people are pretty used to similar behaviour in video games.

The active line in the code is:
the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);

The next method works more like a door should, in that it swings the whole way open and closed in a steady fashion. But the big problem with this method was that, while it was fine when the character was going into the doorway with the swing of the door, when it was time to come back out the door was in the way. There was not enough room to trigger the door opening from the other side without being hit by the door as it opened. Plus, if the player enters the collider on the wrong trajectory, the character gets pushed through a wall by the swinging door, which is sub-optimal. I called this method InTheWay!

The active line here is:
the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);

In an effort to combat this I tried a hybrid method that swings the door open to a point that won’t hit the player and then does the magic transform to get it all the way open. I call this one aBitBoth. It looks a little weird too – like there is an angry fairy pulling the door closed with a snap after the character enters.

Here are all three to compare.

Snappy

In The Way

A Bit of Both

I’m not too sure which one I’m going to use at this stage. The Snappy method works best for now, but I like the In The Way method better: it looks more natural, and I like that you have to wait just a few milliseconds for the door to swing (it adds tension when you are in a hurry to escape a bullet in the back). I could halt the player’s movement behind the door when it triggers to open from the closed side, or play around with the radius of the sphere collider. Neither solution seems like a great idea right now, but something like that will need to be done if I’m going to use that method. Maybe I could just have the door swing both ways and open outwards when he is behind it, but that’s probably a bit weird for a hotel door.

Here is that script that I was testing with:

using UnityEngine;

public class OpenDoor : MonoBehaviour {

    public bool openMe;
    public GameObject the_door;
    public bool snappy;
    public bool inTheWay;
    public bool aBitBoth;
    public Vector3 targetPositionOpen;
    public Vector3 targetPositionClosed;

    [Range(0F, 180F)]
    public float turningOpen;

    void Start ()
    {
        // The door starts closed, facing 180 degrees on Y.
        // The open angle is set with the slider in the Inspector.
        targetPositionClosed = new Vector3(0f, 180f, 0f);
        targetPositionOpen = new Vector3(0f, turningOpen, 0f);
    }

    void Update()
    {

        if (openMe)
        {
            OpenMe();
        }
        else
        {
            CloseMe();
        }

    }

    private void OpenMe()
    {
        // InTheWay: swing the door steadily until it reaches the open angle.
        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
        }

        // Snappy: jump straight to the open rotation with no in-betweening.
        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
        }

        // ABitBoth: swing until clear of the player, then snap the rest of the way.
        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)  // 144f
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
            }

        }

    }

    private void CloseMe()
    {
        // Mirror of OpenMe: swing or snap back to the closed angle (180 degrees).
        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y <= 180)
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
        }

        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
        }

        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y <= turningOpen)  // 144f
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
            }
        }
    }

    void OnTriggerEnter(Collider col)
    {
        string colName = col.gameObject.name;
        Debug.Log("Triggered OpenDoor!!! : " + colName);

        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)") 
        {
            openMe = true;
        }
    }

    void OnTriggerExit(Collider col)
    {
        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)") 
        {
            openMe = false;
        }
    }

}

The Five Games in Ten Weeks Challenge!

Some months back I went to a local presentation by the Unity champions at The Arcade (a collaborative workspace specifically for game developers and creatives) in Melbourne. It was one of those “learn about others and gain insight into one’s self” moments. One thing I picked up from one of the presenters from Hipster Whale (makers of Crossy Road) was that when they were developing a new game they tried to keep the proof of concept to a two-week period. Some time later, with this in mind, I set myself the “Five Games in Ten Weeks Challenge!” to see if I could really push out five different games in ten weeks.

Five Games in Ten Weeks
