Blender 2D Animation with Meshes


This is a follow-on from the workflow discussed in the previous post: Preparing 2D Art for Animation.

This is the end result of the process described:

Sprightly Spring Deer

I'm looking to see whether there are any advantages to using Blender as a 2D animation tool with meshes over Unity's Sprite/Spline-based animation system. The differences between them at the effort and usability/flexibility level are many and subtle, hence the investigation. The two biggest differences for me are:

  1. With the Blender option you are animating in Blender (which I like much more than animating in Unity). The downside is that you have to import the animations into Unity, and they are pretty hard to modify once they are there. That also means it's harder to adjust them to react to other actors, objects, and scene elements once you get them into the game.
  2. With the Blender approach it's a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use light effects on a 2D image within the game is pretty huge for making it look good and for effects or plot devices (think lightning on a dark and stormy night). You can get light effects on Sprites in Unity if you swap out the default shader for another shader and use the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh-and-material based solution (see further below for more on that).

Comparing Unity Sprites to Blender Meshes in Unity

The images directly below are taken from the Game screen in Unity. The one on the left is a Sprite-based Spline rendering, while the one on the right is the mesh-based FBX from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the mesh on the right – it's not huge and can be improved with some tweaking (the Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material's Metallic and Smoothness parameters also helped).

Sprite (left) and Mesh (right)
Night-time lighting affects the Blender mesh image but not the Sprite-based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.

As stated above, you can drop an image onto an object in Unity as a material, but it doesn't light as well and is prone to shadowing. Use the Cutout Rendering Mode rather than Transparent in Unity, or you get a shadow on the transparency. The image below shows a material with a Standard shader and an image on a Unity 2D plane mesh; there is a shaded square around the outside that marks the image boundary.

Transparency Shader

The image below is the same sprite using a material with a Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default sprite material. The bottom one is a Unity 2D plane with the custom material applied. Both tests look better than the Blender-imported model, can be layered, and react to lighting in-game.

So these are the alternatives to the process I'm describing below with Blender, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity's Animator window: Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a deer kick I had 88 different animation points – that's a LOT of stupid clicking down through an object hierarchy to add Properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right-click and add all properties for an object, then, if you are patient enough, remove the ones you don't use.

I do like the record feature that adds properties dynamically, but these problems, plus an interface I find finicky and too small, made me look at Blender.

Importing the Images to Blender and Setting up the Workspace

Moving on to working in Blender with images and meshes, the basic process is this:

  1. For every layer in the artwork of our animated character we exported a separate image file on a transparency. Each PNG file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image instead, but since all the parts might move I wanted to group them all under empties.
  2. A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
  3. The image is baked into the UV of the mesh.
  4. The components are then parented to an Armature with automatic weights.
  5. The meshes are weight painted to correct the deforms.
  6. Now it’s ready for animation.

The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the X axis so they are visible in the viewport from the front view.
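If you have a lot of layers, this import can be scripted. Here's a minimal bpy sketch of step 1 – the folder path is hypothetical, and it creates one image empty per PNG at the origin, rotated to face the front view:

    import math
    import os

    import bpy

    # Hypothetical folder holding the exported layer PNGs.
    LAYER_DIR = "/path/to/deer_layers"

    for fname in sorted(os.listdir(LAYER_DIR)):
        if not fname.lower().endswith(".png"):
            continue
        # The scripted equivalent of Add | Empty | Image.
        img = bpy.data.images.load(os.path.join(LAYER_DIR, fname))
        empty = bpy.data.objects.new(fname[:-4], None)
        empty.empty_display_type = 'IMAGE'
        empty.data = img
        # Same origin for every piece, rotated 90 degrees on X
        # so the image faces the front view.
        empty.location = (0.0, 0.0, 0.0)
        empty.rotation_euler = (math.radians(90), 0.0, 0.0)
        bpy.context.collection.objects.link(empty)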

All the Deer components Frankenstein’d together into a whole
The visibility of parts is toggled on and off so individual pieces can be worked on.

Making the Meshes

For each piece a mesh is made. I took two approaches here:

  1. Model a plane mesh as closely as I could to the shape of the sprite.
  2. Use a plain rectangular mesh and use weight painting to deform correctly.

For the modelling approach I started with an image and dragged a plane over it as a wireframe in Edit Mode. The origin of the plane was kept at (0, 0, 0) so all the pieces had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.

The foreleg Mesh

This method was a lot of work, manually placing each vertex on the border of the image. If a vertex is placed a little outside the image you get white space on the final product, and if you don't come all the way to the edge you lose some of the black line and smooth finish (the UV mapping is slightly out). I also found that if you have to warp the mesh too much, for a sharp angle or an awkward placing of the square tiling, you get minor defects along the line during animation.

Vertices placement

After about the fourth component I got a bit sick of manually moving vertices around, so I took the other approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas when adding the armature and weight painting. The rear leg below is just one big mesh, subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.
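For what it's worth, a piece like this is a single operator call in bpy. A sketch – the subdivision counts are guesses to tune per piece:

    import math

    import bpy

    # A subdivided rectangle: the image's transparency does the outline
    # work, and the subdivisions give the armature enough vertices to
    # deform without warping the line art.
    bpy.ops.mesh.primitive_grid_add(
        x_subdivisions=8,    # guesses -- tune per piece
        y_subdivisions=16,
        size=1.0,
        location=(0.0, 0.0, 0.0),
        rotation=(math.radians(90), 0.0, 0.0),
    )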

Venison

Here is a comparison of the rear leg mesh and the front leg mesh in Solid shading.

Solid Mesh Planes

The image below shows both meshes in Render mode (including the armature), and you really can't tell the difference between them.

Rendered Meshes

The whole mesh ended up looking like this:

Armature and Weight Painting

As you can see above, the armature was added and the mesh objects were parented to it with automatic weights. Because everything is a flat plane, and some planes are meant to overlap others (the closest front leg is in front of the torso and the far leg is behind it), parenting the armature with automatic weights meant that front, middle, and rear meshes would all get an equal measure of weight in parts. This all had to be manually painted.
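Scripted, that parenting step looks roughly like this – a sketch assuming the armature object is named "Armature":

    import bpy

    # Select every mesh piece, make the armature active, then parent
    # with automatic weights (Ctrl+P | With Automatic Weights).
    armature = bpy.data.objects["Armature"]  # assumed object name

    bpy.ops.object.select_all(action='DESELECT')
    for obj in bpy.data.objects:
        if obj.type == 'MESH':
            obj.select_set(True)
    armature.select_set(True)
    bpy.context.view_layer.objects.active = armature
    bpy.ops.object.parent_set(type='ARMATURE_AUTO')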

Here the torso was weighted across three bones, and only the rear bone was affecting the rump (any leg meshes had to be removed from these vertex groups).

Weights had to be carefully graded otherwise warping of the line would result:

The weight transition is too strong here.
It causes artifacts like this.
These are the gradient changes in weight needed to get a correctly deforming line.

The other problem was that random single vertices, or lone groups of them, would be weighted to a bone and not visible until you moved it in Pose Mode:
A few vertices on the chest were registered to the root bone. These all have to be manually removed.
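Strays like these can also be hunted down with a script instead of by eye. A rough sketch, assuming hypothetical object and vertex group names and a guessed weight threshold:

    import bpy

    obj = bpy.data.objects["Chest"]     # hypothetical mesh piece
    group = obj.vertex_groups["root"]   # the offending bone's group

    # Run in Object Mode so the vertex data is up to date.
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='OBJECT')

    # Collect vertices that sit in the group with only a trace weight.
    stray = []
    for v in obj.data.vertices:
        for g in v.groups:
            if g.group == group.index and g.weight < 0.05:  # threshold is a guess
                stray.append(v.index)

    # Drop them from the group so the bone no longer moves them.
    group.remove(stray)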

The other interesting anomaly with the large rectangular plane meshes was that the weighting would sometimes cause improper warping of the mesh, bending it around itself in places, which showed up as black squares.

The foot vertex group covers all these vertices.
You cannot tell this in Edit Mode when you select it with “show weights”.
During transform in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It’s because the shin bone weight doesn’t go all the way to the edge.
It looks right in edit mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).

These are pretty quick things to fix really, but it took a while to work out exactly what was happening. It was still faster than individually making all the mesh components by hand to fit the image.
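The shin fix above amounts to assigning the missing border vertices back into the group at full weight. In script form – a sketch assuming hypothetical names, with the offending vertices selected in Edit Mode first:

    import bpy

    obj = bpy.data.objects["RearLeg"]   # hypothetical mesh piece
    shin = obj.vertex_groups["shin"]    # the bone's vertex group

    # Vertex selection state is only readable in Object Mode.
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='OBJECT')
    selected = [v.index for v in obj.data.vertices if v.select]

    # Push the group all the way to the border at full weight.
    shin.add(selected, 1.0, 'REPLACE')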

A better workflow would probably be to make reduced, simpler meshes that fit closer to the image but don't require slavishly manhandling the vertices around the borders.

The Shading

UV mapping is totally easy here, but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:

The Transparent Shader in Blender
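A setup along those lines can be rebuilt in bpy too. A sketch, with a hypothetical image path: the PNG's colour feeds a Principled BSDF, and its alpha mixes between a Transparent BSDF and the image shader so the empty parts of the plane disappear:

    import bpy

    mat = bpy.data.materials.new("DeerPiece")
    mat.use_nodes = True
    mat.blend_method = 'CLIP'  # cut the transparency out instead of blending it

    nodes = mat.node_tree.nodes
    links = mat.node_tree.links
    nodes.clear()

    tex = nodes.new('ShaderNodeTexImage')
    tex.image = bpy.data.images.load("/path/to/foreleg.png")  # hypothetical path

    principled = nodes.new('ShaderNodeBsdfPrincipled')
    transparent = nodes.new('ShaderNodeBsdfTransparent')
    mix = nodes.new('ShaderNodeMixShader')
    output = nodes.new('ShaderNodeOutputMaterial')

    links.new(tex.outputs['Color'], principled.inputs['Base Color'])
    links.new(tex.outputs['Alpha'], mix.inputs['Fac'])     # alpha picks the shader
    links.new(transparent.outputs['BSDF'], mix.inputs[1])  # Fac = 0 -> transparent
    links.new(principled.outputs['BSDF'], mix.inputs[2])   # Fac = 1 -> the image
    links.new(mix.outputs['Shader'], output.inputs['Surface'])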

That's about it for getting everything set up in Blender. For more info on the animation steps and getting it into Unity, see my other post: http://localhost/2021/11/16/exporting-multiple-animations-from-blender-to-unity/

