Unity: Networking Does Not Work in the Editor

This is an embarrassing post. Some days are there simply to remind you that you don’t really know what you are doing. I spent two days trying to track down why my networking scripts were not working when I ran them in the editor. Turns out that the editor makes its own internal network stack when running a project, which does not connect to your machine’s normal network interfaces.

Top Tip! Build your project and run it natively if you are using TCP/IP connections! Don’t try to run it in the editor (not even for quick checks or small, simple projects): it simply won’t work.
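For context, here is a minimal sketch of the kind of connection test I was running. The host, port, and message are placeholder values, not anything from the actual project; point them at whatever server you are testing against. In a native build this kind of thing worked for me straight away.

using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Minimal TCP connection test. Host and port are placeholders.
public class ConnectionTest : MonoBehaviour
{
    [SerializeField] private string host = "192.168.1.10"; // hypothetical server address
    [SerializeField] private int port = 9000;              // hypothetical port

    private void Start()
    {
        using (var client = new TcpClient())
        {
            client.Connect(host, port); // blocking connect, fine for a quick test
            NetworkStream stream = client.GetStream();
            byte[] payload = Encoding.UTF8.GetBytes("hello from Unity");
            stream.Write(payload, 0, payload.Length);
        }
        Debug.Log("Test message sent");
    }
}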

The really nice thing is that after I got it all working I built a touch-screen controller for Android phones that can be used as input in a Unity game.

The code is in the repo: https://github.com/zuluonezero/AndroidTouchController

Unity: High CPU on Small Projects

Quick Tip: I have been working on a TCP/IP networking project using a client/server architecture. The client (and the server, for that matter) are both relatively small code bases, and the UI and object count in the scene are really low. I had been struggling with CPU load in the project and feverishly trying to work out why my code was baking the CPU (and GPU!). I’d assumed it was something stupid I had done in a loop with the networking structures I was not that familiar with. It’s really not easy to concentrate on new code when your laptop fan is literally screaming at you! I’d hit Play and the CPU would spike almost immediately. So I would switch to my local terminal and scrape through the open ports and network connections looking for a smoking gun. Turns out it was the default frame rate in the Editor trying to deliver the fastest graphics performance it could on my PC – with such a low object count and very simple graphics being asked for, it was running like a Formula One race car when all I wanted was an old jalopy.

This is my CPU on Speed

Solution: Set Target Frame Rate!

By default, Unity will run your project as fast as possible: frames are rendered as quickly as they can be (limited only by your display device’s refresh rate).

There are two ways to control frame rate:

Application.targetFrameRate – controls the frame rate by specifying the number of frames your game tries to render per second. (I wrote a script to use this – see below).

QualitySettings.vSyncCount – specifies the number of screen refreshes to allow between frames (look for it under Project Settings | Quality). For a 60Hz display, setting vSyncCount = 2 will cause Unity to render at 30fps in sync with the display.

Note that mobile platforms ignore QualitySettings.vSyncCount and use Application.targetFrameRate to control the frame rate.

The default value of Application.targetFrameRate is -1 (the platform’s default target frame rate).
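As a minimal sketch, the two options look like this in code. One gotcha worth knowing: the Unity docs note that targetFrameRate is ignored while vSyncCount is non-zero, so set vSyncCount to 0 if you want an explicit frame rate.

using UnityEngine;

// Sketch of the two ways to cap the frame rate - use one or the other.
public class FrameRateCap : MonoBehaviour
{
    private void Awake()
    {
        // Option 1: sync to every second screen refresh (30fps on a 60Hz display).
        QualitySettings.vSyncCount = 2;

        // Option 2: disable vsync and request an explicit frame rate instead.
        //QualitySettings.vSyncCount = 0;
        //Application.targetFrameRate = 30;
    }
}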

I set mine to 20 using the script below, and when I hit Play got this result:

This is my CPU chilling out
using UnityEngine;

public class SetFrameRate : MonoBehaviour
{
    [SerializeField] // Just so you can check it in the Inspector
    private int FrameRate = 20; // 20 is really low but got my CPU down to < 10% - 30 is the target for mobile and was < 20% CPU usage
    //private int FrameRate = -1; // reset to default

    private void Awake()
    {
        Application.targetFrameRate = FrameRate;
    }
}

I attached it to my Camera object.

Set Frame Rate

One interesting behavior of setting this via script in Unity 2020.3.26f1: once the script was attached to the Camera object and Play was initiated for the first time, the frame rate must have been set somewhere internally in the Engine. When I removed the script (for testing), the frame rate did not automatically reset to -1. I had to re-attach the script and update it to set the frame rate back to the default. I searched the settings in the Inspector and Preferences and couldn’t find a visible reference to it anywhere. So be careful if you are going to put this in a production build: reset it before releasing, or you might end up shipping with a lower frame rate than the platform could achieve by default.
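Since I couldn’t find the value surfaced anywhere in the UI, the simplest check I know of is to log it from a throwaway script (and reset it from the same place if needed):

using UnityEngine;

// Quick way to see what the engine currently thinks the target frame rate is.
public class CheckFrameRate : MonoBehaviour
{
    private void Start()
    {
        Debug.Log("Application.targetFrameRate = " + Application.targetFrameRate);

        // Uncomment to reset to the platform default before a release build.
        //Application.targetFrameRate = -1;
    }
}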

Enough procrastination – back to sockets, ports and buffers.

Blender 2D Animation with Meshes

This is a follow on from the workflow discussed in the previous post: Preparing 2D Art for Animation.

This is the end result of the process described:

Sprightly Spring Deer

I’m looking to see if there are any advantages to using Blender as a 2D animation tool (animating meshes) over Unity’s sprite-based spline animation system. The differences between them in effort, usability and flexibility are many and subtle, hence the investigation. The two biggest differences for me:

1. With the Blender option you are animating in Blender (which I like much more than animating in Unity). The downside is that you have to import the animations into Unity, and they are pretty hard to modify once they are there. That also means it’s harder to adjust them to react to other actors, objects and scene elements once they are in the game.

2. With the Blender approach you get a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use light effects on a 2D image within the game is pretty huge for making it look pretty and for effects or plot devices (think lightning on a dark and stormy night). You can get light effects on Sprites in Unity if you swap out the default shader for another shader and use the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh and material based approach (see further below for more on that).

Comparing Unity Sprites to Blender Meshes in Unity

The images directly below are taken from the Game screen in Unity. The one on the left is a sprite-based spline rendering, while the one on the right is the mesh-based fbx from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the mesh on the right – it’s not huge and can be improved with some tweaking (Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material’s Metallic and Smoothness parameters also helped).

Sprite (left) and Mesh (right)
Night Time lighting affects the Blender mesh image but not the Sprite based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.

As stated above, you can drop an image onto an object in Unity as a material, but it doesn’t light as well and is prone to shadowing. Use the Cutout rendering mode, not Transparent, or you get a shadow on the transparency. The image below shows a material with the Standard shader and an image on a Unity 2D plane mesh; there is a shaded square around the outside that marks the image boundary.

Transparency Shader

The image below is the same sprite using a material with the Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default sprite material. The bottom one is a Unity 2D plane with the custom material applied. Both tests look better than the quality of the Blender-imported model, could be layered, and react to lighting in game.

So these are the alternatives to the process I’m describing below with Blender, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity’s Animation window. Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a deer kick I had 88 different animation points – that’s a LOT of stupid clicking down through an object hierarchy to add properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right-click and add all properties for an object, and then, if you are patient enough, remove the ones you don’t use.

I do like the record feature that adds properties dynamically, but these problems, plus the fact that I find the interface finicky and too small, made me look at Blender.

Importing the Images to Blender and Setting up the Workspace

Moving on to working in Blender with images and meshes, the basic process is this:

  1. For every layer in the artwork of our animated character we exported a separate image file on a transparency. Each png file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image, but since all the parts might move I wanted to group them all under empties.
  2. A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
  3. The image is baked into the UV of the mesh.
  4. The components are then parented to an Armature with automatic weights.
  5. The meshes are weight painted to correct the deforms.
  6. Now it’s ready for animation.

The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the ‘x’ Axis so they are visible in the viewport from the “front” view.

All the Deer components Frankenstein’d together into a whole
The visibility of parts are toggled on and off so individual pieces can be worked on.

Making the Meshes

For each piece a mesh is made. I took two approaches here: 1. Model a plane mesh as closely as I could to the shape of the sprite. 2. Use a plain rectangular mesh and rely on weight painting to deform correctly.

To start with the modelling approach: I dragged a plane over the image in Edit Mode as a wireframe. The origin of the plane was kept at (0, 0, 0) so all the pieces that were made had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.

The foreleg Mesh

This method was a lot of work, manually placing each vertex on the border of the image boundary. If a vertex is placed a little bit outside the image you get white space on the final product, and if you don’t come all the way to the edge you lose some of the black line and the smooth finish (the UV mapping ends up slightly out). Plus I found that if you have to warp the mesh too much for a sharp angle or an awkward placement of the square tiling, you get minor defects along the line during animation.

Vertices placement

After about the fourth component I got a bit sick of manually moving vertices around. So I took the other approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas when adding the armature and weight painting. The rear leg below is just one big mesh, subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.

Venison

In Solid shading here is a comparison of the rear leg mesh and the front leg mesh.

Solid Mesh Planes

The image below shows both meshes in Render mode (including the armature), and you really can’t tell the difference between them.

Rendered Meshes

The whole mesh ended up looking like this:

Armature and Weight Painting

As you can see above, the armature was added and the mesh objects were parented to it with automatic weights. Because everything is a flat plane, and some planes are meant to overlap others (the closest front leg is in front of the torso and the far leg is behind it), parenting the armature with automatic weights meant that front, middle and rear meshes would all get an equal measure of weight in parts. This all had to be manually painted.

Here the torso was weighted across three bones, with only the rear bone affecting the rump (the leg meshes had to be removed from these vertex groups).

Weights had to be carefully graded, otherwise warping of the line would result:

The weight transition is too strong here.
It causes artifacts like this.
These are the gradient changes in weight needed to get a correctly deforming line.

The other problem was that random single vertices, or lone groups of vertices, would be weighted to a bone and not visible until you moved it in Pose Mode:
A few vertices on the chest were registered to the root bone. These all have to be manually removed.

The other interesting anomaly with the large rectangular plane meshes was that the weights would sometimes cause improper warping of the mesh, bending it around itself in places, which showed up as black squares.

The foot vertex group covers all these vertices.
You cannot tell this in Edit Mode when you select it with “show weights”.
During transform in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It’s because the shin bone weight doesn’t go all the way to the edge.
It looks right in edit mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).

These are pretty quick things to fix really, but it took a while to work out exactly what was happening. It was still faster than individually making all the mesh components by hand to fit the image.

Probably a better workflow would be to make reduced, simpler meshes that fit closer to the image without slavishly manhandling the vertices around the borders.

The Shading

UV mapping is totally easy here, but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:

The Transparent Shader in Blender

That’s about it for getting everything set up in Blender. For more info on the animation steps and getting the result into Unity, see my other post: https://www.zuluonezero.net/2021/11/16/exporting-multiple-animations-from-blender-to-unity/

Preparing 2D Art for Animation

I’ve been doing some work on the 2D side of things in preparation for another game.

This has been the general workflow.

1. Make the assets in Clip Studio.

2. Pack the sprites with Free-Tex-Packer

3. Import the art into Blender, make a mesh for each sprite and UV map it.

4. Add the Armature bones.

5. Weight Paint

6. Animate.

7. Export from Blender as an *.fbx with the animations baked.

8. Import into Unity

9. Add new Materials and import the UV images into Unity.

10. Add the *.fbx imported asset into a scene.

11. Add an Animator Component and drag the animations from the prefab into it.

12. Set up Triggers and connections for the animations.

It’s a lot of work, especially if you make a custom mesh for each piece of art. I did all this because I really like animating in Blender (especially now that the Pose Library is functional and part of the Asset Browser). But to tell the truth, I think I got better results using the spline system in Unity with much less work. There are trade-offs, and I’ll go through them below after more exposition on the workflow.

In this post I’ll go through the asset creation process in Clip Studio.

Making the Asset

The 2D game has a bunch of cute animals so I dug deep into the Disney Sketchbook by Ken Shue and pulled out Bambi for inspiration.

An early Disney sketch

Using this as a rough guide I drafted a few basic shapes for a “Deer” character which looked like this:

Rough Sketch for the 2D Asset

I started using Clip Studio last year in place of Gimp. I’ve tried all sorts of painting programs and would choose Gimp over most of them (I will not spring for a paid version of Photoshop – it’s extortion!), but Clip Studio won me over with its brushes. It’s not expensive by comparison, and I really like how it fits the specific things I want out of an art program. I’ll often go back to Gimp for projects that require a lot of filters and image manipulation, but for straight drawing on the PC, Clip Studio is a good fit for me. I like how you can make custom tools that mimic their real-life counterparts, like a pencil or brush, and I find this program better at it than most (though Adobe Sketchbook runs a close second).

To start with I create a set of layers for the Inking of the artwork. One for each moving element in the final asset.

There is a pretty simple formula for this where each limb or piece gets a layer. But you have to have a general idea of what you are going to need in the final asset and what animation is required: there is no point making a separate component if it’s not going to move or be seen in the final product. The trouble is that a lot of this work is iterative, and often you find you have to go back and change something when it doesn’t look right. You need an awareness of where pieces overlap, which lines are going to be warped by the armature bending, and where a line needs to be extended behind a piece that might move and reveal where it ends.

The Inking Layers
This is how the inking layers sit on top of each other that shows where lines overlap or extend.

It’s really easy to see on the body and legs but even here on the pieces surrounding the head the lines that make up the ears and hair and neck all have to move independently but still look connected.

Once I’m done with the inking stage I add more layers for color. At this point the whole file gets saved as an export copy, and the layers are merged into one for each piece again and numbered in the order in which they will sit on the animation cell. I keep the older copy with the separate layers and all the drafts so I can go back to it if I have to change something.
This is the whole asset complete and ready for export. Each layer is exported individually as a *.png. Each file is 1024 x 1024 pixels at 600 dpi with a transparent background.
The *.png files are imported into the texture packer to minimize the material size in the final project. Each of these elements gets UV mapped to a mesh in Blender, but more on that in the next post.

Start to finish this took a couple of days of elapsed time, as there is a lot of noodling about with formats, designs and what-not.

Next up I’ll go into the Blender workflow and preparing the art for animation with complex and simple meshes.

Rust on RHEL WSL2

I’ve been doing a lot of back-end investigation the past few months, trying to build something to support future projects. Most of it is just random poking about with new technologies and different ways of doing “stuff”, but sometimes while doing this you find something that is really cool and resonates with the way your brain works. For me this month it was Rust and RHEL on WSL.

I got here by a roundabout route of looking at Go programming and Docker integration for back-end processing of a game: a way to offload non-game-critical systems to other processors, like a high-score system that keeps player profiles and scores online and can be called from within the game when needed. I know there are heaps of good services out there that do this, but I like poking into stuff and having that level of control.

The other thing is, sometimes stuff just makes sense. I’ve been a long-time Linux user and champion but have always been locked into a Wintel desktop in the studio due to the support needs of my audio kit. (I have a Line 6 KB37, which integrates a MIDI keyboard and guitar effects pedals into one unit – it’s freaking awesome and I’m terrified that one day it will break and be out of support.) I’d run Linux in a virtual machine and used Cygwin and my favourite MobaXterm as workarounds, but while poking around the Docker community I came across WSL, the Windows Subsystem for Linux.

WSL

The Windows Subsystem for Linux is a compatibility layer that runs Linux binary executables natively on Windows. WSL 2 uses a real Linux kernel (though there are some storage issues; if you need more info, read the docs). The default distribution is Ubuntu, which I’m OK with (one of my favourite Linux distributions is Ubuntu Studio), but I’ve been a Red Hat admin for a long time and am more comfortable there, and there was no supported RHEL or CentOS platform. Then I found the RHWSL project by a Japanese developer called Yosuke Sano, and I was intrigued and immediately hooked.

Basically this is how I set up WSL on my Windows 10 Workstation.

C:\Users\zulu>wsl --list --online
The following is a list of valid distributions that can be installed.
Install using 'wsl --install -d <Distro>'.

NAME            FRIENDLY NAME
Ubuntu          Ubuntu
Debian          Debian GNU/Linux
kali-linux      Kali Linux Rolling
openSUSE-42     openSUSE Leap 42
SLES-12         SUSE Linux Enterprise Server v12
Ubuntu-16.04    Ubuntu 16.04 LTS
Ubuntu-18.04    Ubuntu 18.04 LTS
Ubuntu-20.04    Ubuntu 20.04 LTS

C:\Users\zulu>wsl --list --all
Windows Subsystem for Linux Distributions:
Ubuntu (Default)

C:\Users\zulu>wsl --set-default-version 2
For information on key differences with WSL 2 please visit https://aka.ms/wsl2
The operation completed successfully.

C:\Users\zulu>WSL --HELP
Invalid command line option: --HELP
Copyright (c) Microsoft Corporation. All rights reserved.
Usage: wsl.exe [Argument] [Options...] [CommandLine]

I downloaded the RHWSL package from GitHub, extracted it, and ran the executable to register the package with WSL and install the root file system. That was it.

This is how I run the RHWSL distribution (you can also make it the default with wsl --set-default RHWSL):

d:\RedHat\RHWSL>wsl -d RHWSL

Here are a bunch of useful links if you want to explore more:

https://docs.docker.com/desktop/windows/wsl/
https://docs.microsoft.com/en-us/windows/wsl/install
https://docs.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers
https://dev.to/bowmanjd/using-podman-on-windows-subsystem-for-linux-wsl-58ji

https://github.com/yosukes-dev/RHWSL
https://github.com/yosukes-dev/RHWSL/releases

If you want more info on WSL yosukes-dev has an “Awesome” resource list.

Rust

I started looking at Rust as part of a wider investigation into “modern” programming languages. I spent a few weeks looking at Go (golang), and as much as it is a great multi-purpose language and super easy to start using, I was a little gobsmacked at how large the binaries were (they are statically compiled). Being someone who is always on tiny machines, I kept getting a niggling feeling that a whole system of Go programs would be a big chunk of disk on a small device, doing work that might be easier done slower with a simple shell script (I exaggerate!). Anyway, in a lot of the material I read, Go and Rust were comparable. I will come back to Go, as the community and contributions seemed most excellent.

Plus Rust made sense to me in places where Go didn’t. I really don’t have a logical excuse here or a well-reasoned argument: sometimes you just like a language because it “feels” right. I had the same thing with Ruby. Anyway, this is how I got started with Rust on RHWSL…

https://www.rust-lang.org/learn/get-started
If you’re a Windows Subsystem for Linux user run the following in your terminal, then follow the on-screen instructions to install Rust.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

I started following the hello world tutorial to set up the system, and it wasn’t all plain sailing. For one, I probably should have done the install as a normal user, but after mucking around with environment variables and permissions on the root user’s directories it was just easier to do the work as root (I know, rolling over in my grave). And once I got to the compile stage there were a few basic build tools that RHWSL needed. This is how it went:

https://doc.rust-lang.org/cargo/getting-started/first-steps.html

[root@Venom RHWSL]# mkdir rustProgramming
[root@Venom RHWSL]# cd rustProgramming/
[root@Venom rustProgramming]# cargo new hello_world
Created binary (application) `hello_world` package
[root@Venom rustProgramming]#
[root@Venom rustProgramming]# ls -ltr
total 0
drwxrwxrwx 1 root root 4096 Feb 8 18:27 hello_world
[root@Venom rustProgramming]#
[root@Venom rustProgramming]# cd hello_world/
[root@Venom hello_world]# ls -ltr
total 0
-rwxrwxrwx 1 root root 180 Feb 8 18:27 Cargo.toml
drwxrwxrwx 1 root root 4096 Feb 8 18:27 src

[root@Venom hello_world]# cat Cargo.toml
[package]
name = "hello_world"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

[root@Venom hello_world]# cat src/main.rs
fn main() {
    println!("Hello, world!");
}

[root@Venom hello_world]# cargo build
Compiling hello_world v0.1.0 (/mnt/d/RedHat/RHWSL/rustProgramming/hello_world)
error: linker `cc` not found
|
= note: No such file or directory (os error 2)
error: could not compile `hello_world` due to previous error

Bingo – first problem – no compiler. I installed make and gcc. I’m not really sure I needed make, but figured it would be a nice-to-have anyway.

[root@Venom hello_world]# which make
/usr/bin/which: no make in (/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath:/mnt/c/Program Files (x86)/ ……….lots more here

[root@Venom hello_world]# which cc
/usr/bin/which: no cc in (/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath:/mnt/c/Program Files (x86)/ ……….lots more here

[root@Venom hello_world]# yum install make
…Dependencies resolved.
Package Architecture Version Repository Size
Installing:
make x86_64 1:4.2.1-10.el8 ubi-8-baseos 498 k
…. more in here you don’t need to see …
Installed:
make-1:4.2.1-10.el8.x86_64
Complete!

[root@Venom hello_world]# which make
/usr/bin/make

[root@Venom hello_world]# yum install gcc
…Dependencies resolved
…. more in here you don’t need to see …
Installing dependencies:
binutils x86_64 2.30-108.el8_5.1 ubi-8-baseos 5.8 M
cpp x86_64 8.5.0-4.el8_5 ubi-8-appstream 10 M
glibc-devel x86_64 2.28-164.el8 ubi-8-baseos 1.0 M
glibc-headers x86_64 2.28-164.el8 ubi-8-baseos 480 k
…Transaction Summary
Install 14 Packages
Total download size: 51 M
Installed size: 123 M
Installed:
Complete!

[root@Venom hello_world]# which cc
/usr/bin/cc

[root@Venom hello_world]# cargo build
Compiling hello_world v0.1.0 (/mnt/d/RedHat/RHWSL/rustProgramming/hello_world)
Finished dev [unoptimized + debuginfo] target(s) in 2.09s

[root@Venom hello_world]# ls -ltr
total 0
-rwxrwxrwx 1 root root 180 Feb 8 18:27 Cargo.toml
drwxrwxrwx 1 root root 4096 Feb 8 18:27 src
-rwxrwxrwx 1 root root 155 Feb 8 18:28 Cargo.lock
drwxrwxrwx 1 root root 4096 Feb 8 18:28 target

You can run the executable like this:

[root@Venom hello_world]# ./target/debug/hello_world
Hello, world!

Or just run from command:

[root@Venom hello_world]# cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.06s
Running target/debug/hello_world
Hello, world!

Yay !

Exporting Multiple Animations from Blender to Unity

This is one of those workflows that is always a bit fiddly to get right, so I’ve documented it here in case I forget! One of the downsides to being a solo developer is that your skillset is always being stretched by the available time, so you can end up getting proficient in one aspect of game building and then, by the time you get back to that phase, you’ve forgotten everything you learned and all the tricks of efficiency and process. Also, in case someone else needs it.

This is what we are aiming for in Unity. An imported mesh with multiple animations being called independently.

Blender Workflow for Saving the Animations

Start with a new project. Select everything (the default cube and lamp): X -> Delete.

In this case I’ve imported an existing fbx of a hand, with a supporting armature, ready for animation. I won’t go over the modelling or rigging procedure; there is plenty of help for that out there, but if you need it I would recommend the Riven Phoenix courses because they are so dense (these tutorials are no quick-start or tricks videos, but deep, deep dives into the process, the reasons behind it, and how stuff works in Blender at a very technical level).

This is how I lay out Blender for animation: a dual-screen front and right view with the Timeline below

Get your animation window set up and make sure the timeline is available at the bottom.

Making a Pose Library

In the Outliner select the Armature and make a Pose Library.
We can use this to set a few basic poses to make the animation process run a little easier.
The poses will be major key frames that we can interpolate between.

It’s not the best workflow, but the tech preview for upcoming Blender versions includes an enhanced workflow for the animation process which looks really exciting – google it.

Make a Pose Library

Add the default pose as the first item.
Go to Pose Mode. Get the model into your default position and save this pose. (Important: this is the pose the model will be exported in by default, so try to make it your idle or standing pose.)

Save several other poses (make sure you select all the bones you want the pose to affect – usually this is all the bones).
You can overwrite poses if you get one wrong.

Also, when a pose is added and a pose marker is created, the whole keying set is used to determine which bones to key. But if any bones are selected, only keyframes for those bones are added; otherwise, all bones in the keying set are keyed (this is why I usually have all the bones selected).

I’ve made several poses and saved them

It’s a good idea to set and select the poses a few times each to make sure you got them right. I’ve found that sometimes it’s a bit glitchy, or I do something a little bit wrong and it doesn’t save properly (actually it’s probably not glitchy, it’s probably just me).

That book icon with the question mark is useful when you have all your poses completed. Pose libraries are saved to Actions. They are not generally used as actions, but can be converted to and from them. If you use this icon to “sanitize” the pose library, it drops all the poses down to an Action with one pose per frame. So you can go into the NLA Editor window, select this track, and sweep/scrub through them. Maybe this is useful as a clip in Unity if you want to split it up using the timing editor and make custom animations there (I’ve never tried it).

Making the Animations

Go to the Dope Sheet and switch to the Action Editor view.

Action Editor


Make the animation (i.e. start on the first frame, assign the pose from the library, and press Shift + I to save rotation and location; go to the last frame, assign the next pose, Shift + I and save again).

In the Timeline make sure you are on the beginning frame. Set the pose you want to move from (first keyframe) and save the required parameters.

Shift + I
Insert Location and Rotation
(make sure the Armature is Selected)

Start with the first pose
The Dope Sheet

Move to the next frame at a suitable scale and change the pose to your ending pose in the editor. Save the Location and Rotation parameters (if that’s all that’s changed).

Add the second pose
Saved Pose in the Dope Sheet

Pushing the Animation down the Action Stack

Once you are done hit the “Push Down” button. This is the magic button.

Magic Push Down Button

Next, move over to the Nonlinear Animation window.

The NLA Window

Your animations get made as Actions in the Nonlinear Animation window: NlaTrack, NlaTrack.001, etc.

In the NLA Editor you can click the star next to an NLA Track (rename them to make things friendlier) to scrub through that track. Make sure you’ve got the right animation under the right name, etc.

After you hit Push Down, each finished animation appears as an NLA Track in the NLA Editor

I made a few more animations and hey presto: each one of those NlaTracks is an animation we can use in Unity. The PoseLib track is also marked there with orange lines, one for each pose on a frame, which is a good reference track if you need it.

The Animations Stacked up in the NLA ready for Export with the *.fbx

Export from Blender

These are the settings I use to export. It’s safer to manually select only the Armature and the Mesh here.

It’s useful to set Forward to -Z Forward for Unity.

Blender Export Settings

Import Into Unity

This is what it looks like when I import the .fbx into Unity.

The Animation Tab of the Asset (on import)

The animations come out duplicated, but you only need one set. Work out which ones you want and delete the others using the minus button when you import. This bit can be a bit fiddly, and sometimes I’ve had to go through the export/import process a couple of times to get it to work. Sometimes what works is to drag and drop all your NLA Tracks into one track in the NLA Editor and select it with the star before exporting. Sometimes it works, sometimes not. I’m not sure why.

After that I drag the model into the scene and add an Animator Controller. Then you can just drag the animations from the imported model into the Animator window like below and set up transitions as you see fit. Below I’ve made them all come from Any State and added some Triggers so I can play with them in the window for testing.
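For that testing, a little driver script along these lines fires the triggers from the keyboard. This is just a sketch: the trigger names here are hypothetical and have to match whatever parameters you created in the Animator window.

using UnityEngine;

[RequireComponent(typeof(Animator))]
public class AnimationTester : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // Hypothetical trigger names - match them to your Animator parameters.
        if (Input.GetKeyDown(KeyCode.Alpha1)) animator.SetTrigger("Fist");
        if (Input.GetKeyDown(KeyCode.Alpha2)) animator.SetTrigger("Point");
        if (Input.GetKeyDown(KeyCode.Alpha3)) animator.SetTrigger("Wave");
    }
}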

You can see the result of that testing in the .gif at the top of the article. (Apologies for the quality of that .gif: it seems to have picked up some ghosting artifacts around the fingers. I promise it looks awesome on the screen.)

The Animator Controller

So there are a few limitations to this workflow that need to be mentioned. Some people like to save their whole .blend file into their Unity Assets so they can make updates on the fly while modelling; that won’t work with this setup. The animations need to be saved down to a *.fbx file so that Unity can find them when the asset is imported. So if you like having access to your .blend and want to use animations like this, you need to export the *.fbx and import it again and keep both the .blend and the .fbx in your asset folders, which can be a bit confusing and messy, and makes for a bigger project.

BYE

Blender Curved Cuts

For the latest game in development, called The Gap, I’ve been doing lots of work in Blender, learning new tricks and speeding up my workflow. Making a curved cut is something I had not done a lot of, and in the past I’ve struggled with the Knife tool and Knife Project. I’ve been using this method to make curved windows and door arches.

Here is a neat way to make a curved cut in an object with considerable control and only a little bit of mucking about.

Basically we are intersecting one mesh with another and bisecting the meshes with a knife cut to make a simple train tunnel shape.

We are going to start with the default cube which has been scaled up to ten and subdivided 7 times.

A subdivided cube

Next we add a Nurbs Curve as a second object. We will manipulate this into the shape we want and later convert it to a mesh to intersect with our cube. You could use another object type; a Bezier Curve or a Path would be very flexible. I chose a Nurbs Curve because it gave a reasonably simple shape.

Add a Nurbs Curve

In Edit Mode, manipulate the curve into the shape you want, subdividing the points if required. I find that enabling snapping is a good way to keep everything the same on both sides of the curve to start with, then turning it off for fine tuning.

Snapping Enabled and Handles

Once you have the shape you want, clean up the curve (it’s best to have everything on the same plane).

The Curve is Ready for Converting to a Mesh

Once you are happy with it, convert it to a mesh.

Convert to Mesh

The new mesh is then extended through our cube!

Extended Intersecting Mesh

Both meshes are joined (Ctrl + J). Select last the object whose name you want to keep.

Joined Meshes into one Object

Switch to Face Select mode, and then the mesh is intersected and cut with the knife:

Then you can simply delete the faces from the Nurbs Curve Mesh and you are left with the nice curved cut.

Select Faces on one side of the cut and Delete
Delete the Faces on the other side of the Cut

Now you can select all the faces inside your nice curve and extrude them into the cube to make a train tunnel 🙂

Face Select
Extrude into the Cube
The Completed Curved Cut

A few final notes. As I mentioned before, you can do this procedure with any mesh to make cuts and hollows with intersections. It’s a bit like using the Boolean modifier. It’s not great in every situation, as it can make a mess of your topology if you are not neat with your cuts, and you can be left with some very tiny faces or triangles. But if you line things up nicely and merge any close-lying vertices, it’s a pretty simple and handy tool in the kit.

Blender: Simple Dishes Plates and Cups

One game currently in the pipeline is called The Gap. The game is about the gap between the private internal world of personal histories and how it feeds and colours perceptions of the present. Esoteric, I know. But basically it’s an exploring game in an urban setting with lots of indoor areas that need indoor props. So I’ve been doing lots of modelling and researching new methods to make life easier when using Blender (which I love).

Part of this research is reading lots and lots of books, and one of them was Blender for Animation and Film-Based Production by Michelangelo Manrique. The book is great and I’d recommend it to anyone using Blender. One method I had not seen in any other tutorials or books was this one for making complex cone- or cylinder-based meshes using Bezier circles and curves. Instead of starting with the basic forms and working complex changes on their geometry, this method is a quick and simple way to make complex (and simple) shapes really fast.

It starts like this: Open a new Scene and add a Bezier Curve and a Bezier Circle:

This is the right view (Numpad 3) with the Bezier Curve moved over a bit on the X axis and rotated 90 degrees on the Z (R, −90, Enter).

A Bezier Circle (in orange) and the black line next to it is a Bezier Curve.

Select the Bezier Circle and click on the little green curve object button in the Properties tab on the right.

Open the Geometry and Bevel sections and click on the Object button. Select your Bezier Curve as the Object.

This sets your Bezier Curve as the defining object for the edges and volume of the now filled Bezier Circle.

Set the Bezier Curve as the Object

Now Select the Bezier Curve and move to Edit mode. Subdivide it as much as you want and play with the shape and angle of the curve to get your Bezier Circle to form all sorts of complex shapes.

Once you are happy with what you have, go back to Object Mode, select the Bezier Circle, and go to Object -> Convert to Mesh (or Alt+C, Mesh from Curve/Meta/Surf/Text).

Here are some examples I did for this post:

Plate
Platter with Bevels
Chalice!

I think the Chalice up there shows the power of this method for creating complex small objects very quickly and the relationship between the Bezier Curve and the resultant mesh.

The Gap will not be released for some time yet – years – it’s still early days but I’m really enjoying the process and it’s nice to be doing something personal and meaningful with game development.

One Hundred Arms

Still going with the Anatomy Studies.
I finished a hundred arms this week.
Next I’m moving on to figure studies and poses.

A Hundred Arms

One Hundred Hands

The last couple of weeks I’ve been drawing hands. I always feel like I need to draw better and never make the time to do it. I have always liked drawing hands, but my output is often pretty sketchy (pun intended). I have a book by George Bridgman called The Book of a Hundred Hands, one of the few art books I have a physical copy of, and I’ve tried to draw them all at one point or another. The thing is, even though there are more than a hundred hand drawings in the book, they are often not clear in their detail, and when you are copying them you get to the point where the information you want is missing.

The challenge I gave myself was to draw a hundred hands myself. I often see people on the internet showing off their hundred heads or hundred expressions and so on, and thought it was a good thing to do. What’s funny is that in doing it, I reckon those people on the internet must have drawn way more than a hundred of whatever they were drawing to look so good. After drawing a hundred hands (and you can see this from the images above), some of them are really not looking that great. A different angle or just a bad day at the easel is all it takes to throw the drawing out.

But that said, it was really good to go over all the anatomy and learn to see again the angles and shapes of the hand. After doing it I really can’t say that I’m that much better at drawing hands, or drawing in general, but I’m definitely better at looking at hands and seeing all the angles and components.

I often struggle with the disconnect between drawing from life and drawing the templates or symbols for things that live in my head. I find drawing from life pretty easy. But drawing from memory, where you are using symbols and learned shapes, is very different, and that’s one of the things I wanted to get better at. So often drawing turns out to be a mixture of the two approaches, and one definitely helps the other.

Another thing I found was that it was much more fun, and better for the work, to be looking at a variety of sources. While I started with George, I quickly moved on to every other conceivable source of hands I could think of: images from online, copying from life, Pinterest, every anatomy source I owned, old comics and old masters.

Anyway, I’ve moved on to a hundred forearms and plan to work my way around the body for a few more weeks, till I get sick of it and need to go back to doing all the other stuff that I love, like making games, 3D models and music.

Zulu out.