Blender 2D Animation with Meshes

This is a follow-on from the workflow discussed in the previous post: Preparing 2D Art for Animation.

This is the end result of the process described:

Sprightly Spring Deer

I’m looking to see if there are any advantages to using Blender as a 2D animation tool with meshes over Unity’s spline-based Sprite animation system. The differences between them at the effort and usability/flexibility level are many and subtle, hence the investigation. The two biggest differences for me are:

1. With the Blender option you are animating in Blender (which I like much more than animating in Unity). The downside is that you have to import the animations into Unity and they are pretty hard to modify once they are there, which also means it’s harder to adjust them to react to other actors, objects, and scene elements once you get them into the game.

2. With the Blender approach it’s a mesh in Unity, not a Sprite, so you can do all the transforms that meshes support. You can also light it as a mesh (the default Sprite Renderer cannot be lit). Being able to use light effects on a 2D image within the game is pretty huge for making it look pretty and for making effects or plot devices (think lightning on a dark and stormy night). You can get light effects on Sprites in Unity if you swap out the default shader for another shader, or with the Lightweight Render Pipeline (LWRP), but not every project will suit that. There are also Unity solutions that use custom shaders or a similar mesh-and-material based solution (see further below for more on that).
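
For reference, swapping a Sprite Renderer’s material can be done from script as well as in the Inspector. This is a minimal sketch, assuming litSpriteMaterial is a material you have built on a light-reactive shader (e.g. a diffuse sprite shader):

using UnityEngine;

public class LitSprite : MonoBehaviour
{
    // Hypothetical material using a light-reactive shader, assigned in the Inspector
    public Material litSpriteMaterial;

    void Start()
    {
        // Replace the default unlit sprite material so scene lights affect this sprite
        GetComponent<SpriteRenderer>().material = litSpriteMaterial;
    }
}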

Comparing Unity Sprites to Blender Meshes in Unity

The images directly below are taken from the Game view in Unity. The one on the left is a Sprite-based spline rendering, while the one on the right is the mesh-based .fbx from Blender. You can see the difference in quality between the Sprite on the left and the lossy baked images of the mesh on the right – it’s not huge and can be improved with some tweaking (Bilinear filter mode and upping the Aniso Level to 2 helped with the anti-aliasing, and working with the material’s Metallic and Smoothness parameters also helped).

Sprite (left) and Mesh (right)
Night Time lighting affects the Blender mesh image but not the Sprite based image.
Lighting effects can be much more complex and creatively arranged to hit separate parts of the mesh.
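
The texture tweaks mentioned above live on the imported texture asset, but they can also be set from code. A minimal sketch, where bakedDeerTexture is a hypothetical reference to the UV image baked out of Blender:

using UnityEngine;

public class TextureTweaks : MonoBehaviour
{
    public Texture2D bakedDeerTexture; // hypothetical: the texture baked in Blender

    void Start()
    {
        bakedDeerTexture.filterMode = FilterMode.Bilinear; // smoother sampling than Point
        bakedDeerTexture.anisoLevel = 2;                   // helps edges at glancing angles
    }
}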

As stated above, you can drop an image onto an object in Unity as a material, but it doesn’t light as well and is prone to shadowing. Use the Cutout rather than the Transparent Rendering Mode in Unity or you get a shadow on the transparency. The image below shows a material with a Standard shader and an image on a Unity 2D plane mesh; there is a shaded square around the outside that marks the image boundary.

Transparency Shader

The image below is the same sprite using a material with a Standard shader and the Cutout rendering mode (the diffuse sprite shader worked similarly). The top one is a normal Sprite Renderer with the custom material replacing the default sprite material. The bottom one is a Unity 2D plane with the custom material applied. Both tests look better than the Blender-imported model, can be layered, and react to lighting in game.

So these are the alternatives to the Blender process I’m describing below, and they are good and valid options. I guess the only reason I would choose the Blender animation workflow is that I hate doing this process in Unity’s Animation window. Add Property | drill down through the object | the child | the other child | the bone | the transform | and finally the tiny little plus sign that lets me add one manipulation point! For a deer kick I had 88 different animation points – that’s a LOT of stupid clicking down through an object hierarchy to add properties (I know you can hold down Shift and add more than one property at a time, but you still have to manually expand them all). The other alternative is to right-click and add all properties for an object and then, if you are patient enough, remove the ones you don’t use.

I do like the Record feature that adds properties dynamically, but these problems, plus an interface I find finicky and too small, made me look at Blender.

Importing the Images to Blender and Setting up the Workspace

Moving on to working in Blender with images and meshes, the basic process is this:

  1. For every layer in the artwork of our animated character we exported a separate image file on a transparency. Each .png file is imported into Blender as an empty image object (Add | Empty | Image). You could use a reference or background image instead, but since all the parts might move I wanted to group them all under empties.
  2. A Mesh is created for each image and either shaped to the outline of the image or left as a plane and weighted correctly (more on that later).
  3. The image is baked into the UV of the mesh.
  4. The components are then parented to an Armature with automatic weights.
  5. The meshes are weight painted to correct the deforms.
  6. Now it’s ready for animation.

The image objects are all placed at the same origin (0, 0, 0) and rotated 90 degrees on the X axis so they are visible in the viewport from the “front” view.

All the Deer components Frankenstein’d together into a whole
The visibility of parts is toggled on and off so individual pieces can be worked on.

Making the Meshes

For each piece a mesh is made. I took two approaches here: 1. Model a plane mesh as closely as I could to the shape of the sprite. 2. Use a plain rectangular mesh and rely on weight painting to deform correctly.

To start with the modelling approach: I began with an image and dragged a plane over it in Edit Mode as a wireframe. The origin of the plane was kept at (0, 0, 0) so all the pieces had a common reference (the same as all the images). Using basic mesh deforms and subdivision I created a mesh that matched the image.

The foreleg Mesh

This method was a lot of work, manually placing each vertex on the border of the image. If a vertex is placed a little bit outside the image you get white space on the final product, and if you don’t come all the way to the edge you lose some of the black line and the smooth finish (the UV mapping is slightly out). Plus I found that if you have to warp the mesh too much for a sharp angle or an awkward placement of the square tiling, you get some minor defects along the line during animation.

Vertices placement

After about the fourth component I got a bit sick of manually moving vertices around. So I took the other approach of just using a rectangular mesh and relying on the transparency of the image to do all the work. This is much easier and faster, but there were gotchas when adding the armature and weight painting. The rear leg below is just one big mesh, subdivided into enough squares to give a decent deform without stretching or warping the black line during animation.

Venison

Here is a comparison of the rear leg mesh and the front leg mesh in Solid shading.

Solid Mesh Planes

The image below shows both meshes in Rendered mode (including the armature) and you really can’t tell the difference between them.

Rendered Meshes

The whole mesh ended up looking like this:

Armature and Weight Painting

As you can see above, the armature was added and the mesh objects were parented to it with automatic weights. Because everything is a flat plane, and some planes are meant to overlap others (the closest front leg is in front of the torso and the back leg is behind it), parenting the armature with automatic weights meant that front, middle, and rear meshes would all get an equal measure of weight in parts. This all had to be manually painted out.

Here the torso was weighted across three bones, and only the rear one affected the rump (any leg meshes had to be removed from these vertex groups).

Weights had to be carefully graded otherwise warping of the line would result:

The weight transition is too strong here.
It causes artifacts like this.
These are the gradient changes in weight needed to get a correctly deforming line.

The other problem was that random single or lone groups of vertices would be weighted to a bone and not visible until you moved it in pose mode:
A few vertices on the chest were registered to the root bone. These all have to be manually removed.

The other interesting anomaly with the large rectangular plane meshes was that the weighting would sometimes warp the mesh improperly, bending it around itself in places and showing up as black squares.

The foot vertex group covers all these vertices.
You cannot tell this in Edit Mode when you select it with “show weights”.
During transform in animation these black marks show where the mesh does not warp properly.
The mesh is a mess.
It’s because the shin bone weight doesn’t go all the way to the edge.
It looks right in edit mode.
But if you use the vertex group to select all the vertices it should look like this (all the way to the edge).

These are pretty quick things to fix really, but it took a while to work out exactly what was happening. It was still faster than individually making all the mesh components by hand to fit the image.

Probably a better workflow would be to make reduced, simpler meshes that fit closer to the image without slavishly manhandling the vertices around the borders.

The Shading

UV mapping is easy here, but getting the material right was a bit tricky with the transparencies and images. This is the setup I used:

The Transparent Shader in Blender

That’s about it for getting everything set up in Blender. For more info on the animation steps and getting it all into Unity, see my other post: https://www.zuluonezero.net/2021/11/16/exporting-multiple-animations-from-blender-to-unity/

Preparing 2D Art for Animation

I’ve been doing some work on the 2D side of things in preparation for another game.

This has been the general workflow.

1. Make the assets in Clip Studio.

2. Pack the sprites with Free-Tex-Packer

3. Import the art into Blender, make a mesh for each sprite and UV map it.

4. Add the Armature bones.

5. Weight Paint

6. Animate.

7. Export from Blender as an *.fbx with the animations baked.

8. Import into Unity

9. Add new Materials and import the UV images into Unity.

10. Add the *.fbx imported asset into a scene.

11. Add an Animator Component and drag the animations from the prefab into it.

12. Set up Triggers and connections for the animations (a sketch of this step follows below).
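
That last step usually ends in a one-line script call. A minimal sketch, where “Kick” is a hypothetical Trigger parameter set up in the Animator Controller:

using UnityEngine;

public class DeerAnimationDriver : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    public void Kick()
    {
        // Fires the Trigger, which transitions the Animator to the kick animation
        animator.SetTrigger("Kick");
    }
}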

It’s a lot of work, especially if you make a custom mesh for each piece of art. But I did all this because I really like animating in Blender (especially now that the Pose Library is functional and part of the Asset Browser). To tell the truth, though, I think I got better results using the Spline system in Unity with much less work. There are trade-offs, and I’ll go through them below after more exposition on the workflow.

In this post I’ll go through the asset creation process in Clip Studio.

Making the Asset

The 2D game has a bunch of cute animals so I dug deep into the Disney Sketchbook by Ken Shue and pulled out Bambi for inspiration.

An early Disney sketch

Using this as a rough guide I drafted a few basic shapes for a “Deer” character which looked like this:

Rough Sketch for the 2D Asset

I started using Clip Studio last year in place of the Gimp. I’ve tried all sorts of painting programs and would choose Gimp over most of them (I will not spring for a paid version of Photoshop – it’s extortion!) but Clip Studio won me over with its brushes. It’s not expensive by comparison, and I really like how it fits the specific things I want out of an art program. I’ll often go back to Gimp for projects that require a lot of filters and image manipulation, but for straight drawing on the PC, Clip Studio is a good fit for me. I like how you can make custom tools that mimic their real-life counterparts for a pencil or brush, and I find this program better at it than most (though Autodesk SketchBook runs a close second).

To start with I create a set of layers for the inking of the artwork: one for each moving element in the final asset.

There is a pretty simple formula for this where each limb or piece gets a layer. But you have to have a general idea of what you are going to need in the final asset and what animation is required: there is no point making a separate component if it’s not going to move or be seen in the final product. The trouble is that a lot of this work is iterative, and often you find you have to go back and change something when it doesn’t look right. You need an awareness of where pieces overlap, which lines are going to be warped by the armature bending, and where a line needs to be extended behind a piece that might move and reveal where it ends.

The Inking Layers
This is how the inking layers sit on top of each other, showing where lines overlap or extend.

It’s really easy to see on the body and legs, but even here on the pieces surrounding the head, the lines that make up the ears, hair, and neck all have to move independently but still look connected.

Once I’m done with the inking stage I add more layers for color. At this point the whole file gets saved as an export copy, and the layers are merged into one for each piece again and numbered in the order in which they will sit on the animation cell. I keep the older copy with the separate layers for everything, and all the drafts, so I can go back to it if I have to change something.
This is the whole asset complete and ready for export. Each layer is exported individually as a *.png. Each file is 1024 x 1024 pixels at 600 dpi with a transparent background.
The *.png files are imported into the texture packer to minimize the material size in the final project. Each of these elements gets UV mapped to a mesh in Blender, but more on that in the next post.

Start to finish this took a couple of days of elapsed time, as there is a lot of noodling about with formats, designs, and what-not.

Next up I’ll go into the Blender workflow and preparing the art for animation with complex and simple meshes.

Endless Elevator – Last Week of Beta Testing

Hi Harmony here,

I’d like to thank the wonderful beta testers who signed up for our open beta of Endless Elevator over the last month. I was really surprised by how many people responded to our call, and I’m very grateful to all those who provided feedback.

The Open Beta is still running for another week so there is still time to try the pre-release game: https://play.google.com/apps/testing/com.ZuluOneZero.EndlessElevator

Then head over to our Beta Landing Page if you want to give us some feedback.

http://www.zuluonezero.net/endless-elevator-beta/

Your feedback helps us make a better game – thank you!

One feature of our beta testing was opening multiple channels for users to get back to us. Not only did we use the Google Play beta feedback and stars system, but we also took feedback by email and through social media. One new thing we did was to incorporate a chat channel on our Beta Landing Page where users could provide anonymous feedback directly on our web site. I feel that the anonymity of this format was a really important source of quality comments for us.

Our Chat Page

Happy Playing – Harmony Out.

Getting a Foot in the Door of Game Design

First of all, sorry about the misleading title – this post is about getting the doors working in Endless Elevator, the game we are currently developing. I thought it was a good pun, and since this post is all about our development process, not too bad a title. The only career advice I’ve got is to just start making games… any games.

You’d think making a door that opens and closes would be a pretty simple thing to do but surprisingly it wasn’t.

I designed and built some 3D models of building components in MagicaVoxel and exported them as .obj files. I use MagicaVoxel because it’s free and really quick and simple to use. When I model components I can be sure that they are all exactly the same scale and that all the different bits (no matter when I made them) will fit together without any hassle. But the models are not optimised for game engines, as they have a reasonably high poly count due to the modelling process within the program. Most of the models come out with heaps of unnecessary triangles that need to be managed first. In most cases I import the object into Blender and use the Decimate modifier on the planes (with an angle of about 20, if you are interested). In the case of the door object it was pretty simple (it is just a door after all) and I didn’t need to optimise it.

Here is what the door looks like in MagicaVoxel:

Notice that the door object sits just forward of the enclosing frame, and that when exporting the object the center is at the middle of the bottom plane of that frame. The door is off-center because it’s modelled to fit exactly in a doorway that sits exactly in that position within the frame. This saves me a huge amount of time lining everything up when it gets to Unity, as the position of the door (or any other object for that matter) is already in the right spot. The problem is that the point of origin for the door is in the wrong spot: it exports a few units behind the door and on the floor! This becomes important when you try to rotate the object (like you would when opening and closing a door) and the pivot point is not where the hinges should be.

To fix this I had to import the .obj file into Blender and reposition the point of origin for the model.

This is what it looks like in Blender when I did this:

To do it I selected the edge that I wanted the door to pivot on when opened.

Then in Edit Mode:
Mesh -> Snap -> Cursor to Selected

In Object Mode:
Object -> Transform -> Origin to 3D Cursor

So that puts the cursor onto the correct position in the middle of that edge, where the hinges would be, and resets the point of origin (which is where it will pivot when rotated in Unity) to the right spot.

Once it was all imported into Unity and looking good, I set up a prefab for the “Doorway” object with the door as a child object. The doorway object has a bunch of colliders to stop the player walking through the walls, and a big sphere collider where the door is to trigger the open/close function when the player walks into it.

This is what the doorway looks like in Unity:

Next I scripted up a few options for opening the door. I’ll post the script at the end of this article, but basically there were three ways of opening the door that I wanted to test. (Actually I tried about six ways but whittled it down to the most simple methods – and just as an aside, there is an actual “hinge” object type in Unity if you ever need it.)
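
As a sketch of that aside (not the method I used below): Unity’s physics HingeJoint component can swing a door for you, and Unity adds the required Rigidbody along with the joint. The anchor value here is an assumption about the door’s half-width:

using UnityEngine;

public class PhysicsDoor : MonoBehaviour
{
    void Start()
    {
        HingeJoint hinge = gameObject.AddComponent<HingeJoint>();
        hinge.anchor = new Vector3(-0.5f, 0f, 0f); // hinge on the door's edge (assumed half-width)
        hinge.axis = Vector3.up;                   // swing around the vertical axis

        JointLimits limits = hinge.limits;
        limits.min = 0f;
        limits.max = 90f;                          // allow it to open up to 90 degrees
        hinge.limits = limits;
        hinge.useLimits = true;
    }
}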

This is how the script looks in the editor:

Notice the slider at the bottom that controls how far I want the door to open. It’s really handy to have this when playing in the editor and getting your settings right. If you want to know more about using it, see this post.

The three tick boxes are for testing the three different ways of opening the door.

Snappy was a quick, simple change of the rotation from closed to open with no in-betweening. It looks a bit unnatural as the door magically goes from closed to open, but it’s not too bad, and most people are pretty used to similar behaviour in video games.

The active line in the code is:
the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);

The next method works more like a real door, in that it swings open and closed the whole way in a steady fashion. The big problem with this method was that while it was fine when the character was going into the doorway with the swing of the door, when it was time to come back out the door was in the way. There was not enough room to trigger the door opening from the other side without being hit by the door as it opened. Plus, if the player enters the collider on the wrong trajectory, the character gets pushed through a wall by the swinging door, which is sub-optimal. I called this method InTheWay!

The active line here is:
the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);

In an effort to combat this I tried a hybrid method that swings the door open to a point where it won’t hit the player and then does the magic transform to get the rest of the way open. I call this one aBitBoth. It looks a little weird too, like there is an angry fairy pulling the door closed with a snap after the character enters.

Here are all three to compare.

Snappy

In The Way

A Bit of Both

I’m not too sure which one I’m going to use at this stage. The Snappy method works best for now, but I like the In The Way method better: it looks more natural, and I like that you have to wait just a few milliseconds for the door to swing (it adds tension when you are in a hurry to escape a bullet in the back). I could halt the player’s movement from the rear of the door when it triggers to open from the closed side, or maybe play around with the radius of the sphere collider. Neither solution seems like a great idea to me right now, but something like that will need to be done if I’m going to use that method. Maybe I could just have the door swing both ways and open away from him when he is behind it, but that’s probably a bit weird for a hotel door.

Here is that script that I was testing with:

using UnityEngine;

public class OpenDoor : MonoBehaviour {

    public bool openMe;              // set by the trigger colliders below
    public GameObject the_door;      // the door child object to rotate
    public bool snappy;              // method 1: snap straight between open and closed
    public bool inTheWay;            // method 2: swing steadily the whole way
    public bool aBitBoth;            // method 3: swing part way, then snap
    public Vector3 targetPositionOpen;
    public Vector3 targetPositionClosed;

    [Range(0F, 180F)]
    public float turningOpen;        // rotation (degrees) at which the door counts as open

    void Start ()
    {
        targetPositionClosed = new Vector3(0f, 180f, 0f);
        targetPositionOpen = new Vector3(0f, turningOpen, 0f);
    }

    void Update()
    {

        if (openMe)
        {
            OpenMe();
        }
        else
        {
            CloseMe();
        }

    }

    private void OpenMe()
    {

        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
        }

        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
        }

        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y > turningOpen)  // 144f
            {
                the_door.transform.Rotate(-Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionOpen);
            }

        }

    }

    private void CloseMe()
    {
        if (inTheWay)
        {
            if (the_door.transform.rotation.eulerAngles.y <= 180)
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
        }

        if (snappy)
        {
            the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
        }

        if (aBitBoth)
        {
            if (the_door.transform.rotation.eulerAngles.y <= turningOpen)  // 144f
            {
                the_door.transform.Rotate(Vector3.up * 90 * Time.deltaTime);
            }
            else
            {
                the_door.transform.rotation = Quaternion.Euler(targetPositionClosed);
            }
        }
    }

    void OnTriggerEnter(Collider col)
    {
        string colName = col.gameObject.name;
        Debug.Log("Triggered OpenDoor!!! : " + colName);

        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)") 
        {
            openMe = true;
        }
    }

    void OnTriggerExit(Collider col)
    {
        if (col.gameObject.name == "chr_spy2Paintedv2" || col.gameObject.name == "BadSpy_Package(Clone)") 
        {
            openMe = false;
        }
    }

}

Why Normalize()

I’ve been doing some work on the AI for enemy behaviours in our unreleased game Endless Elevator and have been delving into the book “Unity 2018 Artificial Intelligence Cookbook – Second Edition” by Jorge Palacios.

It uses the Normalize() function regularly to record the direction of one object in relation to another, and it got me thinking about the usefulness of this function. You can see why knowing the direction of the Player could be good for an enemy AI behaviour, but I wanted to investigate more deeply how this works and how I could use it.

One of the things I hadn’t consciously been aware of, but which is obvious once you point it out, is that a Vector3 can define a location (i.e. a point in space like (0, 0, 0)), but it can also define a direction if you have a starting position and a target position.

A good example of a Vector being used to denote both a location and a direction in Unity is the Ray.
A Ray consists of two Vector3 data points: the first Vector3 is the position the ray starts from, and the second Vector3 is the direction it points.

When a direction Vector3 is Normalized it keeps its direction, but its length (how far away one object is from another) is set to 1. In Unity, a vector too small to be normalized gets set to zero; anything else comes out with a length of exactly 1.
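
A quick way to see this in action is to log a couple of vectors (a minimal sketch; drop it on any GameObject):

using UnityEngine;

public class NormalizeDemo : MonoBehaviour
{
    void Start()
    {
        Vector3 v = new Vector3(3f, 0f, 0f);
        v.Normalize();
        Debug.Log(v); // (1.0, 0.0, 0.0) - direction kept, length now exactly 1

        Vector3 tiny = new Vector3(0.000001f, 0f, 0f);
        tiny.Normalize();
        Debug.Log(tiny); // (0.0, 0.0, 0.0) - too small to normalize, set to zero
    }
}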

As an example we can simplify a bit by limiting ourselves to a Vector2.

A Vector2 can also be either a point or a direction depending on what you want to use it for.
If it’s (3, 0) then it could be either a point at x=3, y=0, or a direction along the x axis (one dimension) with a length of 3 and a slope of 0.
If you Normalize() that example, the Vector2 becomes (1, 0): it has a length of 1 but still points in the x direction.
If you add the y value (dimension) into the picture, where x=3 and y=3, the Normalized value becomes roughly (0.7, 0.7). So you can see that the axes/dimensions affect each other in defining the Normalized direction.
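
For the curious, the maths is just dividing the vector by its own length: the length of (3, 3) is √(3² + 3²) = √18 ≈ 4.243, so (3, 3) / 4.243 ≈ (0.707, 0.707), which rounds to the (0.7, 0.7) above.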

We can extrapolate this into three dimensions without too much difficulty, but I’m not going to describe that here as it isn’t necessary. We only need to grasp what it means and what sort of use you can make of it in a Unity project.

For Example…

This is a simple use case of why you might want to use Normalize().

(Note that there is also a normalized property, which leaves the current vector unchanged and returns a new normalized vector.)

I’ve created an example project where the relationship between two objects (a Cube and a Sphere) gives us a direction (using Normalize()). The direction is applied to a transform.Translate call on a third object (a Cylinder), which then moves in the given direction.

In the script below, attached to our Green Cube (the unchanging point of origin at (0, 0, 0)), we define the Pink Sphere as the target. In the script we get the difference between the position of the Sphere and the Cube and Normalize() it to make a direction. It doesn’t matter how far the Sphere is from the Cube; the direction stays the same.

Also, to extend the example, I’ve added a Ray with the same origin as the Cube (0, 0, 0) that will always point to the Sphere. This shows that the Normalized Vector3 is the same as the direction of the Ray pointing to the Sphere.

Normalizing Direction

The public Vector3 lineDirection is the difference between the positions of the Cube and the Sphere. This is the variable we pass to the Cylinder to move it in the required direction.

using UnityEngine;
using UnityEngine.UI;

public class HowToNormalizeDirection : MonoBehaviour {

    public Vector3 lineDirection;
    public GameObject target;
    public Text vector_text;
    public Text norm_text;
    public Text ray_direction;
    public Ray whatsTheRay;

    // Use this for initialization
    void Start () {
        lineDirection = new Vector3();
        whatsTheRay = new Ray();
    }

    // Update is called once per frame
    void Update () {
        // The difference between the two positions gives the direction to the target
        lineDirection = target.transform.position - transform.position;
        vector_text.text = "Vector3 target.transform.position :" + lineDirection;
        lineDirection.Normalize();
        norm_text.text = "Vector3 Normalized() :" + lineDirection;
        // A Ray takes an origin and a direction; because this Cube sits at
        // (0, 0, 0) the Sphere's position doubles as that direction.
        whatsTheRay = new Ray(transform.position, target.transform.position);
        ray_direction.text = "Ray : " + whatsTheRay.ToString();
    }
}

Moving the Cylinder in the Same Direction

In the script below, attached to the Cylinder (the class is named MoveCapsule), we get the direction from the lineDirection variable above, but we could also have used the built-in Ray.direction property to return the same result.

using UnityEngine;

public class MoveCapsule : MonoBehaviour {

    public Vector3 direction;
    public float speed;
    private HowToNormalizeDirection script;

    // Use this for initialization
    void Start () {
        direction = Vector3.zero;
        speed = 0.50f;
        // Cache the Cube's script rather than finding it every frame
        script = GameObject.FindWithTag("Cube").GetComponent<HowToNormalizeDirection>();
    }

    // Update is called once per frame
    void Update () {
        direction = script.lineDirection;
        //direction = script.whatsTheRay.direction;  // the Ray gives the same direction
        transform.Translate(direction * speed * Time.deltaTime);
    }
}

In the video below you can see this demonstrated. The position of the Pink Sphere is being manually manipulated using the Transform on the right. The text areas expose the Vector3 values being accessed by the objects and scripts.

So what’s going on in this amateur-hour video?

The three lines of text at the top of the game scene represent:

1. The Vector3 position of the Pink Sphere.

2. The Normalized Vector3 direction, which is the relationship between the position of the Green Cube (0, 0, 0) and the Pink Sphere (x, y, z).

3. The uses of Vector3 in the Ray: first the position of the origin, and then the direction of the Ray, which is exactly analogous to the Normalized Vector3 in point 2 above.

The uses for something like this could extend to a weapon aiming system or a custom controller, or it could be passed into a path-finding routine. It is not the only option, of course; there are other built-in methods like transform.LookAt(target) which may accomplish your programming goal.
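
To make the aiming idea concrete, here is a minimal sketch of steering an object toward a target with a normalized direction; target and projectileSpeed are assumptions, not part of the demo project above.

using UnityEngine;

public class AimAndFire : MonoBehaviour
{
    public Transform target;
    public float projectileSpeed = 10f;

    void Update()
    {
        // Direction from us to the target, reduced to unit length
        Vector3 direction = (target.position - transform.position).normalized;

        // Move toward the target at a constant speed regardless of distance
        transform.position += direction * projectileSpeed * Time.deltaTime;

        // Or simply face the target with the built-in helper
        transform.LookAt(target);
    }
}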

During the research for this post I found the following links helpful.

https://docs.unity3d.com/ScriptReference/Vector3.Normalize.html
https://docs.unity3d.com/ScriptReference/Vector3-normalized.html (the different but related property)
https://forum.unity.com/threads/what-is-vector3-normalize.164135/
https://answers.unity.com/questions/52881/vector-normalization-question.html
https://www.dummies.com/education/math/calculus/finding-the-unit-vector-of-a-vector/ (for why you really don’t want to know about the math or do it by hand)
https://www.mathsisfun.com/algebra/vector-unit.html (best for simple vector explanation)
https://docs.unity3d.com/ScriptReference/Ray.html
https://docs.unity3d.com/ScriptReference/Ray-ctor.html

Indie Game Release Click Through and Conversion Rates

Hi Harmony here…

Last Sunday we released our game The Dog Run into the Google Play Store.

This post is a breakdown of the first week release click through rates from this web site and the conversion rates that resulted in an actual download of a game.

Let me say from the outset that without some form of advertising or publicity campaign, a new game on the Google Play Store is never going to be successful. We found this out the first time around with our game NumBlocks:

NumBlocks is a fun little numbers game, a bit like Tetris with numbers that add together. It was our first game with Unity and little more than an experiment in finishing a game that looked good enough to publish. We weren’t that proud of it and were really just proving our development and release cycle for future games. It was never publicised and never got downloaded… by anyone. So if you are not promoting your game, no-one is looking, and no-one will play it.

This time around, with The Dog Run, we really do want people to play the game and enjoy it. The game is a simple endless runner with a quirky hand-drawn style that posed many interesting development challenges. More information about the game is available here:

The Dog Run

But what I want to investigate, and be transparent about, is the way we used this web site and the landing page to interest people in playing the game.

We released the game into the Production Google Play Store on Sunday October 21st.

The next day, at about 8 am New York time, we posted on social media channels (we found from research, and validated with our own posts over several months on this site, that this is one of the times of the week that gets the most traffic).

We only used Reddit, Facebook and Google+ to promote the release of our game. We tweeted and posted to LinkedIn and Tumblr, but no-one really follows our feeds on those platforms so I won’t count them.

This is the list of Groups that we hit on social:

Facebook

IGC : Indie Game Creators
Unity 3D Game Developers
Unity 3D Game Developers (different group)
UNITY3D Game Developers
GameDev Show and Test
Game Developers
Indie Game Players & Developers!
INDIE GAMES
Indie Game Promo
Indie Game Development Feedback (IGDF)
Indie Game Chat
Indie Game Promotions

Google+

Unity3d Indie game developers
Unity3D Mobile Developers
Unity 3D Enthusiasts
Unity 3D Developers
Android Apps and Games-Android Mobile Zone
Android Game Developer
Free Mobile Games
Game Developers
Game Developers (different group)
Mobile Game & App Developers
Unity
Unity3d Indie game developers
Unity 3D Developers
Unity3D Mobile Developers

Reddit

/r/Games/
/r/gamedev/
/r/androidapps/
/r/AndroidGaming/
/r/gamernews/
/r/IndieGaming/
/r/androiddev/
/r/gamedesign/
/r/devblogs/
/r/SideProject/
/r/playmygame/
/r/Unity2D/
/r/IndieDev/
/r/indiegames/

So what can we say about this set of groups? Well, they are all gamers’ groups. These are “our” people: this is where we go when we need feedback or inspiration or help. We may not be active in all of them, and by no means is this an exhaustive list, but after working our way through this lot on a Monday morning there is no energy or time left to look for more groups to join and contribute to. So this is not a “general public” group, nor is it representative of what I think our target audience might be. But these are gaming enthusiasts, and I think they are more likely to try new games and provide the sort of validating, critical, and informed feedback that is important for making the game better. (I love you all!)

So now for the stats….

Seven days after posting, these are the figures for traffic to our web site (which normally gets about 20 – 30 visitors a day). The graph below shows the two weeks previous to release and is indicative of the sort of traffic we get. The blue bar just under the 500 mark is what happens when we do a blog post. The big orange one is the day after our “social media campaign” (if you could call it that).

Total number of visitors over that week was: 1199

But as you can see, a day or two after we posted, the traffic dropped straight back to normal, although the traffic we did get that week went mostly to the game’s landing page.

It is interesting to see where that traffic came from – it’s overwhelmingly Reddit that drives traffic to our site. The image below shows the stats from the big orange day.

So as you can see, even though we posted to heaps of groups on Google+ and Facebook, not many users of those platforms were reached.

Out of this massive spike in traffic the number of people who actually clicked on the link to go to the Google Play Store was: 76

That is about 6.3% of our total traffic during that period (76 out of 1199). Every day that week the click-through rate held at about the same average.

Now for the fun part. Out of those 76 visitors, the number who actually downloaded the game and played it was: 21!

Well that’s a huge improvement over none.

I was very excited by this number.  I’ll type it again in long form…..  Twenty One !

So what I want to drive home with this post is the amount of traffic you have to drive to your web site to get the sort of volumes that will get your game downloaded.

Let’s break it down into simple numbers. Out of about 1200 people, only 76 looked at the game on the store and only 21 downloaded it: 1.75% of the traffic to my landing page downloaded the game. That’s the truth of it, and remember that these are “our peeps”, not the general public, so I can’t even say this is indicative of the way the real world works. But it sure is interesting, and in a few months, after I’ve spent some time marketing this game to real people (not just the gaming community), I’ll do another post and see if the stats still hold.

Harmony out!

P.S. If you want a friendly copy of all those social media links email me at zuluonezero.z10@gmail.com and I’ll forward them back. (Maybe on a later post – if there is enough interest – I’ll  put them online). Zulu.

The Dog Run is in Production on the Google Play Store

This week we moved our latest game The Dog Run into Production on the Google Play Store.

The Dog Run is an Endless Runner for Android that supports animal welfare!

It’s a free game. There is the option to watch ads, but instead of in-game rewards, all profits from the advertising go to support animal welfare and animal hospitals.

The game is about taking your fun-lovin’ pooch for a run. But watch out! There are a bunch of obstacles in your path. It’s a good thing your dog is a natural jumper and can run all day in all sorts of weather.

Read the review from Daikon Media here.

They say, “… it’s not only the style, that I like, it’s also the unique sense of humor…the game is even fun and original.”

Feedback can be posted on the Google Play Store, on this website in the comments, or directly by email.

ZuluOneZero Game Design
http://www.zuluonezero.net/
zuluonezero.z10@gmail.com

Unity Debugging with ADB for Android

Hi Zulu here… (First of all … sorry for the cat)

Let me say straight off that your first port of call for any Unity debugging should be the Unity Console.

Though sometimes you need more low-level operating system logging for Android. This is where adb (the Android Debug Bridge) comes in.

On Windows this is a command-line tool for viewing the logs from a connected Android device.

The command line is not the only way to use the tool; sometimes it’s better to use the Android Studio interface (a bit more graphical).

You will need to have your Android device connected to your workstation and USB debugging turned on (Google that if you need to). You could also use an Android emulator on your desktop.

I use Leapdroid or KoPlayer. (Leapdroid have now joined Google and no longer support the emulator, but it’s still available to download on the internet.) I guess you could also use the emulator that comes with Android Studio.

When your game is installed and running on your device, go to the directory on your workstation (PC) where the Android SDK tools are.

On mine they are here:

C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools

In the tools directory open monitor.exe (this tool was deprecated in Android Studio 3.0 and replaced by the Android Profiler; mine is still on the lower revision).

This documentation on the Android site is a good start investigating the profiler:

https://developer.android.com/studio/profile/android-profiler

You can also get to LogCat directly from Android Studio (if you have it open):

Go to View | Tool Windows | Android Monitor

At the bottom of Android Monitor, in its own tab, is the LogCat console window. This contains all of the information about what’s happening in the Android operating system.

As you can see the LogCat console contains a lot. It logs everything.

To filter it, type “tag:Unity” in the textbox at the top to see only the messages that relate to Unity.
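
On the Unity side, anything you write with the Debug class ends up in LogCat under that “Unity” tag, so a few well-placed log lines are all you need. A minimal sketch:

using UnityEngine;

public class LogcatExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Game started");                  // Info level in LogCat
        Debug.LogWarning("Low on something");       // Warn level
        Debug.LogError("Something actually broke"); // Error level
    }
}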

Using adb logcat from the command line

Open a command prompt on your development workstation and find the location of your Android SDK platform-tools folder.

Mine was here:

/Users/YourUserName/Library/Android/sdk/platform-tools

If you get this error when you run adb.exe from the command prompt:
‘adb’ is not recognized as an internal or external command, operable program or batch file

You can add adb to the PATH environment variable (and restart the command prompt):

setx PATH "%PATH%;C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools"


To run logcat through the adb shell, the general usage is:

[adb] logcat [<option>] … [<filter-spec>] …

This is the official Android Developer Logcat Command-Line Tool documentation:

https://developer.android.com/studio/command-line/logcat

but you can also get help on the command line with adb logcat --help.

It can be handy to know the device ID of your Android phone or tablet. This command will help:


C:\Users\<user_name>>C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools\adb.exe devices
List of devices attached
ce10171a5c19853003 device


You can make the log spew into a file instead of into your console (the console is pretty much useless as there is too much to scroll through).


C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools\adb.exe -d logcat > adb_logcat_out.txt

-d          use USB device (error if multiple devices connected)
logcat      show device log (logcat --help for more)
-s SERIAL   use device with given serial (overrides $ANDROID_SERIAL)


The default log location on my machine was:
C:\Users\<user_name>\AppData\Local\Temp\adb.log

A few seconds of output got me a 6.5 MB file, so a bit of filtering is advisable.

If you run into trouble with the adb server, just kill it and restart:


C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools\adb.exe kill-server

C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools\adb.exe -s ce10171a5c19853003 logcat -s DEBUG

C:\Users\<user_name>>C:\Users\<user_name>\AppData\Local\Android\sdk\platform-tools\adb.exe -s ce10171a5c19853003 logcat -s DEBUG
* daemon not running; starting now at tcp:5037
* daemon started successfully
--------- beginning of main
--------- beginning of system


If you want further help check out these pages from the Unity Manual and Tutorials:

https://docs.unity3d.com/Manual/TroubleShootingAndroid.html

https://docs.unity3d.com/Manual/LogFiles.html

https://unity3d.com/learn/tutorials/topics/mobile-touch/building-your-unity-game-android-device-testing

As a final word I’ll also direct you to a package called Device Console on the Unity Asset Store. I’ve not used it, but it looks really good, and for fifteen dollars it might save you a lot of hassle.

https://assetstore.unity.com/packages/tools/utilities/device-console-44935

Endless Elevator Mechanics

Howdy. Xander here…

This is a quick demo of the basic play mechanics from our new game in development, Endless Elevator. We got the basic movement working a while ago (see our Smooth Moves post) and now that The Dog Run is in beta testing we can spend some more time working on this game.

(If you want to help with Beta Testing and be an early adopter of The Dog Run you can sign up here: https://play.google.com/apps/testing/com.ZuluOneZero.TheDogRun)

This clip below for Endless Elevator shows:

The Good Guy Cop Character movement (oh yeah he goes left and right)

Firing his awesomely powerful dumb dumb gun

Using (the eponymous) Elevator (see if you can spot the camera tracking bug!)

Traversing (the namesake) Escalators

Finally, entering a doorway with a Power Cube (I’m not sure what it will look like in the final game yet). When he goes into a doorway with the special block, the game flips and he goes into a Super Spy Store (not shown) where fun new weapons and power-ups are available! Cool.

The Dog Run – Game Play Demo

Hi Harmony here…

Today I did some game play testing of the new game we are beta testing right now.

If you want to do some beta testing you can opt in here:   https://play.google.com/apps/testing/com.ZuluOneZero.TheDogRun

Feedback can be posted on the website in the comments or directly by email at
zuluonezero.z10@gmail.com