Endless Elevator Weapons

The last few weeks I’ve been working up a couple of new weapons for the main protagonist in our upcoming game Endless Elevator.

This is a bit of a “Screen Shot Saturday” demo.

The first bad guy is dispatched with the standard issue firearm (with the big red bullets).

The next two are fried by the new Electricity Gun.

The last huddle is amiably frozen with the non-lethal Freeze Ray or Ice Gun.

I’m going to release a beta/demo for general testing and fun pretty soon so I’ll post again when it’s ready.

Unity Code Review and Style Guide

Hi Trixie here. It’s Code Review time! Yay. Time to clean up that code and start really looking at all the crap work you’ve done over the last months and clean that sh!t up!

I’ll post our in-house Style Guide (which of course goes out the window when you are coding in anger) at the bottom of the post. Hopefully someone else will follow it or find it useful.

This is how the code review works:

  • Make a list of all your scripts and the game objects they are attached to.
  • Go through all the variables/functions/iterators and make them follow the standard in the Style Guide (which includes making logical names sensible to humans).
  • Check all your Public references and make them Private if nothing else is really using them.
  • Trawl the console output and clean up the extraneous debugging guff.
  • Start grouping your code into functions that do a similar thing and try and make them standardised.
  • If you are lucky you will find some optimisations in there as well to make your game run faster, better, more efficient, lighter.

Then at least it will be another few months before you muck it up again.

I know there are automated tools to do some of this work but I prefer to work through my own methods. I like to have complete control over the process. My favourite tool for analysis is the Unix Command line (I know, weird right?). My workstation is Windows 10 but I have a Cygwin-like utility called MobaXterm installed which allows me to interrogate the file system like a Unix machine and use grep and other commands on all the files that make up my scripts.

Basically I want to build a big spreadsheet of information that lists all my scripts and the Game Objects they attach to and the Public interfaces and variables etc.
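As a sketch of the kind of one-liner that seeds that spreadsheet (the script name and its contents below are invented for illustration, not from the actual project):

```shell
# Set up a throwaway sample script to demonstrate on
mkdir -p /tmp/code_review && cd /tmp/code_review
cat > AI.cs <<'EOF'
public class AI {
    public bool swingMe;
    public GameObject targetDoor;
    private int counter;
}
EOF

# Emit "script,public member" rows ready to paste into a spreadsheet
for f in *.cs; do
  grep "public " "$f" | sed -e 's/^ *//' -e "s/^/$f,/"
done
```

Each output row pairs the script file with one of its public members (e.g. `AI.cs,public bool swingMe;`), which drops straight into a CSV column.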

This is one that I started for Endless Elevator:

Then I start extracting Game Objects and Code loops using the Unix Command Line so I can start to build a picture of what’s going on.

One of the outcomes of this process is that I want to be left with a map of how stuff works that I can connect the dots on in the future. It’s kind of like a Design Document in reverse.

Here are a few examples of the commands I’m using and how the information is extracted:

# grep bool *.cs # Getting out all my bools – I tend to use true or false tests a lot to explicitly define functions and events.

As you can see above the bool names are mostly descriptive and the variable name is mostly in camelCase. Those few like swingme will get updated to swingMe and the overly generic “yes” and “yesNow” (what was I thinking!) will be made more descriptive of what we are really agreeing to in that code.

This command looks at all the sorts of functions I am using and where. It’s nice to know which ones don’t have an Update() function and where all my Collisions and Triggers are.
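The exact command isn’t reproduced above, but something along these lines gives the same picture (the file names and contents are invented, and the pattern is mine rather than necessarily the one used here):

```shell
# Two toy scripts: only one of them has an Update()
mkdir -p /tmp/fn_scan && cd /tmp/fn_scan
cat > Player.cs <<'EOF'
void Start() { }
void OnCollisionEnter(Collision c) { }
EOF
cat > AI.cs <<'EOF'
void Start() { }
void Update() { }
void OnTriggerEnter(Collider c) { }
EOF

# Which scripts handle Collisions or Triggers?
grep -H "OnCollision\|OnTrigger" *.cs

# Which scripts have no Update() at all?
grep -L "void Update" *.cs
```

`grep -L` inverts the usual file listing, which is exactly how you spot the scripts with no Update() without opening any of them.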

# egrep "\{|\}" AI.cs # I use something like this to collect an idea of the complexity of my loops and functions.

# egrep "\{|\}|\(|\)|if|for|else" AI.cs # This is my favourite type of command. Like the command above I use it for working out the structure of a script. This is way easier than scrolling through lines and lines of code and comment and makes it really easy to spot areas where you have gone loopy crazy pants.
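Run against a toy script (contents invented for illustration), the structure dump looks like this:

```shell
# A toy script with some nesting to expose the structure
mkdir -p /tmp/structure && cd /tmp/structure
cat > AI.cs <<'EOF'
void Update() {
    if (ready) {
        for (int i = 0; i < 3; i++) {
            Fire();
        }
    } else {
        Wait();
    }
}
EOF

# Keep only the structural lines: braces and control keywords
egrep "\{|\}|if|for|else" AI.cs
```

The plain statement lines (Fire(), Wait()) drop out and what’s left is the skeleton of braces and branches, so deeply nested loopy sections jump straight out at you.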

# grep GameObject *.cs # This one makes a good list of all the scripts that I reference another Game Object in. It helps build a visual map of what dependencies there are between objects.

# grep script *.cs # This one does a similar thing in that it grabs all the times that I am calling something from within another script attached to another Game Object.

The command line is also a nice quick way to check how many lines of code you have written.
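For example, `wc` totals it up in one go (file names and lengths invented for the demo):

```shell
# Two toy scripts of known length
mkdir -p /tmp/loc && cd /tmp/loc
printf 'line\nline\nline\n' > A.cs
printf 'line\nline\n' > B.cs

# Per-file and total line counts across all scripts
wc -l *.cs
```

The last line of the output is the grand total across every script, comments and blanks included.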

There are some good Style Guides out there.

I like this one for its organised folder structure and naming convention for files: https://github.com/stillwwater/UnityStyleGuide

I like the Microsoft C# one for layout and block style: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/inside-a-program/coding-conventions

I like this for its use of camelCase and Capitals and its preferred block style: https://github.com/raywenderlich/c-sharp-style-guide

As a basic guide though I always try and mimic the Unity API manual. For example this page for OnCollisionEnter:

// A grenade
// - instantiates an explosion Prefab when hitting a surface
// - then destroys itself
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Transform explosionPrefab;

    void OnCollisionEnter(Collision collision)
    {
        ContactPoint contact = collision.contacts[0];
        Quaternion rotation = Quaternion.FromToRotation(Vector3.up, contact.normal);
        Vector3 position = contact.point;
        Instantiate(explosionPrefab, position, rotation);
        Destroy(gameObject);
    }
}

The comments are appropriate. The Class name uses a Capital for each word. The curly brackets are on a new line and the block is indented. Variables are in camelCase.

Here is our minimal Style Guide. This is the basics of how we roll things:

Thanks for reading. See you all in the New Year… Trixie.

Unity: How to Create a Cut-Scene with Cinemachine and Unity Recorder

Hi Xander here,

For Endless Elevator we wanted to do an Introduction Scene for the game. The gameplay, as the name suggests, consists of climbing endless elevators and escalators. The player navigates floor after floor in the bad guys’ luxury hotel and tries to climb as high as possible while defeating the bad guys. It’s a 2.5D top-down scroller clipped to the limits of the building. Just dumping the player into the start of the game in the levels of the building felt a little weird as there was no context to where you were in the story. Hence the need for an opening shot to set the scene and to literally drop the player into the game.

Our hero flies into the enemies’ oasis headquarters in a helicopter and storms into the foyer of their luxury hotel. We mocked up a scene and a helicopter in Blender and imported the assets into the main foyer scene of the game. I hadn’t used Unity’s Cinemachine before but wanted to try it out. Previously, in other projects, we had captured gameplay for cut-scenes using external software and video editing suites, which was OK, but the experience with Cinemachine and Unity Recorder was way smoother. It was much easier to work with and produced much better quality AVI files. Plus we didn’t have to do custom scripts for switching cameras and panning. It was so easy it kind of made me excited about movie making with Unity but you know I don’t need another distraction.

To start working with Cinemachine and Unity Recorder you can download them using the Package Manager. Unity Recorder has only recently been added (it’s still also on the Asset Store) so you need to enable the “Preview Packages” selection from the Advanced menu in the Package Manager.

Cinemachine in the Package Manager

Have a look at the Unity Manual and tutorials for more info about Cinemachine and Unity Recorder.

Below is a screen shot of my scene in Unity. You can see the main building in green and the surrounding buildings and water in the bad guys oasis HQ. The helicopter is just visible down where the camera sight lines join and on the left in the Hierarchy you can see my Timeline component and my two vcams (Virtual Cameras).

The Timeline is where all the magic happens and was very easy to set up.

First we did a few animations on the helicopter to fly it in to the building and make the rotor spin. Then we added an animation to move the character from the helicopter into the building (which looks terrible, but remember this is a quick mock up).

The Helicopter Animation

We dragged this animation into a new Animation track on the Timeline object (right click and Add Animation Track). Then we created two Virtual Cameras in the scene. One Camera (vCam1) was set using the properties in the Inspector to automatically Look At and Follow the helicopter. This means that wherever we flew the helicopter the camera would follow it round from behind at a set distance and keep it in the frame automatically. This was really fun when we had it under manual control for testing and worked well when under the control of the Animator. We used a preset for a bit of camera jitter and shake to mimic a real camera man in a second helicopter.

The second Camera (vCam2) was stationary at the building site but set to Follow (ie. Look At) the Main Character. We timed the cut from one camera to the other so that once the helicopter landed it would pass control to the second camera and seamlessly start focusing on the Player. This was so easy it was ridiculous. The Camera objects were added to the Timeline and the split where we cut from one camera to the next is clearly visible in the screenshot below (two triangles). The first time I ran it and that view cut automatically from one vcam to the other gave me an enormous sense of satisfaction like I’d just been named a modern day Hitchcock.

The Timeline Editor Window

To record what we had done as an AVI we opened the Recorder Window:

Opening the Recorder Window.

We used the default settings and triggered the Start of the Recording with the start of the animation by having it in the Timeline. The Capture target was the Game View (you can also get the other elements of the Editor if you need it). The Output Resolution is interesting as you can use the Size of the Editor Game window on your screen or set it to standard default movie formats.

The Recorder Window

That’s about it. We hit Play in the Editor and the Timeline starts the Recording of the AVI and synchronises the Animations and the Camera movement automatically. Once we are done we are left with a good quality moving image of our game screen that we will use as the cut-scene to drop the player into the start of our game. Obviously what we got here is just a “screen test” but I was really happy with what we could achieve in just a few hours and with so little complexity.

Xander out…

Unity SVG Screen Test

I’ve been toying with an idea for a new text adventure game. I wanted to finally start using the framework we built earlier in the year (See this post and check out the code for yourself). I also wanted to get into the new-ish support for SVG graphics in Unity. I love Vector Graphics and I think this project is going to be a great way to use their simple clean aesthetic.

I did a “screen test” of a very quick vector image using InkScape. (I have a deep respect for the Linux and Open Source community and want to thank them now for such awesome software).

To get started with SVG files in Unity 2018.x you need to import the package using the Package Manager (it’s called Vector Graphics). This allows you to pick up SVG files when you import a new asset. You can also drag and drop from your explorer into your project hierarchy.

The SVG image shown below is the three receding black squares. The SVG sprite is on a separate canvas to my text (and sorted behind it). I’ve added a couple of raw images in the same color as my background and an animator component to move them around to give the “drawing” effect.

I have to say I’m pretty happy with the resulting mock up and think this will be the way forward in developing this project. Resizing the screen view into different formats works as expected with the Vector image staying crisp (which is more than you can say for the poor four color buttons below it that use a scaled sprite – very fuzzy).

Zulu out.

Unity Audio vs Wwise

To start with I wanted to do a general investigation into Wwise, the integrated audio package for Unity by Audiokinetic. When I started working through it I figured it would be more interesting to look at Wwise in comparison to Unity’s own audio API and mixer components which have been around since Unity 5.

To do that I’m going to compare a game across three different builds. Build one is its original state, with simple scripts that run an AudioSource.Play() method. In the second build I will add another layer of complexity by using the Unity built-in Mixer and see if there are any differences or advantages. Lastly I’ll redo the project with the Wwise API and investigate how that impacts build size and project complexity compared with the previous two builds. Mostly I’m looking for differences in performance, build size, and complexity between the three builds, weighed against ease of implementation and flexibility.

I refreshed an old project called “MusicVisualiser” that I started for my Five Games in Ten Weeks Challenge. The game is like a singing solar system. There are a bunch of “planets” in the night sky that play a set piece of music when clicked. It’s a really simple concept and project but I think it will work for this comparison as the parameters can be limited to just a few audio tracks but we can play with spacing and roll-off and other advanced audio features.

Let’s have a look at the game first.

These “planets” are simple native Unity sphere meshes with an Audio Source component and a particle system that’s triggered when it’s clicked. You can see in the Audio Source that we are not using a Mixer for Output and all the Audio sources compete for resources and play at their default volume and priority.

The PlayMe script just takes in the AudioSource and plays it:

    public AudioSource my_sound;

    void Update()   // (excerpt; wrapped in Update for context)
    {
        if (Input.GetMouseButtonDown(0))
        {
            RaycastHit hitInfo;
            target = GetClickedObject(out hitInfo);
            if (target != null && target.name == my_name)
            {
                _mouseState = true;
                screenSpace = Camera.main.WorldToScreenPoint(target.transform.position);
                offset = target.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));

                my_sound.Play();   // This is the Audio Component!
                var expl1 = GetComponent<ParticleSystem>();
                expl1.Play();
            }
        }
    }
Pretty simple, right? This is what the project looks like in the Profiler when it’s running and being actively engaged with. At that point two Audio Sources are playing:

This is the build size from the Editor Log with our Audio Files broken out:

Build Report (Audio.Play)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 1.4 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Used Assets and files from the Resources folder, sorted by uncompressed size:
204.3 kb 0.5% Assets/SomethingLurks_AAS.wav
164.5 kb 0.4% Assets/Step2Down_AAS.wav
136.9 kb 0.3% Assets/Underwater_AAS.wav
41.8 kb 0.1% Assets/M1_M12_37_ThumPiano_Aflat1.wav

Unity Audio with Mixer

Now we add in the Mixer component to the project:

Then add a couple of Channels to the Mixer to split the audio between left and right. Then the Audio Sources are dropped into one or another of the Mixer Channels:

Adding the Mixer as the Output source

Next, for a bit more interest, I added some effects in the Mixer. Here is where we see the advantages of using the Unity Mixer. Sounds can be manipulated in complex ways and the Audio Output chain can be defined with presets and levels etc.

If we have a look at our Profiler while running with the new component we cannot really see any great differences. The ‘Others’ section of the CPU Usage is a bit higher and the Garbage Collector in the Memory is pumping regularly but the Audio Stats look pretty much unchanged:

Profiler Mixer

Mind you this is a fairly low utilising game so we might get wildly different stats if we were really putting the system under the pump but I’m not performance testing here just comparing run states between the two builds.

Next if we build the game and have a look at the Editor Log the only thing that’s changed here is that the “Other Assets” size is a KB higher (Complete size has not been changed):

Build Report (Mixer)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 2.3 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Unity with Wwise

Next we are going to add Wwise to the Project. This is the basic workflow. In the Wwise Launcher we register our project and on the first tab we are presented with three Hierarchies.

Project Audio Explorer in Wwise

The Master-Mixer Hierarchy – does what it says.
The Actor-Mixer Hierarchy – where most of your game audio develops (use the SoundSFX defaults).
The Interactive Music Hierarchy – other stuff we won’t get into.

Events Tab

The next tab along is the Events tab, where you link your audio to game events. You can define your event here (use the default Work Unit).
Once you’ve got the event there you can associate it with the audio in the Action List.

SoundBank Tab – this is the bit that gets imported into your project.

Next you generate a SoundBank with Wwise that includes your audio and the code for the API calls to trigger sounds. You export that SoundBank into your game engine and link up the calls in your code.

To Get Started with Wwise

To get started make an account with Audiokinetic and download the Wwise Launcher. The Integration package for Unity can be downloaded and installed directly from the Wwise Launcher.

In the Wwise Launcher there is a WWISE tab that you can install and start the application from. Once you open it up you need to register your project within the launcher so Wwise can track you 🙂 (click on the key icon next to your Wwise project and select ‘Register your Project to obtain a License’). Wwise will run in Trial mode which restricts the SoundBank content to 200 media assets and cannot be used for Commercial purposes. Pricing for licensing is on their site but this is not a sales piece so if you want it you can look it up.

There are a bunch of plugins by Audiokinetic and their partners available, and also Community offerings like AudioRain, a dedicated rain synth with 60 procedurally generated presets for rain. What’s not to love about that!

There is a Wwise SDK for authoring your own plugins and a Wwise API which allows you to integrate into any engine, tool or application.

Audiokinetic do certifications that cover audio integration workflows, mixing virtual soundscapes, working with sound triggering systems, and performance optimisation:

Basically in Wwise you let the Launcher do all the setting up for you. You will install the Wwise binaries from here and manage your platform versions. Projects can be integrated here and if you don’t have the necessary plugins installed the Wwise Launcher will install them for you.

Integrating the MusicVisualiser project with Wwise.
This is how big the Wwise Integration packages and binaries are.

That’s basically it for the set up of Wwise and Integration with your Project. Next up we will have a look at what this has done to the Unity Console.

Wwise in Unity

First thing we see is a bunch of errors that can be safely ignored. As we did not perform any configuration of our project in Wwise with audio files and events there was no SoundBank to generate yet.

Unity – Initial Errors can be ignored if you have not generated your SoundBank yet.

Back in Unity we have a new tab in our editor. The Wwise Picker tab contains all the elements of the Wwise project that have been imported with the project integration. There is also a Wwise Global Game Object in the Unity Hierarchy and all the Wwise folders in the Assets folder.

Unity Editor
The WwiseGlobal Game Object

Under the Component pull down there is a whole slew of Ak (AudioKinetic) options.

Wwise Components.
Wwise Configuration Settings.

I know there has been a lot of “show and tell” in this post but I’m going to keep going and show the process of importing the audio into the Wwise Project, creating Events, and Generating the SoundBank.

Working in Wwise

In the Wwise Project Explorer I right click on the Default Work Unit and import the audio files that were part of my project. (I’ve stripped the raw files out of my project for now and removed all the Mixer components etc.)

Importing Audio Files into the Wwise Project.
This is what the files look like.
Right click on the file to create a new Event (which can be called in the Unity code).
Here is the event created for “Play”.
And all my “Play” events.

Finally a SoundBank is generated from which the Unity project can access the sound files through the AudioKinetic API.

Generating a SoundBank

Wwise Audio in Unity

When we go back to our Unity Editor and Refresh the Project and Generate SoundBanks we are presented with the following in the Wwise Picker. We can now access these files and drag them onto our game objects directly. It’s that simple. Drag a sound from the Picker onto a Game Object and it automagically creates a component that is immediately accessible from within the editor.

The new audio imported into the Wwise Picker.

Below the Play_Underwater_AAS event and audio file has been added to the Sphere Game Object.

The Triggers, Actions, and Callbacks can all be configured and accessed through the API. In my case I easily integrated the functionality I wanted with only a one line change to my attached PlayMe.cs script that we looked at above. So now instead of my audio coming from the AudioSource component referenced by my_sound, the audio is played by AkSoundEngine.PostEvent.

            AkSoundEngine.PostEvent("Play_Underwater_AAS", this.gameObject);

Actually getting Wwise installed and set up and integrated with my Project was very easy but not without bumps. It takes a very long time for packages to download and I had a bit of trouble upgrading my Wwise Launcher from an old version (it got stuck and I had to remove it by hand and re-install). When I did have issues I got some excellent help from Audiokinetic, and after logging a case was emailed directly by a real person (which honestly was so surprising and wonderful to get that kind of support from a company when I’m on a trial license with no formal support agreement or rights).

So let’s have a look at the differences in performance and package size. The first thing you notice with the Profiler below is that there is very little difference in performance but we can no longer see our audio stats as it’s been abstracted away from the Unity Engine. The Graph still shows the resources being used by Audio and the Total Audio CPU seems to be up to a third lower than the native Unity Audio statistics. It looks like it’s being clamped at just over 1.2 MB instead of regular peaks over 3 MB.

Profiler with Wwise Audio running.

The Build Report is only a couple of MB larger for the total project size:

Build Report
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 0.0 kb 0.0%
Shaders 188.0 kb 0.5%
Other Assets 7.3 kb 0.0%
Levels 38.5 kb 0.1%
Scripts 1.3 mb 3.1%
Included DLLs 3.9 mb 9.7%
File headers 13.2 kb 0.0%
Complete size 40.5 mb 100.0%

Basically a 2 MB difference! The Sounds have been abstracted away in the Build Report and we assume they are now part of “Other Assets” above.

I’m kinda blown away by how little additional file size there is to the build considering the additional libraries, code and available complexity that Wwise adds. There is a plethora of options and effects that we can play with in the Wwise package. It’s a bit like the excitement I got after the install of my first real Audio DAW. The scope is part mind-boggling and part fantastical wonder at where we can go next. (Audio does get me unusually stimulated but that’s to be expected and tempered accordingly.)

The questions I wanted to answer with this whole experiment were: 1. Would including an audio middleware like Wwise make my Project more complex and difficult to manage? 2. Would the added Package make my build much larger? And 3. Would the performance of the Audio package be as good as the simple Unity Audio API? The answers are: No, No, and Yes. So I’m pretty happy with that, and if the cost of the licensed version of Wwise is balanced out against the advantages of using it in the total cost of the Project then I would most definitely one hundred percent go for it.

Preparing to Draw

Hi Gene here…

This post is about preparing for a good drawing. Drawing is the fundamental skill for all (or most) forms of art. I’d like to be better at it but I really only just get by…

But still you have to prepare to do your best otherwise you are setting yourself up to fail. So this is the process or workflow that I find is the best for me. I’ll take you through the process and use a rough example drawing as we go.

A lot of these ideas come from reading Andrew Loomis and Walt Stanchfield. (I cannot recommend Gesture Drawing for Animation enough for reading about drawing rather than actually doing drawing.)

I often think of this quote (well paraphrase…) from Andrew Loomis when I start out to draw something:

“You must have a desire to give an excellent personal demonstration of ‘Ability’ coupled with a capacity for unlimited effort that hurdles the difficulties that would frustrate lukewarm enthusiasm.”

The Idea

To begin with every drawing starts out with a message or purpose or job to do. The Idea or Emotion of the Drawing. First and foremost you have to draw an idea. Every object that you put in your drawing is an elaboration of that idea.
Your idea has to be an action (or verb – a “doing” word) but the vehicles of that action are the things/objects in your drawing. Those things can be a figure, ten figures, a dog, a house, a tree, a swirling galaxy, or whatever.
If it’s a figure then the pose, the anatomical structure, etc. have to portray that idea. In every drawing you have to find that emotion of the idea. It’s a bit of a nebulous concept but I don’t have any other way to describe it.

For example in figure drawing the essence of the idea is all the outward manifestations of that internal emotion. Every moving part and direction portray the motive and mood of the drawing. Your character has to be responding characteristically to some real or imaginary motivation.
To quote Stanchfield:
“These are basic human emotions such as joy, sorrow, anger, tenderness, submission, domination, fear, surprise, distress, disgust, contempt, and shame.”

The second part to this idea is the story: what happens next? There’s no need for a whole story to be crammed into one drawing; all you need is your figure doing something or reacting to something in a “characteristic” way for who they are supposed to be.

Preparation to Draw

As I said before you have to prepare to make a good drawing.
It usually doesn’t just happen if you just start drawing.

This is the best process I’ve found that does just that.

It starts with Mental Preparation or Rough Sketching.
You have to answer these sorts of questions about what you want to draw:
What is the idea?
What is your pose?
Is it the extreme of the action?
Is there an action and a re-action?
What is the visual depth?
Is there a primary and secondary action?
What is the “stage” for the action?
What is the anticipation? (What is just about to happen?)
Will you use caricature?
What details will you include?
What objects will you use?
Do the objects have a texture?

Once you have worked your way through those questions try starting your first drawing.

This one simplifies your idea and starts nutting out the technical execution.

This is the first sketch for my example drawing.

What is the idea? Piano Player immersed in playing. The idea is total absorption in the music.

What is your pose? Sitting one leg up tapping – hands flying. I want every action to be reinforcing that one-ness with the music.

Is it the extreme of the action? Not in the first sketch I did – that right hand could be up higher with the fingers poised like an eagle about to strike. He is supposed to be immersed in the action so his head could be down further or looking at the sky. The left leg is supposed to be horizontal and then on tip toes. Maybe it should poke out more to the left so you can see that outline instead of being hidden in the foreshortening.

Is there an action and a re-action? Not really – that right arm really needs to look like it’s at the top of its upward trajectory and is about to slam down. The shoulder could be either more hunched or raised up. The other shoulder needs to be stretched out like he’s really reaching for the low note. The tapping foot needs to be up as well and just about to come down. The left foot needs to be jittering about and only just holding his balance on that stool.

What is the visual depth? In the sketch it’s quite shallow. No background and a very close middle ground.

Is there a primary and secondary action? The primary is that hand. The secondary is the repeat of that in the foot and the hunching or lifting of the body.

What is the “stage” for the action? Is this a bar in a western or a jazz club or a luxury penthouse or a garret? I think I’ll go for a down and out garret. A total slum of a place that he is escaping with the music. I think I will change the format from landscape to portrait to hem him in and make room for a window.

What is the anticipation? (What is just about to happen?) His right hand is just about to crash down and peal out the most amazing lick while the left hand pumps the bass notes. The jittering and stomping foot are like the rhythm section.

Will you use caricature? I don’t think so.

What details will you include? What objects will you use?
Whisky glass! Shadows! Mood lighting. Other people? I don’t think so it’s all about him. Cigarette ash. Old stool and table lamp. Add a broken window and sliced up blinds behind him with a crappy part of the city and the moon overhead.

Do the objects have a texture? Woody piano, dirty floor,

This is where I got the inspiration: the second mannequin looked like he was playing the keyboard.

First Drawing

Aim for Simplification.
Shapes and composition.
What are the most basic shapes? (Try and limit it to three, or six at most.) Use the square, the rectangle, the circle or ellipse, and triangles.
Define the Scale and point of view.
(Which perspective are you using? How many vanishing points?)
Is there a Direction (or Flow)? (Beat or Rhythm.)
Is there Tension? Is it Extreme? (Use extreme poses and balance action and reaction to create tension.)
Where is the overlap? Which objects are in front or behind?
What are the positive and negative shapes?
What is the extreme pose? This usually means the farthermost extension of some pose just prior to a change of direction.
Your drawing should show, in a flash, what is happening in the pose.
Those extremes are vital to explaining the idea.
I’ll paraphrase Stanchfield again:
If the extreme pose is missing or diluted, the drawing will deteriorate from expressive to bland or confusing or boring.
The Silhouette almost explains “Extreme,” if it is not thought of as a tracing of the outside of the figure.
The extreme pose is generated by the forces at play in a gesture (the force and thrust and tension).

This is where I start playing with the basic shapes and setting up the perspective.
(I use Carapace for perspective guides.)
In this version I try to tighten up the figure a bit more.

Second Drawing

The Second Drawing is about mass and the solid and flexible parts of the subject. It’s also about expressing the tension of the idea:
  - Model the figure/character/object roughly.
  - Give it weight and mass (depth and volume).
  - Use planes to provide solidity.
  - What is the weight distribution? (If it’s a figure, how it balances itself due to what it is doing.)
  - Thrust and body language. (It usually requires a limb to be thrust out: a hip thrust, a shoulder shrugged up, knees apart, or arms out.)
  - Tension and counterpart. Whenever one member of the body moves, set up a counter move with its counterpart. Tension is captured when one elbow is working against the other, or one knee against the other. Feet, hands, hips and shoulders should always be in counter position. Never draw one part of the body without drawing the counter move of its opposite at the same time… never.
  - Use the solid and flexible parts of the body as the basis for the angles that portray the action.

Blocking in the main shapes in the body.
Starting to look OK

Third Drawing

Sometimes I’ll do a third drawing (or incorporate it into the second). This one concentrates on the line:
  - Define the line and silhouette.
  - Use arcs to define movement (and follow-through).
  - Split it up into straights and curves. Straights and curves, used logically, can emphasise and clarify the gesture, and can be used for “squash” and “stretch”.
  - Further define the direction of the drawing: make all the elements come together to define the idea.

Concentrating on the lines

Fourth Drawing

This one is all about Perspective and Anatomy. Use it. Tighten it. Get your straights and curves to follow it.
  - Use reference images and get it right.
  - Draw the bones first (in perspective) or a rough skeleton. Get the perspective right now, then do the surface form.
  - Model the muscles or flesh.
  - Focus down on parts.
  - Textures too: which parts have what texture or shading.

These are some references I used
Grey scale painting
Starting to Colour

Fifth Drawing

Draw everything again! But this time pick the best bits of all the drawings.

  - Concentrate on line quality.
  - Concentrate on tone.
  - Concentrate on light.


Finally, a few notes about drawing from life.

Everywhere you go, take a sketchbook.

When you draw, try to first concentrate on colour, then switch to dark and light (tone or texture), then to masses, then to the three-dimensional qualities of things near and far. Now try to see all of those things at once.

Finally, an inspirational quote from Stanchfield:

“Carry a sketch book—a cheap one so you won’t worry about wasting a page. Sketch in the underground, while watching television, in pubs, at horse shows. Sports events are especially fun to sketch—boxing matches, football games, etc. Draw constantly. Interest in life will grow. Ability to solve drawing problems will be sharpened. Creative juices will surge. Healing fluids will flow throughout your body. An eagerness for life and experience and growth will crowd out all feelings of ennui and disinterest.

Where are you going to get all this energy, you ask? Realize that the human body is like a dynamo, it is an energy producing machine. The more you use up its energy, the more it produces. A work-related pastime like sketching is a positive activity. Inactivity, especially in your chosen field, is a negative. Negativity is heavy, cumbersome, debilitating, unproductive and totally to be avoided. Take a positive step today. Buy a sketch book and a pen (more permanent than pencil), make a little rectangle on the page and fill it with a simple composition.”

It starts with Mental Preparation or Rough Sketching.


This is Carapace, a perspective tool designed by Epic Games and made available for free. I find it very useful. The link on their website is broken, but if you search for it, it’s still around. One source you can get it from is here: https://www.florianhaeckh.com/blog/carapace

Unity 2D Curves using Triangles

Hi Xander here….

I know I shouldn’t be spending time doing this sort of stuff when I’ve got games to make, but I got really sidetracked with this little brain boiler. I got the idea while doing some maths research and came across an image of a cat’s cradle spun in a triangle. The way the lines joined made a perfect curve, and I really liked the idea of doing something like that for making custom curves in games. I know the idea is probably not original and there are bound to be better implementations out there, but once my noodle started working on this I got a little obsessed with seeing it through to the end.

I have written before about making curved movement by using sin functions and still think that’s a pretty cool way to do it. You can read about it here: (Fly Birdy Fly! 2D Curved Movement in Unity). But this is a much more intuitive way to get the perfect curve you want and very easy to plot and track the path of movement without having to guess.

This is how it works…

You take the three points of a triangle. I was thinking of something like a cannon shot, or lobbed object, or a flying arrow to start with so I called them Source, Height and Target. You measure the distance between those points and make lines to form a triangle. Then you cut those lines into equal points and start joining one point on one line to another point on the other line all the way down the length. It’s easier to explain in an image:

Building a Triangle and “Cat’s Cradle” lines to make a curve!

Now for the mathy part… Once you have those lines drawn, you use algebra to find the intersection point of each line and the next to get your curved path! Every additional line crosses the one before, and by finding the point where they cross you get a list of points that make a curve.
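To make the construction concrete outside Unity, here is a small language-neutral sketch of the same idea in Python (the function names are mine, not from the game script): split the two legs of the triangle into equal parts, string a line between each pair of points, and intersect each string with the next to get the curve points.

```python
def intersect(a1, a2, b1, b2):
    """Intersection of the (infinite) lines a1-a2 and b1-b2, same algebra as the post."""
    d = (b2[0] - b1[0]) * (a2[1] - a1[1]) - (b2[1] - b1[1]) * (a2[0] - a1[0])
    t = ((a1[0] - b1[0]) * (a2[1] - a1[1]) - (a1[1] - b1[1]) * (a2[0] - a1[0])) / d
    return (b1[0] + (b2[0] - b1[0]) * t, b1[1] + (b2[1] - b1[1]) * t)

def string_curve(source, height, target, n):
    """Build n+1 'cat's cradle' strings across the triangle and return the
    intersection points of consecutive strings (the curve)."""
    # Points marching up the source->height leg and down the height->target leg
    p = [(source[0] + (height[0] - source[0]) * i / n,
          source[1] + (height[1] - source[1]) * i / n) for i in range(n + 1)]
    q = [(height[0] + (target[0] - height[0]) * i / n,
          height[1] + (target[1] - height[1]) * i / n) for i in range(n + 1)]
    # String i runs p[i] -> q[i]; intersect each string with the next one along
    return [intersect(p[i], q[i], p[i + 1], q[i + 1]) for i in range(n)]
```

Incidentally, the arc these intersections trace is the classic string-art envelope of the triangle, which is why moving the Height point bends the path so predictably.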

Simple curves only need a few lines.

Five Lines

The more lines you use, the smoother your curve is…

Ten Lines
Twenty Lines

Start moving those triangle points around and it becomes really easy in the Unity Editor to map and draw custom curves. This kind of blew my mind.

Different Types of Curves

Here we have a number of different curves, all made just by tweaking the positions of the three points of the triangle. I’ve used the intersecting points to draw a parabolic line on the game scene below.

Here are a few of the same images zoomed in (in case you are reading on your phone).

The Code

I’ll put the full script at the bottom of the post but for now I’ll work through the code a little bit.

If you want to copy the script, attach it to the GameObject you want to move (or, if you want to draw lines, to a GameObject with a Line Renderer).

The script has a number of check boxes exposed in the editor which lets you control the movement and drawing functions as well as resetting and applying changes after moving the triangle’s points.

The only other variable you can play with is the timeToHit float. This number controls how many lines are used to create the curve. Remember: the more lines, the smoother the movement, but the higher the processing cost. (That said, I’ve yet to do any serious profiling of the script, and I haven’t found any real performance hits.)

Much of everything else is public so you can see what’s going on inside all the Lists and Arrays.

(Editor View)

Defining the Triangle

First of all we get the positions of the three triangle points and find the length (Magnitude) of the lines between them using normal Vector maths.

Then we divide those lines by the number of strings we want to have (timeToHit) and work out the relative size of each one:

        Vector3 X_line = source - target;  
        X_line_length = Vector3.Magnitude(X_line);
        Vector3 Y_line = height - source;
        Y_line_length = Vector3.Magnitude(Y_line);
        Vector3 Y_Negline = target - height;
        Y_Negline_length = Vector3.Magnitude(Y_Negline);

        X_line_bit_x = (height.x - source.x ) / timeToHit;
        X_line_bit_y = (height.y - source.y) / timeToHit;
        Negline_bit_x = (target.x - height.x) / timeToHit;
        Negline_bit_y = (height.y - target.y) / timeToHit;

Get the Points Along Each Line

Next we iterate through all the points on the lines and make a pair of Lists (one for the forward or positively sloping line and one for the negatively sloped line):

        for (int i = 0; i < timeToHit + 1; i++)
        {
            P_lines.Add(new Vector3(Px, Py, 0f));
            Px += X_line_bit_x;
            Py += X_line_bit_y;

            Q_lines.Add(new Vector3(Qx, Qy, 0f));
            Qx += Negline_bit_x;
            Qy -= Negline_bit_y;
        }

Get Intersection Points

Getting the intersection points was much easier to do in 2D but is totally achievable if you wanted to extend it to 3D. We pass in our start and end points on each line (x and y coordinates) and return the intersection point (and convert it back to a Vector3):

            myPoint = findIntersectionPoints(
                (new Vector2(P_lines[i].x, P_lines[i].y)), 
                (new Vector2(Q_lines[i].x, Q_lines[i].y)),
                (new Vector2(P_lines[bc].x, P_lines[bc].y)), 
                (new Vector2 (Q_lines[bc].x, Q_lines[bc].y)));
            Vector3 myPoint_3 = new Vector3(myPoint.x, myPoint.y, 0f);

(If you want to do more than idly read about this stuff have a look at Math Open Ref for more information on the functions for finding the intersection of two lines. I promise it’s actually really interesting.)

The maths bit:

float P1 = (Line2Point2.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
         - (Line2Point2.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x);

float P2 = ((Line1Point1.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
         - (Line1Point1.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x)) / P1;

return new Vector2(
    Line2Point1.x + (Line2Point2.x - Line2Point1.x) * P2,
    Line2Point1.y + (Line2Point2.y - Line2Point1.y) * P2);
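One edge case worth noting: P1 is the denominator, so if the two lines are parallel it is zero and the division blows up. Here is a hedged Python sketch of the same algebra with a guard added (the helper name is mine, not part of the original script):

```python
def intersect_lines(a1, a2, b1, b2, eps=1e-9):
    """Intersect the infinite lines a1-a2 and b1-b2.
    Returns None when the lines are (near-)parallel instead of dividing by zero."""
    d = (b2[0] - b1[0]) * (a2[1] - a1[1]) - (b2[1] - b1[1]) * (a2[0] - a1[0])
    if abs(d) < eps:
        return None  # parallel (or coincident) lines: no single crossing point
    t = ((a1[0] - b1[0]) * (a2[1] - a1[1]) - (a1[1] - b1[1]) * (a2[0] - a1[0])) / d
    return (b1[0] + (b2[0] - b1[0]) * t, b1[1] + (b2[1] - b1[1]) * t)
```

In the triangle construction the consecutive strings are never parallel (unless the three points are collinear), so the original code gets away without the check, but it is cheap insurance if you let users drag the points around.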

That’s about it for the tricky stuff. There is a function to draw a line along the curved path, and another to move the attached object along the path. Add a few GUI functions for displaying the pretty stuff in the scene view and you are done.
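The movement half is plain waypoint-following: each frame, step toward the current intersection point and advance to the next one on arrival. A rough Python sketch of that loop, with MoveTowards re-implemented by hand (the names are mine, and this runs the whole walk at once rather than one step per frame):

```python
def move_towards(current, target, max_step):
    """2D analogue of Unity's Vector3.MoveTowards: step toward target,
    never overshooting."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= max_step or dist == 0.0:
        return target  # close enough: snap onto the waypoint exactly
    return (current[0] + dx / dist * max_step, current[1] + dy / dist * max_step)

def follow(path, start, step):
    """Walk a point along a list of curve points, one waypoint at a time,
    returning each waypoint as it is reached."""
    pos, visited = start, []
    for waypoint in path:
        while pos != waypoint:
            pos = move_towards(pos, waypoint, step)
        visited.append(pos)
    return visited
```

Because move_towards snaps exactly onto the waypoint when it is within one step, the equality test terminates cleanly; in Unity the same trick is what makes `transform.position == IntersectionPoints[targetreached]` safe to use.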

Moving the Green Sphere

This is an example of the script running in the editor that shows the scene view with the OnGui helper lines and then switches to the game view where I use the function to draw a curve and then move the green sphere along that path.

Full Script:

Here is the full script…enjoy!

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CurveFunction : MonoBehaviour {

    public bool resetMe;    // Use these to manage the screen display
    public bool updateMe;
    public bool drawMe;
    public bool moveMe;

    public GameObject Source;   // The three points of the triangle
    public GameObject Target;
    public GameObject Height;
    public Vector3 source;      // Cached positions of the three points
    public Vector3 target;
    public Vector3 height;

    public float timeToHit;     // Not really a time: how many equal parts to split the triangle's lines into
    public int targetreached = 0;

    public float X_line_length;     // The length of the line between source and target
    public float Y_line_length;     // Length from source to height
    public float Y_Negline_length;  // Length from height to target (negative slope of the triangle)
    public float X_line_bit_x;      // The x and (below) y increments along the rising line
    public float X_line_bit_y;
    public float Negline_bit_x;     // The x and (below) y increments along the Negline
    public float Negline_bit_y;
    public float[] X_line_bit_xs;   // (unused in this version)
    public float[] X_line_bit_ys;
    public float[] Negline_bit_ys;

    public List<Vector3> P_lines = new List<Vector3>();     // A List of points on the rising line
    public List<Vector3> Q_lines = new List<Vector3>();     // Same for the Negline
    public List<Vector3> IntersectionPoints = new List<Vector3>();  // Where two lines cross

    public float Px;        // Shorthand for points on the lines when calculating
    public float Py;
    public float Qx;
    public float Qy;
    public bool isFound;

    public float speed;     // Used for the Move function
    public LineRenderer lineRend;
    public int bc;          // Index of the "next" string when intersecting

    // Use this for initialization
    void Start () {
        source = Source.transform.position;
        height = Height.transform.position;
        target = Target.transform.position;
        Px = source.x;
        Py = source.y;
        Qx = height.x;
        Qy = height.y;
    }

    // Update is called once per frame
    void Update () {
        if (updateMe)   // Recalculate the curve after moving the triangle's points
        {
            getPointsOnTriangle();
            makeLineArrays();
            makeIntersectionPoints();
            updateMe = false;
        }
        if (moveMe)
        {
            MoveMe();
        }
        if (drawMe)
        {
            drawLines();
        }
        if (resetMe)
        {
            ResetMe();
        }
    }

    void getPointsOnTriangle ()
    {
        source = Source.transform.position;
        height = Height.transform.position;
        target = Target.transform.position;

        // Define the lines of the triangle and get their lengths
        Vector3 X_line = source - target;
        X_line_length = Vector3.Magnitude(X_line);
        Vector3 Y_line = height - source;
        Y_line_length = Vector3.Magnitude(Y_line);
        Vector3 Y_Negline = target - height;
        Y_Negline_length = Vector3.Magnitude(Y_Negline);

        // timeToHit is not really a time but an increment of how many times we want to cut the line
        // into chunks to make the lines from. The more lines the better the curve points but more processing.
        X_line_bit_x = (height.x - source.x) / timeToHit;
        X_line_bit_y = (height.y - source.y) / timeToHit;
        Negline_bit_x = (target.x - height.x) / timeToHit;
        Negline_bit_y = (height.y - target.y) / timeToHit;

        // Handy handlers for the x and y values of the source and height
        Px = source.x;
        Py = source.y;
        Qx = height.x;
        Qy = height.y;
    }

    void makeLineArrays()
    {
        P_lines.Clear();    // Start fresh each time so the lists don't grow forever
        Q_lines.Clear();
        for (int i = 0; i < timeToHit + 1; i++)
        {
            P_lines.Add(new Vector3(Px, Py, 0f));
            Px += X_line_bit_x;
            Py += X_line_bit_y;

            Q_lines.Add(new Vector3(Qx, Qy, 0f));
            Qx += Negline_bit_x;
            Qy -= Negline_bit_y;
        }
    }

    public void makeIntersectionPoints()
    {
        IntersectionPoints.Clear();
        Vector2 myPoint;   // It's a bit easier to do the maths in 2D, so convert
        for (int i = 0; i < timeToHit; i++)
        {
            bc = i + 1;    // Intersect each string with the next one along
            if (bc < P_lines.Count)
            {
                myPoint = findIntersectionPoints(
                    new Vector2(P_lines[i].x, P_lines[i].y),
                    new Vector2(Q_lines[i].x, Q_lines[i].y),
                    new Vector2(P_lines[bc].x, P_lines[bc].y),
                    new Vector2(Q_lines[bc].x, Q_lines[bc].y));
                Vector3 myPoint_3 = new Vector3(myPoint.x, myPoint.y, 0f);
                IntersectionPoints.Add(myPoint_3);
            }
        }
    }

    /// Code modified from: https://blog.dakwamine.fr/?p=1943
    /// (Thanks for the leg up!)
    public Vector2 findIntersectionPoints(Vector2 Line1Point1, Vector2 Line1Point2, Vector2 Line2Point1, Vector2 Line2Point2)
    {
        float P1 = (Line2Point2.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
                 - (Line2Point2.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x);

        float P2 = ((Line1Point1.x - Line2Point1.x) * (Line1Point2.y - Line1Point1.y)
                 - (Line1Point1.y - Line2Point1.y) * (Line1Point2.x - Line1Point1.x)) / P1;

        return new Vector2(
            Line2Point1.x + (Line2Point2.x - Line2Point1.x) * P2,
            Line2Point1.y + (Line2Point2.y - Line2Point1.y) * P2);
    }

    public void drawLines()
    {
        Vector3[] positions = new Vector3[IntersectionPoints.Count];
        for (int i = 0; i < IntersectionPoints.Count; i++)
        {
            positions[i] = IntersectionPoints[i];  // Draws the path
        }
        lineRend.positionCount = positions.Length;
        lineRend.SetPositions(positions);
        drawMe = false;
    }

    public void MoveMe()
    {
        if (transform.position != IntersectionPoints[targetreached])
        {
            float step = speed * Time.deltaTime;
            transform.position = Vector3.MoveTowards(transform.position, IntersectionPoints[targetreached], step);
        }
        else if (targetreached != IntersectionPoints.Count - 1)
        {
            targetreached++;    // Head for the next point along the curve
        }
        else
        {
            moveMe = false;     // Arrived at the last point
        }
    }

    public void ResetMe()
    {
        transform.position = source;
        targetreached = 0;
        X_line_length = 0;
        Y_line_length = 0;
        Y_Negline_length = 0;
        X_line_bit_x = 0;
        X_line_bit_y = 0;
        Negline_bit_x = 0;
        Negline_bit_y = 0;
        Px = 0;
        Py = 0;
        Qx = 0;
        Qy = 0;
        P_lines.Clear();
        Q_lines.Clear();
        IntersectionPoints.Clear();
        moveMe = false;
        resetMe = false;
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 140, 20), "Source: " + source);
        GUI.Label(new Rect(10, 30, 140, 20), "Target: " + target);
        GUI.Label(new Rect(10, 50, 140, 20), "Height: " + height);
    }

    void OnDrawGizmos()
    {
        Gizmos.color = Color.red;
        Gizmos.DrawWireSphere(source, 0.2f);
        Gizmos.DrawWireSphere(target, 0.2f);
        Gizmos.DrawWireSphere(height, 0.2f);
        Gizmos.color = Color.green;
        Gizmos.DrawLine(source, target);
        Gizmos.DrawLine(source, height);
        Gizmos.DrawLine(height, target);
#if UNITY_EDITOR
        UnityEditor.Handles.Label(source, "SOURCE");
        UnityEditor.Handles.Label(target, "TARGET");
        UnityEditor.Handles.Label(height, "HEIGHT");
#endif
        Gizmos.color = Color.yellow;
        // Uncomment to see the strings in the editor
        // for (int i = 0; i < timeToHit + 1; i++)
        // {
        //     Gizmos.DrawLine(P_lines[i], Q_lines[i]);
        // }
    }
}

Xander out.

The Coin Flip

Ah, the coin flip! Simple, easy and fun. The mechanism is a platformer mainstay: you run over a spinning coin (it glitters, it calls to you), it pops into the air, and it’s yours!

The Endless Elevator Coin Flip !

This is how we do it…

There is a Rigidbody and Collider on both the coin and the player character. You can clearly see the spinning frame of the coin collider in the .gif above.

The collider acts as a trigger which is being listened for by our script which executes the “pop”.

The coin has a spinning script (and also a magnetic feature as a bonus for later).

The player has a couple of behaviours that handle the trigger and action.

The coin scripts – note the use of the slider to get the spinning speed just right.

Here we have the coin’s Rigidbody and Collider settings:

The Rigidbody isKinematic and the Collider is a Trigger

This is the script we use for spinning:

public class spinCoin : MonoBehaviour {

    [Range(0.0F, 500.0F)]
    public float speed;

    // Update is called once per frame
    void Update () {
        transform.Rotate(Vector3.up * speed * Time.deltaTime);
    }
}

Simple and sweet.

This is the function that that handles the collision and the pop into the air! (It’s part of our character behaviours).

    void OnTriggerEnter(Collider otherObj)
    {
        if (otherObj.name == "Coin(Clone)")
        {
            coins++;    // the running coin total (declared elsewhere in this behaviour)
            var coin_txt = coins.ToString();
            coinsText.text = "Coins: " + coin_txt;
            Rigidbody riji = otherObj.GetComponentInParent<Rigidbody>();
            riji.useGravity = true;
            riji.isKinematic = false;
            riji.AddForce(Vector3.up * 40f, ForceMode.Impulse);
            Destroy(otherObj.gameObject, 0.4f);
        }
    }

First of all we increment our coin total variable and update the screen display.

The mesh for the coin is part of a child component so we need to call the Rigidbody attached to the parent object.

We set useGravity to true so that the coin falls back down after the force is added, and set isKinematic to false so that physics (and its mass) can act on it.

After a very short flight we destroy it (0.4 seconds).
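As a rough sanity check on those numbers (assuming a unit-mass Rigidbody and Unity's default gravity of 9.81, neither of which is stated above), the impulse of 40 gives the coin an initial upward velocity of 40 units/s, so it is still well on its way up when it is destroyed 0.4 seconds later:

```python
impulse = 40.0   # the AddForce impulse from the script
mass = 1.0       # assumption: default Rigidbody mass
g = 9.81         # assumption: default Project Settings gravity
t = 0.4          # lifetime before Destroy()

v0 = impulse / mass               # ForceMode.Impulse: velocity change = impulse / mass
height = v0 * t - 0.5 * g * t * t # basic ballistics: about 15.2 units up when it vanishes
```

The apex would not be reached until roughly v0 / g ≈ 4 seconds, which is why the short Destroy delay makes the pop feel snappy rather than floaty.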

As an added bonus here is the other coin behaviour for when a magnet power up is used in the game.

    private void OnTriggerEnter(Collider col)
    {
        colName = col.gameObject.name;
        float step = speed * Time.deltaTime; // calculate distance to move
        if (colName == "chr_spy2Paintedv2")
        {
            transform.position = Vector3.MoveTowards(transform.position, col.gameObject.transform.position, step);
        }
    }

This is for when we make the collider on the coin really big. If the player gets within range of the collider, the coin moves like a magnet towards him, which is kinda fun when there are lots of coins around and you feel like a millionaire just by standing there.

MagicaVoxel-Blender-Unity Workflow

Hi Trixie here….

We had a good break from the build cycle with the Text Adventure framework and since then have been making lots of fun headway on the main game in development Endless Elevator.

Endless Elevator is, as the name suggests, an endless runner style of game. It’s played in the vertical axis and follows the Good Cop as he scales the heights of an endless building shooting down the bad guys, climbing stairs, and catching elevators.

We have the main game functionality finished to a point so we started working on background objects and some cute little buddies for the Good Cop. It’s puppies…ain’t they cute!

This is not about the puppies though. This is about the workflow we have been using for creating assets using MagicaVoxel and making them game engine ready using Blender before importing them into Unity.

Let’s start with MagicaVoxel: a free 8-bit 3D voxel editor (no commercial license required) that is super awesome. Credits to the software are appreciated (e.g. “created by MagicaVoxel”) – like what I did just there! All the assets for Endless Elevator have been made with MagicaVoxel: the walls, the floors, the furniture, and the characters.

First we model and then we paint in MagicaVoxel. For example this table lamp:

When we are done modelling and painting, we export the model as an .obj file, which also produces a .png of the palette mesh mapping. It’s a bit like using UV Unwrap in Blender, but much harder to manually map or see.

Once we are done with MagicaVoxel, if we want to optimise we import the .obj file into Blender. Blender is a terrific open source 3D modelling (and more) package. There is a trade-off here: we use Blender to lower the poly count on complex objects using the Decimate modifier. This modifier takes a parameter of your vertices (like the angle between edges) and reduces the vertex count by simplifying the model. You see, the problem with MagicaVoxel is that it creates edges from a fixed point, which can make lots of long thin triangles.

Have a look at this model of the lamp imported into Blender:

You can see all the sharp angles of the triangles there. This is how MagicaVoxel works under the hood, and it’s great for the internal workings of that program, but it sucks a bit for complex models that you want to import into a game engine.

This is the Decimate modifier in Blender that we use to simplify this topology. We tell the modifier to use Planar mode (faces) and simplify anything with an angle under 25 degrees.
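For reference, the same Planar decimate step can be scripted with Blender's Python API rather than clicked through the UI. This is only a sketch: it assumes the imported .obj is the active object, and bpy is only available inside Blender's bundled Python, so it won't run elsewhere.

```python
import math
import bpy  # Blender's Python API: only available inside Blender

obj = bpy.context.active_object  # assumption: the imported .obj is selected/active

# Add a Decimate modifier in Planar mode ('DISSOLVE' is the API name for Planar)
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.decimate_type = 'DISSOLVE'
mod.angle_limit = math.radians(25)   # dissolve faces meeting at under 25 degrees

# Bake the modifier into the mesh before exporting for Unity
bpy.ops.object.modifier_apply(modifier=mod.name)
```

Scripting it this way is handy once you have a pile of MagicaVoxel exports to batch-process instead of one lamp.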

We are left with something like this: (below)

This is much simpler and super easy for the game engine to understand and render.

The trade-off here is that when you decimate all the vertices you lose the UV mapping for any paint work you did in MagicaVoxel. These are the limitations of working with awesome freeware. Sure, it’s awesome, but if you shell out a few hundred (or less in some cases) for different voxel modelling software you can get away without having to work around these problems. But welcome to the world of no-budget game making. Hacking through the workarounds is part of the fun. Plus you actually learn a bit while you’re working it out.

So in our game Endless Elevator we use a lot of small (i.e. not complex) models and import them straight from MagicaVoxel, using its paint system and the exported image files to make the materials (the albedo component). If we have more complex models that we want to simplify, like the walls and lifts in the surrounding building, we import them into Blender and do some optimising. Once the complex models have been optimised, we unwrap them and paint the UVs using GIMP. Then, when they are imported into Unity, we either add a material with the coloured UV mask we painted up or use Unity’s in-built colour system for large areas.

There is another problem with using MagicaVoxel to make your game assets: on more complex models the “normals” of faces are often flipped the wrong way round. This one is kind of easy to spot and not much fun to remediate. If you have a look at our character below, you can see his shadow being projected onto the wall behind him.

Oops – he’s got big holes in him. You cannot see it on the model itself, and it’s really only a problem if you are looking for it and using lots of hard lighting. In a 3D model each mesh face has two sides, and in Unity’s default shader only one side (the forward one) is rendered. So when MagicaVoxel flips a few faces here and there (they are usually very small) you get gaps that do not block the light in a shadow.

It’s pretty hard to show in an image, but below is the model imported into Blender, where we can expose the normals (the direction a face is facing!) and see the issue. In this image we have clipped the camera hard so that we can see into the cavity inside the model. Normals show up as light blue lines; you can see a few of them poking the wrong way into the centre of the model instead of the outside. You can play around with the “flip normals” feature in Blender to fix these issues, but it’s a lot of fiddling that frankly I have not had the patience or need for yet!
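Rather than flipping faces one by one, Blender can also recalculate all normals to face outward in one go. Another bpy sketch (again, this only runs inside Blender, and assumes the model is the active object):

```python
import bpy  # only available inside Blender

# Enter Edit Mode on the active object, select every face,
# and let Blender recalculate the normals to point outward
# (the same operation as Mesh > Normals > Recalculate Outside).
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
```

It is not perfect on meshes with internal cavities like these voxel exports, but it usually fixes the bulk of the flipped faces before you resort to manual flipping.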

So these are just a few of the issues and workarounds we use with this workflow – I hope you enjoy reading about it and if you have any questions feel free to comment 🙂

Trixie out.

Unity Hinge Joint

Hi Xander here…

For our game Endless Elevator, which is in development, we have a bad guy who is a knife thrower. I know, nasty. In keeping with the blocky style of the characters in the game, only his throwing arm moves and the rest of him is rigid. We decided to use Unity’s inbuilt Hinge Joint and “spring” feature to simulate the throwing action. It turned out to be really easy to implement but hard to control perfectly. This is the story of how it all hangs together.

This guy below with the creepy eyes and beard is our knife-throwing guy. You can see the top-level empty Game Object called KnifeSpy_Package and two child objects (one for his body mesh, the other for his separate arm). You can just make out the orange arrow of the Hinge Joint near his shoulder, but more on that below.

This is his arm object. He’s holding a knife now… but soon we are going to teach him how to throw it (kinda).

You can see the Hinge Joint attached to the arm object here as a red arc around the Z axis. The arc of the Hinge Joint has been limited to just the angle that he needs to raise the arm and bring it back down in a throwing action.

Here is what the Hinge Joint looks like in the Editor. That Connected Body is the main figure of the character. The arm mesh that this script is attached to also has a Rigidbody component and must be set to “Use Gravity” for the Spring to work.

You can see where we set the limits for the arm axis at the bottom of the component there. We access the Use Motor boolean from our script to turn that feature on and off. When it’s on, a spring winds the arm up to its firing position, and when he throws, the spring is released, shooting his arm back down in a throwing arc.

This is what our script looks like in the editor:

The Target Angle is the height of the arm as it raises to throw. When the Player is in range and we are facing him, the knife is thrown once the arm has been raised above the target angle. We can use this to tweak the throw: you can see in the example below that our arm doesn’t really come up high enough, so we can use this setting to fix what it looks like on the fly.
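One wrinkle with comparing Euler angles like this is the 0–360 wrap: a window such as 355–365 degrees never matches a raw eulerAngles value of 2. Our script sidesteps this because the hinge limits keep the arm around 295, but here is a hedged sketch of a wrap-safe check (a hypothetical helper, not part of the game script):

```python
def in_firing_window(angle, target, width=10.0):
    """True when `angle` lies inside [target, target + width), modulo 360.
    Works even when the window straddles the 360 -> 0 wrap."""
    return (angle - target) % 360.0 < width
```

The modulo folds any angle difference into 0–360, so the comparison is a single subtraction instead of a pair of greater-than/less-than checks that break at the wrap point.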

The ‘X’ is exposed in the editor to help with that process so that we can understand what’s going on with that setting without having to click around in the editor to see it against the arm transform.

The Knife Prefab is the knife mesh and the Knife Transform is an empty game object used to define the spot where we instantiate the Knife Prefab.

The knife is instantiated with some force, and there are booleans to control whether we can shoot and whether the wait between shots has completed.

This is what the script looks like as code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class KnifeHinge : MonoBehaviour {

    private HingeJoint hingeKnife;
    public Vector3 targetAngle;
    public float x;    // Debug in Editor
    public GameObject knifePrefab;
    public GameObject knifeTransform;
    public float force;
    public bool canShoot;
    public bool waitDone;
    private Transform playerTransform;
    public Transform parentTransform;

    // Use this for initialization
    void Start () {
        hingeKnife = GetComponent<HingeJoint>();
        playerTransform = GameObject.FindWithTag("Player").transform;
    }

    // Update is called once per frame
    void Update () {
        if (Mathf.Abs(playerTransform.position.y - parentTransform.position.y) < 1)      // If the player is on the same level as you
        {
            if (Mathf.Abs(playerTransform.position.x - parentTransform.position.x) < 8)  // ...and within 8 units of you on the x axis
            {
                var lookPos = playerTransform.position;
                var rotation = Quaternion.LookRotation(lookPos);  // (facing the player is handled elsewhere)
                if (waitDone)
                {
                    canShoot = true; // canShoot is only true when the Player is in range on the same level
                }
            }
        }
    }

    void FixedUpdate()
    {
        x = hingeKnife.transform.rotation.eulerAngles.x;   // for debugging the angle of the arm and hinge easily in the editor
        if (canShoot)
        {
            // targetAngle.x is set to 295 in the editor; the arm only gets up to about 297 when it is all the way up
            if (hingeKnife.transform.rotation.eulerAngles.x > targetAngle.x
                && hingeKnife.transform.rotation.eulerAngles.x < (targetAngle.x + 10f))
            {
                hingeKnife.useMotor = false;   // release the spring: the arm whips back down
                var theKnife = (GameObject)Instantiate(knifePrefab, knifeTransform.transform.position, knifeTransform.transform.rotation);
                theKnife.GetComponent<Rigidbody>().AddForce(-force, 0, 0, ForceMode.Impulse);
                Destroy(theKnife, 2.0f);
                canShoot = false;
                waitDone = false;
                StartCoroutine(WaitAround(2f));  // shoot every 2 seconds
            }
        }
    }

    private IEnumerator WaitAround(float waitTime)
    {
        yield return new WaitForSeconds(waitTime);
        waitDone = true;
        hingeKnife.useMotor = true;   // wind the arm back up for the next throw
    }
}

This is what the whole thing looks like put together:

As I mentioned above there is a bit of tweaking to get it to look right – but this post is about the process of putting everything together and how the components work to achieve the effect.

This is what it looks like after the tweaking:

I hope you found this interesting – if you did and want to read more about this stuff, I did a post a few weeks back about how we do the guns in Endless Elevator: