A collection of articles, apps, and other digital resources, thematically tied to the subjects of art, design, programming and general philosophy. All content created by "The Imp".

This website does not use cookies, because they are high in sugar and saturated fat. (Yes, they are tasty too, I know, I know...)

ThreeJS hit detection obstacle course tutorial

26/09/2020

Play demo!

(Please note: No mobile support for this app)

ThreeJS is a great tool for rendering 3D graphics in the browser. However, it's not really a fully-fledged game engine. Despite a rich collection of features (positional audio, support for virtual reality, countless loaders), and the extensive examples and utilities included in the ThreeJS repository, anyone who eagerly sets out to use it for creating 3D browser games will likely find that there are a couple of foundational things they need to source elsewhere, or create themselves, to achieve a game-like experience. Please note that this is not to speak ill of the ThreeJS project - it's simply that some things are not within its scope.

When sitting down to create a game a few weeks ago, I found myself building a small collection of foundational utilities like these. Rather than bundle them into a formal framework, I thought it would be more useful to keep these pieces of code loose, construct a little sim that showcases them, and write an article that explains them - the idea being that people can splice them into their own projects granularly, and tweak them as they see fit.

The most notable feature of the system is an environment collision utility.

It also includes:

  • A lazy man's preloader
  • A unified user input abstraction layer (which combines mouse, keyboard and gamepad input into one API)
  • A hybrid state-and-event module intercommunication system.

The code for the whole sim can be found here, in a public repository.

To play the demo, use the button above.

Are you sitting comfortably?

Environment collision system

Detailed physics simulations are quite cumbersome things. They are difficult to program, and computationally expensive to run. While there are a number of physics engines available in JavaScript, I've found them all to be a bit sluggish, and less efficient than I would like.

However, since I started experimenting with ThreeJS, I've found myself wanting to use it to create games and simulations that involve some degree of environmental collision detection. Crucially, my interest has not been to create games that involve a large amount of physics, but just to create games where:

  1. A player cannot walk through walls
  2. A player cannot fall through floors

I've made a handful of attempts at implementing a system that would achieve this behaviour, but none of them have really worked satisfactorily. Until now.

The first section of this article will describe how the environment collision system (ECS) works.

In terms of how the ECS fits into the app architecture, it is a utility class. This is its API shape:
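Here is a minimal sketch of that shape, as reconstructed from the description below (parameter names are assumptions; the real implementation lives in the repository):

```js
// A sketch of the Planeclamp API shape.
class Planeclamp {
  // Takes the meshes to treat as floors, and the meshes to treat as walls.
  constructor(floorMeshes, wallMeshes) { /* ... */ }

  // Transforms a starting position and an intended position into an
  // attenuated, collision-safe position.
  getSafePosition(startingPosition, intendedPosition) { /* ... */ }
}
```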

As you can see, it's a class that accepts two arrays of meshes in its constructor: meshes that should be treated as floors, and meshes that should be treated as walls. Of course, in reality there is no clear distinction between a steep floor and a shallow-angled wall, but for the purposes of the simulation the distinction holds up well, and it greatly simplifies the environment collision system logic.

Once you've constructed an instance of the Planeclamp class, you can then invoke its getSafePosition method, to transform a starting position and an intended position into an attenuated position. Being the discerning reader that you are, you will have deduced that the attenuated position is the intended position, changed a bit if any collisions have been detected by the utility.

This is how it can be used in the game loop, to ensure a player does not pass through walls or floors:
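```js
// Hypothetical game-loop usage - names like `player`, `floorMeshes` and
// `wallMeshes` are assumptions for illustration.
const planeclamp = new Planeclamp(floorMeshes, wallMeshes);

function update(deltaSeconds) {
  const intended = player.position.clone()
    .addScaledVector(player.velocity, deltaSeconds);
  const { safePosition } = planeclamp.getSafePosition(player.position, intended);
  player.position.copy(safePosition);
}
```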

And that's about it! If you would like to use this utility, you can find it in the repository. But if you would like to know more about the logic behind its workings, read on.

The Planeclamp.getSafePosition method works out a safe position in two stages. Firstly, it uses a vertical raycaster to take a look at what is underneath the player, to see if it should stop the player from moving downwards any further. Secondly, it uses horizontal raycasters to see if it should stop the player from moving horizontally. Let's look at the vertical constraint procedure first - it is the simpler of the two steps.
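In outline, it looks something like this (a sketch with assumed variable names; `start` and `intended` are the Vector3 positions passed into getSafePosition):

```js
// Cast a ray straight down from just above the player's intended position.
const downRaycaster = new THREE.Raycaster(
  new THREE.Vector3(intended.x, start.y + 1, intended.z), // origin above the player
  new THREE.Vector3(0, -1, 0),                            // pointing straight down
);
const [groundHit] = downRaycaster.intersectObjects(this.floors);

if (groundHit && intended.y < groundHit.point.y) {
  // The player would sink below the floor, so halt their downward
  // movement by clamping their Y to the collision point.
  intended.y = groundHit.point.y;
}
```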

And that's it for vertical environment constraints. Simples!

The horizontal environment constraint system is a bit more complex. But in its essence, what it does is:

  1. Work out the horizontal direction the player is travelling in. In olde worlde terms, this can be thought of as North, South, SouthEast, SouthSouthWest etc, but in ThreeJS it is represented by a Vector.
  2. Cast a ray in the direction that the player is travelling in.
  3. Use the ray to find out if allowing the player's intended position would cause the player to pass through any of the wall meshes.

And it is at this point that the horizontal ECS becomes more complex than the vertical ECS. With the vertical ECS, if a collision happens, we can just set the player's Y position to the Y position of the point at which the collision happened - effectively halting the player's Y movement. However, if we did this for horizontal movement, it would make for a very frustrating game experience.

If the player was running head-on into a wall and was stopped dead in their tracks, this would be fine. But if the player moved into the wall at a very shallow angle, and merely grazed it, it would appear that they had "gotten stuck" on the wall, and they would find themselves having to reverse away from it, taking care not to touch it again.

What we actually want to happen, is have the player's horizontal velocity attenuated, so that they move along the wall. Therefore, the horizontal ECS proceeds as follows:

  1. Obtain the normal of the surface that was collided with. (For our purposes, a normal can be described as the direction that the wall is facing)
  2. Inspect the difference between the wall normal direction, and the player's movement direction.
  3. Use the difference to work out a safe position: the point in space where the collision happened, incremented by a vector that is horizontally perpendicular to the wall normal, multiplied by the cross product of the player's input direction and the wall normal.

Here is the code that does all this. Note that we actually shoot two rays - one from the player's left side and one from the player's right side - to account for the avatar's width.
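What follows is a sketch of that logic (the full version, with its edge cases, is in the repository; PLAYER_RADIUS is an assumed constant):

```js
const travel = intended.clone().sub(start).setY(0);
if (travel.lengthSq() > 0) {
  const direction = travel.clone().normalize();

  // Two ray origins - one at the player's left side, one at their right -
  // to account for the avatar's width.
  const sideways = new THREE.Vector3(-direction.z, 0, direction.x)
    .multiplyScalar(PLAYER_RADIUS);

  for (const origin of [start.clone().add(sideways), start.clone().sub(sideways)]) {
    const raycaster = new THREE.Raycaster(
      origin, direction, 0, travel.length() + PLAYER_RADIUS,
    );
    const [wallHit] = raycaster.intersectObjects(this.walls);
    if (wallHit) {
      // Slide along the wall instead of stopping dead: move from the hit
      // point along a vector horizontally perpendicular to the wall normal,
      // scaled by the cross product of input direction and wall normal.
      // (This sketch assumes the wall meshes are not rotated; otherwise the
      // face normal must first be transformed into world space.)
      const normal = wallHit.face.normal.clone().setY(0).normalize();
      const alongWall = new THREE.Vector3(-normal.z, 0, normal.x);
      const slideAmount = direction.clone().cross(normal).y;
      intended.setX(wallHit.point.x).setZ(wallHit.point.z)
        .addScaledVector(alongWall, slideAmount);
    }
  }
}
```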

Finally, at the end of the utility function, we return all that may be of interest to the consuming script:
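```js
// The assumed tail of getSafePosition - field names are illustrative.
return {
  safePosition: intended, // the attenuated position
  groundHit,              // the floor intersection, if any
  wallHit,                // the wall intersection, if any
};
```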

That about covers it. It's not a bulletproof system (no pun intended), but it is very good for simple simulations where you just want to circumscribe a player's movements within a static environment.

Known limitations:

  • The hit detection system will not stop moving walls from passing through a stationary player.
  • As we work out horizontal collisions by shooting one ray from the player's left side and one from their right, objects thinner than the player can slip between the two rays and pass through the player.
  • The player can pass through walls if the angle between two meeting walls is less than 90 degrees.
  • It does not account for ceiling collisions, but this could be achieved by shooting the vertical ray upwards when the player is moving up, and downwards when the player is moving down.

Managing controller input

While creating this game system, it seemed to me that the best way to support multiple input devices (while keeping device-contingent conditionals out of the game code) was to create a keyboard event listener, a mouse event listener, and a gamepad event listener, and then position all of these behind a single, consistent API.

This seemed particularly important when considering the fairly perverse differences between the ways that the mouse, keyboard, and gamepad each convey user input:

  • Keyboard key presses convey user input with events, whereas gamepad button presses convey input with a state value that must be manually inspected whenever there is a desire to obtain their state.
  • Gamepad axes and the mouse both represent a value that has a magnitude, but while gamepad axes must be audited manually, mouse movement input is an event.
  • The state of a keyboard key is either down or up, whereas some gamepad buttons are pressure sensitive, and can have a magnitude.
Input source    Control type    Value type    Notification type
Gamepad         Button          Binary        State
Gamepad         Button          Magnitude     State
Gamepad         Stick           Magnitude     State
Keyboard        Button          Binary        Event
Mouse           Button          Binary        Event
Mouse           Track           Magnitude     Event

After some thought, I realised that all inputs, with the exception of mouse movement, could be coerced into both events and state, so that they could be accessed in a flexible but uniform way:
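```js
// The assumed shape of the unified API: every control can be audited as
// state on demand, or subscribed to as events.
inputManager.getState(CONTROL_NAME);                   // inspect right now
inputManager.addEventListener(CONTROL_NAME, handler);  // react when it changes
```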

Now, as the unified API represents many devices, CONTROL_NAME can't refer to a mouse button or a specific keyboard key - the letter K, for example. The approach, then, is to refer to each input by the name of the in-game action that it corresponds to.

To arrive at this nice, neat, single API, the user input is funneled through several layers of processing. Hold on to your hats - here comes a lot of code. But it is pretty simple code.

First, we define constants to represent all types of action that the user can perform in the game. These are the keys that we will use when querying user input.
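A sketch of what these constants might look like (the exact set is in the repository):

```js
export const MOVE_X = 'MOVE_X';  // strafe left / right
export const MOVE_Y = 'MOVE_Y';  // walk forwards / backwards
export const LOOK_X = 'LOOK_X';
export const LOOK_Y = 'LOOK_Y';
export const JUMP = 'JUMP';
export const CROUCH = 'CROUCH';
```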

Next, we map a meaningful name to each of the buttons and axes on each of the device types. The code below shows this button-index-to-button-name mapping for the gamepad type. As keyboard keys already have useful names in event.code, it's only really necessary to do this for the gamepad. But for consistency, I have also written mappings for the mouse and keyboard (not shown).
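The sketch below follows the browser Gamepad API's "standard" layout, which Xbox-like pads report; the names themselves are assumptions:

```js
export const GAMEPAD_BUTTON_NAMES = {
  0: 'FACE_BUTTON_BOTTOM',   // "A" on an Xbox pad
  1: 'FACE_BUTTON_RIGHT',    // "B"
  2: 'FACE_BUTTON_LEFT',     // "X"
  3: 'FACE_BUTTON_TOP',      // "Y"
  4: 'LEFT_BUMPER',
  5: 'RIGHT_BUMPER',
  6: 'LEFT_TRIGGER',         // pressure sensitive
  7: 'RIGHT_TRIGGER',        // pressure sensitive
  8: 'BACK',
  9: 'START',
  10: 'LEFT_STICK_PRESS',
  11: 'RIGHT_STICK_PRESS',
  12: 'DPAD_UP',
  13: 'DPAD_DOWN',
  14: 'DPAD_LEFT',
  15: 'DPAD_RIGHT',
};
```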

The mapping above is specifically for an Xbox-like controller. In a more comprehensive system, we would have maps for other types of gamepad too.

Next, we bind our actions to the buttons and axes of each device with another mapping. This is at the heart of the construct which lets us remain agnostic to the input device - letting us refer to each input control by means of the action that it corresponds to.
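A sketch of such a binding, under the same assumed names (axes 0-3 are the left and right sticks in the standard gamepad layout):

```js
export const GAMEPAD_BINDINGS = {
  [MOVE_X]: { axis: 0 },
  [MOVE_Y]: { axis: 1 },
  [LOOK_X]: { axis: 2 },
  [LOOK_Y]: { axis: 3 },
  [JUMP]:   { button: 'FACE_BUTTON_BOTTOM' },
  [CROUCH]: { button: 'LEFT_TRIGGER' },
};

export const KEYBOARD_BINDINGS = {
  [JUMP]:   { code: 'Space' },
  [CROUCH]: { code: 'KeyC' },
  // ...and so on for the other actions
};
```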

Now that we have all our mappings, the next thing to do is create an input manager class for each device type - a KeyboardListener class, a MouseListener class, and a GamepadListener class. I will leave the implementation of each of these out of this article for brevity, but you can find them in the repository. For now, it's enough to say that each of them extends / implements the AbstractListener class / interface, and each one factors out the inconsistencies between the different device types by means of idiosyncratic implementation code. The AbstractListener class is shown below.
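A sketch of it, with assumed method names:

```js
export class AbstractListener {
  constructor(bindings) {
    this.bindings = bindings;   // action name -> device control
    this.handlers = {};         // action name -> [handler, ...]
  }

  // Return the current magnitude (0 to 1) of the control bound to an action.
  getState(actionName) {
    throw new Error('Subclasses must implement getState');
  }

  // Subscribe to changes in the control bound to an action.
  addEventListener(actionName, handler) {
    (this.handlers[actionName] = this.handlers[actionName] || []).push(handler);
  }
}
```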

Now, we can finally reap the rewards of this coding effort, and create an instance of a master input manager class:
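```js
// An assumed constructor shape: one listener per device, hidden behind
// a single manager.
const inputManager = new InputManager([
  new KeyboardListener(KEYBOARD_BINDINGS),
  new MouseListener(MOUSE_BINDINGS),      // mapping not shown, as noted above
  new GamepadListener(GAMEPAD_BINDINGS),
]);
```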

And now we can code like a king, writing very simple code whenever we want to get axes states (although this type of input typically only comes from a gamepad). All of the variables below will have a float value ranging from 0 to 1.
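```js
// Reading axis state (method name assumed):
const moveX = inputManager.getState(MOVE_X);
const moveY = inputManager.getState(MOVE_Y);
const lookX = inputManager.getState(LOOK_X);
const lookY = inputManager.getState(LOOK_Y);
```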

Very conveniently, we can get the value of a button input in exactly the same way.
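```js
// Reading button state, in exactly the same way:
const jump = inputManager.getState(JUMP);
const crouch = inputManager.getState(CROUCH);
```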

Both of the variables above will hold a value of between 0 and 1. But of course, if the crouch action input is coming from a keyboard, it will be either 1 or 0, as, broadly speaking, keyboard keys don't have pressure sensitivity. And if the crouch action is coming from a pressure-sensitive gamepad button, it will have a float value between 0 and 1. Sometimes having the value come through as a magnitude is desirable - a soft press could initiate a stoop, a harder press a crouch, and a firm press, a crawl. But if you want to treat that input as a binary value, the API provides a method to ensure the value you get is always a binary one:
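```js
// `getBinaryState` is an assumed name for the coercing accessor:
const jumping = inputManager.getBinaryState(JUMP);
const crouching = inputManager.getBinaryState(CROUCH);
```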

The two variables above will now be boolean: true if the magnitude of the pressure-sensitive button is 0.1 or more (i.e. it is at least 10% pressed), and false otherwise.

That is how the API lets you work with input as auditable state.

To interact with keyboard key presses, gamepad button presses, and mouse button presses as events, we use code like this:
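```js
// Assumed event-subscription usage. `player` stands in for a consuming
// game object.
inputManager.addEventListener(JUMP, (event) => {
  if (event.value > 0) {
    player.jump();
  }
});
```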

As mentioned above, mouse movements can't be coerced into either of these two common formats, so they are exposed with their own idiosyncratic API method:
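```js
// An assumed accessor for the mouse's movement since the last frame:
const { deltaX, deltaY } = inputManager.getMouseMovement();
```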

And that is it! Quod erat demonstrandum.

Module-to-module communication and state management

One of the hardest aspects of creating a game without a feature-rich game engine is keeping game objects organised, while still ensuring that they can communicate with one another when they need to. By game objects, I mean everything you might imagine that term to cover: cameras, players, computer-controlled characters, the user interface, lights, terrains, sound emitters, and so on.

In the context of a similar problem - the intercommunication of web components - the mediator pattern has proven to work very well. In such a system, all objects know about only themselves and a single, global mediator object, and they communicate with each other indirectly, by dispatching events to that mediator, and subscribing to the events dispatched to it.

And in the context of web components, another approach is to use a state-centric system, such as Redux.

I spent some time considering whether I could adapt either of these two patterns for game object intercommunication, and if so, which would be the best choice.

A state-based system is good because it ensures there's no duplicated state, and no repetitive event-to-state procedures inside each component - such procedures live in reducers instead.

On the other hand, state-based systems may not be appropriate for simulations. A simple 3D simulation, unlike a simple (or even a complex) user interface, has a phenomenal amount of state to it, which also updates far more frequently. Even a single object in a 3D simulation has three numerical values each for scale, rotation, location, translation, and velocity. Moreover, it's not uncommon for many of these values to change with every frame.

Consider: 60 frames a second * 100 game objects * 6 object properties * 3 dimensions = 108,000 value updates per second!

The idea of trying to run a reducer function 108,000 times a second seemed very daft, so using a Redux-like state system to represent every aspect of the simulation smelled like a bad path to start down. And yet I liked the idea of having some things stored in a common state bank, so that I could avoid duplicated state and duplicated reducers. So I decided to use both patterns, and work with a hybrid object that acts both as an event mediator and as a Redux-like store.

It works like this...

First up, let's write the store. It's actually very simple to create a store, and notably, the code is not dissimilar to a textbook mediator.

Full implementation of a redux-like store
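A sketch of the hybrid, with assumed names throughout - it behaves as a mediator (subscribe to events) and as a Redux-like store (dispatch events through a reducer, subscribe to state changes):

```js
export class Store {
  constructor(reducer, initialState = {}) {
    this.reducer = reducer;
    this.state = initialState;
    this.eventSubscribers = {};  // event type -> [handler, ...]
    this.stateSubscribers = [];  // [handler, ...]
  }

  dispatch(event) {
    // Mediator half: notify anyone listening for this event type.
    (this.eventSubscribers[event.type] || []).forEach((fn) => fn(event));

    // Store half: fold the event into the state via the reducer.
    const newState = this.reducer(this.state, event);
    if (newState !== this.state) {
      this.state = newState;
      this.stateSubscribers.forEach((fn) => fn(this.state));
    }
  }

  subscribeToEvent(type, handler) {
    (this.eventSubscribers[type] = this.eventSubscribers[type] || []).push(handler);
  }

  subscribeToState(handler) {
    this.stateSubscribers.push(handler);
  }

  getState() {
    return this.state;
  }
}
```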

We can create an instance of it like so:
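```js
// The reducer and the state shape are defined a little further down.
const store = new Store(reducer, { player: { beersDrunk: 0 } });
```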

Now let's write some Redux-like constructs so that we can use the store in the classic Redux fashion:

Store actions
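An assumed action type and creator, themed to match the bar-room example used later in this section:

```js
export const PLAYER_DRANK_BEER = 'PLAYER_DRANK_BEER';

export const playerDrankBeer = () => ({ type: PLAYER_DRANK_BEER });
```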

Now is probably a good time to create a player object too. This will contain methods that dispatch events to the store.

Player object
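A sketch, assuming the store instance is passed in:

```js
export class Player {
  constructor(store) {
    this.store = store;
  }

  drinkBeer() {
    // Dispatching sends the event to the mediator half *and* the reducer.
    this.store.dispatch(playerDrankBeer());
  }
}
```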

Continuing with our Redux constructs, let's define a bunch of selectors for examining the store state.

Store selectors
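Assumed selectors for the example state shape:

```js
export const selectBeersDrunk = (state) => state.player.beersDrunk;
export const selectPlayerIsDrunk = (state) => selectBeersDrunk(state) >= 3;
```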

Finally, let's create a reducer for converting events into state.

Store reducer
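A sketch that folds the example event into state:

```js
export const reducer = (state, event) => {
  switch (event.type) {
    case PLAYER_DRANK_BEER:
      return {
        ...state,
        player: { ...state.player, beersDrunk: state.player.beersDrunk + 1 },
      };
    default:
      return state;
  }
};
```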

Following so far? It's a pretty typical Redux setup. But with the store implementation I've used, we can respond to both state changes and events. Responding to events instead of state is useful if we want to adjust an object that is not fully represented by data in the store.

Using the mediator/store hybrid: Responding to an event
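A sketch - `bartender` (and `dave`, below) stand in for other game objects:

```js
store.subscribeToEvent(PLAYER_DRANK_BEER, () => {
  bartender.say('Another one already?');
});
```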

And in addition to responding to events, we can still get the net result of all the player's actions as state (using DRY, reusable store selectors, of course). Housing the novel state of a sim object, such as a player, in a global state object that is separate from the player is useful, as it allows entities other than the object in question to easily inspect that state too - without compelling the developer to perpetually come up with ways of giving object A knowledge of object B, while avoiding the creation of a proverbial "big ball o' mud".

Using the mediator/store hybrid: Responding to a change in state
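A sketch, reusing the selector defined earlier:

```js
store.subscribeToState((state) => {
  if (selectPlayerIsDrunk(state)) {
    dave.say('Steady on, pal.');
  }
});
```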

Indeed, the code above doesn't even have to reside in the player object, the bartender object, or the dave object. It can be fully extracted and placed in a separate "Player consumption supervisor" file.

In the case of the hit detection obstacle course we have here, I use this hybrid system in the following way:

Amongst other things, events are sent when:

  • An asset loads
  • All assets finish loading
  • The simulation starts
  • A player crosses a certain point in the terrain
  • The player jumps

The actions that are triggered in response to events include:

  • Add a path to the list of loaded assets
  • Switch the UI from the load screen to the start screen
  • Change camera
  • Load a bunch of meshes to make the environment look different
  • Load a bunch of meshes to make the player's movement constricted by new geometries
  • Change the ambient audio

Amongst other things, the global store state is used to:

  • Deduce which UI overlay to show
  • Deduce if debug visuals are enabled or disabled
  • Deduce which camera is in use
  • Deduce if the game is paused or not

Loading assets

ThreeJS provides a variety of loaders for loading images, sounds, and meshes. They are duly flexible, letting you define what should happen when an asset starts to load, loads a chunk of data, fails to load, or finishes loading all of its data successfully.

However, I've found that when using these loaders directly, a pattern can develop of having a dozen lines of code for every image, sound, or mesh (or at least a couple of quite long lines, if you omit the callbacks). My preferred solution is to define an array of relative asset paths, and then write and leverage a custom class that maps over all of these paths in one go, converting each into a ThreeJS image, sound, or mesh, and notifying you when all are ready to use. In other words, it's a preloader utility.

It looks like this:

Create an assets list file:
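```js
// An assumed assets list - the paths are illustrative.
export const ASSET_PATHS = [
  'models/garden.glb',
  'textures/wall-lines-bump.jpg',
  'sounds/ambience.ogg',
  'sounds/fountain.ogg',
];
```

Let the loader run:

```js
// Assumed usage: `MediaManager` maps over every path, converts each into
// a ThreeJS image, sound or mesh, and resolves when everything is ready.
const mediaManager = new MediaManager(ASSET_PATHS);

mediaManager.loadAll().then(() => {
  startSimulation(); // every asset is now safe to reference
});
```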

Now whenever you want a file, provided you have access to the mediaManager object, you can just reference it by name, and you know it has already loaded.

Obtaining a file from the mediaManager
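An assumed accessor shape:

```js
const ambience = mediaManager.get('sounds/ambience.ogg');
```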

In some cases, the opposite approach can be advantageous: instead of making the user wait for everything to load before booting the app, the app is booted, and assets are loaded only as they become needed. This approach is often appropriate for news websites, where publications compete aggressively for the top spot in the Google search results, and cater to users who expect the page to load in milliseconds. In the context of games and simulations, however, this kind of approach is less common, and can cause the code to become very complex quite quickly - it was deemed unnecessary for this particular app.

Effects

The lines running over the surface of the walls look like they are inset into the geometry of the shape, but this is just a lighting effect achieved with bump maps. It's very easy to do in ThreeJS.
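A minimal sketch, assuming the two textures are already loaded:

```js
const wallMaterial = new THREE.MeshPhongMaterial({
  map: wallTexture,
  bumpMap: wallLinesTexture,
  bumpScale: -0.02, // negative reads as inset - see the note below
});
```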

Note the bumpScale property. We can reduce the magnitude of the effect by bringing this number closer to zero. If it is a positive number, the effect is an outset effect, as opposed to an inset one.

For the portal at the end of the course, I wanted to do something to make it look cool. I could have used a custom shader, or geometry keyframes to create ripples, but I wanted to keep it as performant as possible. Instead, I opted to have a texture that rotates and scales over time.

The code for adjusting the texture over time is as follows:
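```js
// A sketch: `delta` and `elapsed` come from the game loop clock, and the
// texture's wrap modes are assumed to be THREE.RepeatWrapping.
function updatePortalTexture(delta, elapsed) {
  portalTexture.center.set(0.5, 0.5);      // rotate and scale about the middle
  portalTexture.rotation += delta * 0.5;   // a slow, constant spin
  const scale = 1 + Math.sin(elapsed * 2) * 0.1;
  portalTexture.repeat.set(scale, scale);  // a gentle pulsing zoom
}
```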

Creating a more naturalistic environment

At the end of the course is an area that showcases all of the features in the context of a more naturalistic environment. Having played a certain N64 game lately, I wanted to create something somewhere between serene and spooky... but it ended up just being spooky, I think. Ah well, what are you gonna do.

I used Blender to model the environment. The first thing to do was to create the ground and the surrounding cliffs. I made the ground uneven, to showcase the environment collision system's ability to handle complex floors.

I modelled a building to serve as a feature. It's a basic cuboid shape with a few cuts and extrusions.

I decided that I'd spend a little while creating a custom texture for this mesh. After unwrapping it into "islands", I exported the islands map to a file, and then assembled a collage of images over these to effectively "paint" the house. I used Krita for this. The interface is a lot nicer than Gimp's, but to me it still feels clunky and unintuitive compared to Photoshop. Maybe one day Adobe will release a native version for Linux. We can only live in hope.

The building, as it appears in the simulation. It's not going to win any art and design awards, but I think it looks decidedly "OK".

For the fence next to it, I used a cunning trick to create a mesh from a bitmap.

First, take a black and white image of a fence, and increase the contrast, or apply a threshold filter, so that it is two-tone. Next, convert the bitmap to a vector. This can be done using Photopea (a great tool for editing images in the browser), or Inkscape (another Linux graphics editing program with a terrible UI).

Now that we have an SVG of our fence, we can import it into Blender. As SVGs are 2D assets, it is flat when first imported.

By converting the SVG shape from a curve to a mesh, and then extruding it, we give the fence some solidity. The extrusion is exaggerated here to show the depth more clearly.

Now we move it into position and duplicate it, and BAM! What a fence!

To keep the player from falling out of the game world, the last thing I do is create a perimeter fence mesh. I give this mesh a name in Blender, so that I can refer to it in the app code and add it to the walls array, as sketched below.
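In sketch form (names assumed):

```js
// Find the perimeter mesh by the name given to it in Blender, and
// register it with the collision utility.
const perimeterFence = environmentScene.getObjectByName('PerimeterFence');
walls.push(perimeterFence);

const planeclamp = new Planeclamp(floors, walls);
```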

Lastly, a word about audio. The garden has a couple of positional audio sources: a general ambient track (crickets, plus a wolf and an owl sound that I overdubbed for effect), and a trickling water sound for the fountain in the corner. To smooth the abrupt transition when either of these looped sounds ends and restarts, I use the old blending trick on each:

  1. Apply a fade out to the last few seconds of the audio.
  2. Move the first few seconds of the audio to the end, and apply a fade in.

This setup is shown below in Audacity.

That's it! Corrections, questions, and advice are all welcome.
