A collection of articles, apps, and other digital resources, thematically tied to the subjects of art, design, programming and general philosophy. All content created by "The Imp".

This website does not use cookies, because they are high in sugar and saturated fat. (Yes, they are tasty too, I know, I know...)

Nimu

03/02/2019

Image: The finished application as rendered in the browser, but from an angle that is only made possible by removing positional and rotational camera restrictions through an edit of the app's configuration file. The interior is fully enclosed, but we can see into the room as threejs doesn't render the reverse side of a mesh by default.

The first challenge

Sometime a couple of months ago, I once again became acutely interested in modelling 3D objects with Blender. While working on virtual sculptures of figures that were considerably more sophisticated than any I had created before (yet still, in comparison to the work of many a talented modeller out there, quite amateur), I began to wonder if I could incorporate this new output into a native web app - naturally, with the help of the widely venerated threejs library. In particular, having done more work with armatures, I began to wonder: while a rigged mesh can have its vertices deformed through the manipulation of armature bones, would it be possible to let a user manipulate a mesh rendered in the browser by interacting with its armature bones via the mouse? This is something I hadn't seen examples of before.

Fast forward a few weeks, some spare time, a bunch of meal prep sessions, a couple of sculpted meshes and a few thousand lines of code, and we have a finished app. It's a diorama, depicting the face of a pitifully endearing young woman that can be pinched, grabbed and smooshed, much to the delight of anyone with a mouse and a few idle minutes that cannot be occupied with something useful.

The second challenge

I was so jubilant at the outcome, and so confident that everyone in the world would find this immensely entertaining(!), that I decided I would publicly host this app on its own domain. Being a money-loving mule, I also decided that I would do everything I could to capitalise on such a fantastic invention, and incorporate some kind of advertisement-based revenue system. My knee-jerk reaction was to reach for a Google AdSense integration. I could register the app with the service, delineate some regions for adverts on a 2D layer that could be laid over the 3D viewport, and let Google do its monetisation magic. But then I had a better idea: what if I could show ads within the virtual 3D space of the diorama itself? That would be much more interesting!

After some consideration, I decided it was feasible. I knew the following things to be true:

  • HTML forms can be used to upload images and related metadata.
  • Server side code can be used to process and store files and metadata.
  • 3D meshes can be skinned with any kind of image.
  • Most payment gateways make it relatively easy for any web app to take money.
  • Most payment gateways can be leveraged in such a way that they trigger actions on an app's server when a payment completes. (Such as an action that moves an uploaded image from an area where it lies dormant to an area where it is actively used to texture a 3D mesh.)

But I was cognizant of a few threats:

  • Would it really be possible to hot-swap texture images when new ones were uploaded by site visitors? Is the exported mesh in a format that's amenable to edits? Or, potentially even more suitably, does the collada format I was using tie textures to faces via links and references rather than through the embedding of images? This would remove the need to make edits to the collada mesh file itself.1
  • Considering that the images would be updated regularly, could I ensure that the images would not be (over-aggressively) cached by the browser, should someone visit the site more than once?2

Once again, I found that it all worked out with good old-fashioned dogged perseverance.

In the article that follows, I'll briefly run through the techniques I used to get all the pieces to fit together. I'll touch on Blender modelling, web app build tools, threejs, a tiny, 50-line custom web component technique, and server-side code. But before all that, take a look at the app, and familiarise yourself with how it behaves.

Above: How the app looks when it first loads in a desktop browser. The character is prominent in the foreground, and a large number of free ad slots are available at this particular point in time. (And perhaps for some time to come!) The slot on the top right has been taken...by me.

By moving the mouse around with the middle button depressed, the visitor can rotate the camera around the figure. This lets them get a better look at the figure, and also see the adverts that have been submitted to the other poster slots. (On a touch screen device, the camera can be rotated with a two-fingered swipe gesture.)

By using the left mouse button to click and drag (or, on a touch screen device, a single-fingered swipe gesture), the site visitor can grab a part of the face and move it around. If they have enabled "freeze mode" by toggling the snowflake button in the toolbar, any part of the face that they move will stay where it is, instead of springing back to its original position, allowing the user to create composite gurns, grimaces and pouts. This concludes the tour of the main feature. Now for the UI...

The following screenshots show the journey a user goes on if they decide they want to upload an image. Like all good user interfaces, it should be quite self-explanatory. I decided to allow users to upload one image for free, for today's date, to each slot, if it was not already engaged. I theorised that this would have the following effects:

  • It will stop people from feeling apprehensive, like they are getting a bad deal, if they only want to have a bit of immediate fun and upload something now, for less than one day.
  • It will encourage people to purchase adspace, as they get to try before they buy.
  • It will yield a more varied collection of submissions, as people will be able to upload daft images on a whim.
  • It will encourage people to purchase adspace, as, hopefully, the site will look less barren as a result of permitting a few free uploads.

"Wow you talk a lot", I hear you thinking. I know - you want to just try it out. Use the link below to visit the live app.

Visit the app online

Modelling

I created two 3D meshes using the Blender app. For the uninitiated, here is a good definition of "mesh", in the context of 3D modelling, courtesy of Wikipedia.

"A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling."

If you have ever created vector-based 2D graphics before, and are unclear on what a mesh is, it may help to think of a mesh as the 3D equivalent of a 2D vector graphic. The only essential difference is that a mesh is constructed across three dimensions, rather than two. They both share the same building blocks, however. Points in space (also known as vertices) are the most fundamental element. These points may then be joined together by marking lines. Lines may then be joined together by marking faces.

Just as with any 2D vector graphics application, you may place and move each of these things manually (typically via mouse input), or you can use tools, filters, and algorithms built into the editing program to move vertices and edges en masse, which makes the process faster, and can lead to superior outcomes. For instance, Blender has a mirror modifier that can be initialised at a specific position and along a specific axis. This is like Adobe Illustrator's reflect tool, in that it causes the software to create a mirrored copy of all the geometry that you manually create on the input side. However, Blender's modifiers are actually somewhere between this and the adjustment layers that you can use in Photoshop - the "copied" data is virtual, meaning that the mirrored side will retroactively update as you make adjustments to the input side.

Above: Blender's 4-up view helps you position aspects of the geometry correctly. In this particular screenshot, the vertices and lines are hidden, and a smoothing modifier is applied.

The armature

Another concept that has been around in the 3D modelling world for donkey's years is that of the armature. (This is sometimes also known as a skeleton, a rig, or a bone system.) Once you've created a 3D sculpture, integrating it with an armature is probably the most straightforward and effective way of giving it the ability to exhibit movement.

In essence, an armature is a series of lines (aka bones) that the mesh artisan manually places, which roughly conform to the general shape of the subject that they have sculpted. Once created, and linked to the main mesh, whenever a bone is moved, the software uses mathematical algorithms to calculate and apply a transformation to the main geometry's vertices, based on the new positions of the bones. In other words, if you move a bone that's positioned within a character's arm 3 units to the right, all the vertices that make up the arm will move 3 units to the right. In the context of 3D animation, a bone is basically just a dirty great handle that allows you to move a ton of vertices at once.

A more complete character mesh. This time, the lines and vertices are shown. The red lines indicate armature bones.

Here, the bones are shown more prominently. Notice that there are bones for the head, the neck, the eye brows, the cheeks, the top lip, the bottom lip, the nose, the ears, and for each clump of hair.

You might be wondering: "If bones deform a mesh by moving the vertices, doesn't that mean there will be a harsh line between the vertices that are controlled by a bone and the vertices that are not?" As a highly astute reader, you are correct. Or at least, this would be the case, if it were not for bone-to-vertex influence ratios - another well-established 3D modelling convention. Blender has a "Weight painting" mode that allows you to express this influence - the influence that each bone should have on a vertex - by means of additive and subtractive 3D mark-making.

In the images below, you can see how I've expressed that movements to the nose bone should affect the tip of the nose the most, and the area where the nose joins the plane of the face to a lesser extent. You can also see how movements to the cheek "bone" should affect a pork-chop-shaped region the most, and virtually the whole side of the face, to a lesser extent.

Once you have exported your model from Blender, and imported it into an application-rendering environment, such as threejs or the Unity game engine, the accuracy of the interpretation of your weight painting is subject to the environment's bone-to-vertex weight-handling algorithms. That is to say, the mesh that you export just contains weight data - it is the environment software that is responsible for conveying the subtleties you have painstakingly crafted.

Lastly, here we have the mesh texture (below). Essentially, this is what lets you specify which parts of your mesh are which colour. The image on the left is the result of an "unwrapped" mesh export from Blender. The image on the right is the same graphic, after being painted on in Krita, a 2D raster-based image editing program. Once it's been coloured in, you can re-import it into Blender, and Blender will re-wrap it around your mesh, giving it bright brown eyes and rosy cheeks. The painting I've done isn't anything special. Just a bit of flat colour was enough to get the right aesthetic in this case. As you can see, it doesn't matter if you go over the lines and get colour in the "void" areas.

The interior

With the figure complete, it was now time to create a basic 3D backdrop. The image below shows an orthographic projection of the interior that I put together. It's a lot simpler than the character. Just a couple of rooms, some notice boards, and some steps. Notice the 32 poster rectangles that appear in the top right region of the mesh. These are the geometries that will be textured with the images that site visitors upload.

The code

With the two main assets prepared, the time came to get down to some programming. The following list outlines the technologies that the app rests upon. As the project was initially a playful experiment, I did not, in reality, sit down and create this list before I began writing code. I was a little bit naughty, and let it grow organically, bringing each of these into the mix when a requirement emerged.

  • Threejs: A javascript library and API that is used to create and display animated 3D computer graphics in a web browser. This is actually an abstraction layer that lets you use WebGL more easily. WebGL is a browser-based implementation of an OpenGL-like API.
  • Tweenjs: A javascript utility that makes it quite easy to interpolate between two specified values over a specified duration. This was ideal to use in the context of the threejs rendering loop. I was also able to cunningly use the TweenJS Bounce.Out timing algorithm to create a great jelly-like wobble effect when a smooshed part of the face was released. (See here for more about tweenjs timing functions, and see the sketch after this list for a flavour of how such a tween is set up.)
  • Laravel: I initially didn't want to use a large server-side framework, but then, my desire for a solid ORM-oriented database layer, and expressive console commands to run tasks that could publish posters at scheduled intervals, delete posters, cancel orders and the like, pulled me towards Laravel.
  • PHP Imagemagick library: This was used to process uploaded images, and superimpose dynamic slot-price costs, which would be read from the database, onto "Your ad here" poster template images.
  • Paypal checkout: A client-side, asynchronous payment gateway system from Paypal. I also used the Paypal REST API to trigger server-side actions (storing essential transaction info, such as whether the payment completed successfully and was paid in full, and publishing a poster image) once it presented valid payment information.
  • Gulp: To compile SCSS, and concatenate javascript. I think that Gulp is a bit old-hat now. A module-loader system such as webpack is a bit better for bundling javascript, and gulp itself is an unnecessary layer of abstraction on top of the already adequately user-friendly nodejs API. I struggle to remember why I chose to use gulp in this project.
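As mentioned in the Tweenjs entry above, the wobble effect relies on a bounce easing. Here is a minimal sketch of the kind of tween involved, assuming the tweenjs (TWEEN) library is loaded; the bone and restPosition names, distances and duration are illustrative rather than the app's actual values.

    // When a grabbed bone is released, tween it back towards its rest position
    // with a bouncy easing, giving the jelly-like wobble.
    var from = { x: bone.position.x, y: bone.position.y };

    new TWEEN.Tween(from)
        .to({ x: restPosition.x, y: restPosition.y }, 800)
        .easing(TWEEN.Easing.Bounce.Out)
        .onUpdate(function () {
            bone.position.x = from.x;
            bone.position.y = from.y;
        })
        .start();

    // Elsewhere, inside the threejs rendering loop:
    TWEEN.update();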

The threejs scene

Threejs is the backbone of this application, as it is the bit that lets us render 3D graphics in the browser. Code for a basic threejs scene will do the following:

  • Create an HTML canvas element on the page, into which the 3D graphics will be rendered.
  • Create (or in our case, import) mesh object/s. These are the 'things' in your scene that you will see as walls, people, boxes, cats, etc.
  • Create light object/s, to illuminate the scene, so that you can actually see your mesh objects.
  • Create a renderer object, to render all of the stuff mentioned above.
  • Create a camera object, to let the renderer know the angle and position to render the scene from, along with some other parameters, such as field of vision etc.
  • Create a scene object, to hold and compose all the stuff mentioned above.
  • Set up a rendering loop that will be called over and over again. This is what creates the illusion of motion - it's the effect of the threejs library repeatedly redrawing the scene into the canvas element, dozens of times a second.

The code snippet below is an example of a basic threejs set-up such as the one described above.
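Something along these lines - a condensed, slightly simplified sketch of the linked example (it uses MeshNormalMaterial, so no lights or texture files are needed; consult the source below for the exact, up-to-date API):

    var camera, scene, renderer, mesh;

    function init() {
        // Camera: field of view, aspect ratio, near and far clipping planes
        camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 1000);
        camera.position.z = 400;

        // Scene: holds and composes everything that will be rendered
        scene = new THREE.Scene();

        // Mesh: a simple cube (in the real app, meshes are imported instead)
        mesh = new THREE.Mesh(new THREE.BoxGeometry(200, 200, 200), new THREE.MeshNormalMaterial());
        scene.add(mesh);

        // Renderer: creates the canvas element and draws into it
        renderer = new THREE.WebGLRenderer({ antialias: true });
        renderer.setSize(window.innerWidth, window.innerHeight);
        document.body.appendChild(renderer.domElement);
    }

    function animate() {
        // The rendering loop: redraw the scene dozens of times a second
        requestAnimationFrame(animate);
        mesh.rotation.x += 0.005;
        mesh.rotation.y += 0.01;
        renderer.render(scene, camera);
    }

    init();
    animate();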

Source: https://github.com/mrdoob/three.js/blob/master/examples/webgl_geometry_cube.html

Live example: https://threejs.org/examples/#webgl_geometry_cube

This basic set-up is what underpins the majority of threejs apps. However, the simulation portion of my application needs to do more than just display a rotating cube, so a more structured architecture is necessary. My app would need to:

  • Import previously-created meshes.
  • Set up event listeners to handle user input such as mouse clicks, mouse moves, and touch screen gestures.
  • Detect when a user has clicked somewhere on the screen that intersects with a simulated 3D object, and then, accordingly, show a poster or impart a change to the character mesh.
  • Detect exactly where on the main character mesh the user's click had intersected.
  • Allow the user gated control over the bones of the character mesh.
  • Listen for certain arrangements of bones to trigger "Achievement unlocked" messages, and record somewhere which achievements had been attained.
  • Give the user control over the camera transformation, but restrict this control to gated rotation transformations only.
  • Be accompanied by an HTML user interface, with buttons and forms that allow users to purchase poster slots.
  • Preload sounds.

It was therefore necessary to move away from a single-file-app architecture which, in the context of my goals, would have resulted in a disorganised, incomprehensible big ball of code. Instead, I used a mixture of module patterns and prototypes to afford a reasonably robust, manageable architecture. These structural motifs are extremely common in the javascript world. This is how I chose to divide up the client side code:

A few things worth highlighting:

  • Most of the application objects are globally accessible. Up until relatively recently, this was a very common convention in javascript apps. Nowadays, the global scope tends to be protected from pollution a bit more with module loading systems such as AMD, requirejs or similar. Today, one popular implementation of the module loading paradigm exists in the form of Webpack. But of course, just because something is in vogue doesn't mean you should unconditionally use it in everything you do. In this particular case - a small to medium sized app with only about a dozen global identifiers - we have perfectly good code organisation, and module loading would not bring very substantial code-organisation or collision-protection benefits.
  • A mediator object permits loose coupling among modules, and simplifies the relationship each module has with its siblings. In the case of this app, the mediator is mostly used to decouple web components, but it was also used to afford event-based code execution within the app, threeStash, character, and interior objects. (A minimal sketch follows this list.) Nicholas Zakas gives a great talk on javascript mediator pattern implementations.
  • The ui object is a singleton that takes care of the registering, initialisation, and management of custom web components, which make up the user interface.
  • A javascript object exists for each poster, and for each poster slot. Each of these poster and slot objects are stored within the posterManager object and slotManager object, respectively.
  • The threeStash object contains all the threeJS bits and pieces, including the renderer, the camera, the lights, and so on.
  • The character object represents the character mesh in the simulation, and contains methods that close the eyes, open the eyes, register that a bone has been grabbed, and things like that. The interior object represents the interior mesh, and contains methods to reload poster slot images.
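As promised in the mediator entry above, here is a minimal sketch of the kind of mediator and module pattern in play. The channel name and payload are illustrative rather than the app's actual ones.

    // A publish/subscribe mediator, built with the revealing module pattern.
    var mediator = (function () {
        var channels = {};

        function subscribe(channel, callback) {
            (channels[channel] = channels[channel] || []).push(callback);
        }

        function publish(channel, data) {
            (channels[channel] || []).forEach(function (callback) {
                callback(data);
            });
        }

        // Only the public API escapes the closure
        return { subscribe: subscribe, publish: publish };
    })();

    // e.g. another module can react to a grabbed bone without knowing
    // anything about the character object...
    mediator.subscribe('bone:grabbed', function (data) {
        console.log(data.boneName + ' was grabbed');
    });

    // ...while the character object simply announces the event
    mediator.publish('bone:grabbed', { boneName: 'nose' });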

The face-grabbing effect

To the casual face-smoosher, it would seem that the app does a pretty simple thing: I click on the face and I can smoosh it. But in actuality, there's quite a lot going on here.

The key entities that work together to create this effect are:

  • The main character mesh (the pretty mesh that you can see). No surprises here.
  • The bones that control the vertices of the main character mesh.
  • A low-poly hit-detection mesh.

That last item is the one that you perhaps were not expecting. The reason for the inclusion of this will take a little bit of explaining.

ThreeJS comes bundled with utility methods that let you work out which 3D faces within the simulation meshes were effectively 'clicked on' when a mouse down or touch start event happens. It does this by computing how a 2D mouse click event intersects with mesh faces, having used some rather math-heavy algorithms to take into account the position and rotation of the meshes involved, and the position and rotation of the camera. Threejs lets you know which faces were clicked on by returning face IDs to you. These are simple, arbitrary integer values. Each and every face in a mesh has one.

With this in mind, I came up with a plan for a code execution flow. Having queried the mesh for all its faceIDs, I would manually author a data structure that mapped bones to faces, so that when a user effectively clicked on a face, I could use the face ID to retrieve the corresponding bone name, then the corresponding bone, and then mark that bone as the grabbed one, so that its position could then be adjusted when the mouse is moved.

Now, it would have been a bad idea to use the main character mesh for this traversal, as it contains 10,635 faces. This would mean typing out 10,635 numbers into a javascript object. Therefore, I created a surrogate, simplified version of the mesh. This would occupy the same space as the main mesh, but be invisible. Upon receiving a user click gesture, I would use the faces of this mesh to work out which bone from the main mesh should be selected. I manually decimated the main mesh to create a simplified version that had only 759 faces. As well as permitting a more terse data structure, using a simplified hit detection mesh also brings a code performance benefit. Indeed, it is conventional in games development to use simplified versions of visible meshes for the purposes of hit-detection, for this reason.

For illustrative purposes, here is the bone-to-face data structure. It's just a basic javascript map object with associative keys. (Please note that I've only listed 4 bones here, and up to 3 faceIDs per bone, for the sake of brevity.)
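The shape of that map is roughly as follows; the bone names and face ID values shown here are illustrative stand-ins rather than the app's real ones.

    // Maps each grabbable bone to the faces of the hit-detection mesh
    // that should select it. (Illustrative names and IDs.)
    var boneFaceMap = {
        'nose':      [102, 103, 118],
        'cheekLeft': [240, 241, 255],
        'lipTop':    [310, 311],
        'browRight': [87, 88, 92]
    };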

Above, from left to right: the main mesh as it appears when the app is running normally; the main mesh with the simplified mesh also visibly rendered; and lastly, only the simplified mesh.

The code below shows, in more detail, how some of the most relevant parts of the app work together to produce the effect. Note that most of the nitty-gritty code has been removed, in order to more clearly convey the overall concept.
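In outline, the flow looks something like this. threeStash and character are the app objects described earlier; hitMesh, boneFaceMap, faceToBoneName and the movement scaling are illustrative assumptions rather than the app's actual code.

    var raycaster = new THREE.Raycaster();
    var pointer = new THREE.Vector2();
    var grabbedBone = null;

    // Reverse lookup: which bone "owns" a given face of the hit-detection mesh?
    function faceToBoneName(faceIndex) {
        for (var boneName in boneFaceMap) {
            if (boneFaceMap[boneName].indexOf(faceIndex) !== -1) {
                return boneName;
            }
        }
        return null;
    }

    function onPointerDown(event) {
        // Convert the click position to normalised device coordinates (-1 to +1)
        pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
        pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

        // Cast a ray from the camera through the click, against the invisible hit mesh
        raycaster.setFromCamera(pointer, threeStash.camera);
        var hits = raycaster.intersectObject(threeStash.hitMesh);

        if (hits.length > 0) {
            var boneName = faceToBoneName(hits[0].faceIndex);
            if (boneName) {
                grabbedBone = character.mesh.skeleton.getBoneByName(boneName);
            }
        }
    }

    function onPointerMove(event) {
        if (grabbedBone) {
            // Nudge the bone; the skinned mesh deforms to follow it
            grabbedBone.position.x += event.movementX * 0.01;
            grabbedBone.position.y -= event.movementY * 0.01;
        }
    }

    function onPointerUp() {
        // In the real app, a tween springs the bone back unless freeze mode is on
        grabbedBone = null;
    }

    document.addEventListener('mousedown', onPointerDown);
    document.addEventListener('mousemove', onPointerMove);
    document.addEventListener('mouseup', onPointerUp);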

The config file

Most apps benefit from a well-segregated collection of configuration parameters. The configuration file for the app is populated with the code shown below.
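Something along these lines. Only the debugCamera flag is referenced directly in this article; the other keys are illustrative assumptions about the kinds of values that live in such a file.

    var config = {
        // When true, the orbit camera's position and rotation limits are removed
        debugCamera: false,

        // Illustrative examples of other parameters a file like this might hold
        posterSlotCount: 32,
        freeUploadsPerSlotPerDay: 1,
        cameraRotationLimit: Math.PI / 3
    };

With threejs's OrbitControls, for example, the flag might gate the camera restrictions like so (the specific limit values below are illustrative):

    var controls = new THREE.OrbitControls(threeStash.camera, renderer.domElement);

    if (!config.debugCamera) {
        // Lock the camera to gated rotation only: no panning, no zooming,
        // and a restricted range of angles around the figure
        controls.enablePan = false;
        controls.minDistance = 5;
        controls.maxDistance = 5;
        controls.minPolarAngle = Math.PI / 3;
        controls.maxPolarAngle = Math.PI / 2;
        controls.minAzimuthAngle = -Math.PI / 3;
        controls.maxAzimuthAngle = Math.PI / 3;
    }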

When set to true, the debugCamera flag in the configuration file permits the user full control over the orbit controls camera, courtesy of the threejs implementation. This is what allows us to take screenshots such as those shown below.

Web components

As this article is starting to become a bit long, we will wrap up with a brief nod to web components.

The usage of web components can be loosely thought of as a type of design pattern that allows the developer to effectively author and manage custom graphical user interface elements - particularly in terms of their behaviours, and not so much their appearance.

A basic web components implementation should have three features:

  • A common HTML template for each component.
  • A javascript class, constructor or prototype that correlates to each HTML template.
  • A system that connects the javascript to the HTML. This act of connecting the two is often referred to as "Mounting the component".

Most, but not all, web component systems will also feature some mechanism that allows the web components to talk to each other. Among the most common are the mediator pattern, and flux and redux architectures.

Most modern javascript user interface libraries are based around the concept of web components - Angular, React and Vue, among the most common, are no exceptions.

However, it is possible to create a custom web component system with only 50 lines of code. I decided to go this route for this application, as the UI would be quite simple, and I wouldn't require any advanced features. My recent interest in experimenting with my own miniature, native javascript toolsets also compelled me to roll my own. It's quite rudimentary. There's no way of "unmounting" a component once it's been mounted. But still, this is not a requirement for this particular app, so, no harm, no foul.

First, we have a base class, from which all UiComponents will inherit. There is not a lot of functionality here, but it's good to have a place set up for shared functionality to reside, should the need to build it in arise. In establishing a base-class inheritance pattern for our web components, we can also more easily infer whether any given variable can be considered a UiComponent.
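A base class along these lines would do the job; the exact member names here are illustrative rather than taken from the app's source.

    // Base "class" for all UI components, using a constructor function and prototype
    function UiComponent(rootNode) {
        this.rootNode = rootNode;   // the HTML element the component is mounted on
    }

    UiComponent.prototype.mount = function () {
        // Shared functionality can be added here should the need arise
    };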

And below, we have an example of a subclassing component, for a concrete component type. In this case, it's the javascript that defines the functionality of the "Achievements" modal HTML. I picked this as an example because it's quite simple. It merely houses functionality for adding a class to HTML achievement nodes. It does this when it receives a notification of the correct type from the global mediator object.
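A sketch of what such a subclass might look like, given the base class and mediator shown earlier. The channel name, data attribute and class name are illustrative assumptions.

    function AchievementsModal(rootNode) {
        UiComponent.call(this, rootNode);

        var self = this;

        // When the simulation reports an unlocked achievement, mark its node
        mediator.subscribe('achievement:unlocked', function (data) {
            var node = self.rootNode.querySelector('[data-achievement="' + data.name + '"]');
            if (node) {
                node.classList.add('is-unlocked');
            }
        });
    }

    AchievementsModal.prototype = Object.create(UiComponent.prototype);
    AchievementsModal.prototype.constructor = AchievementsModal;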

And finally, here we have the humbly-named ui object, which is responsible for accepting the registration of each component, and then mounting and initialising all of these components at the right time. A blanket component initialisation happens in response to a call from the app object, once everything has finished loading, and the app is starting to boot up. Note that the initComponents method is returned as a member of the module API, and that it takes a rootNode argument. This allows components within a specified DOM scope to be initialised at any time, should new ones be appended during any part of the app's lifecycle.
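In outline, the ui module could look something like this. The data-component attribute convention and the registry shape are illustrative; only the registration, mounting and initComponents(rootNode) behaviour described above are drawn from the article.

    var ui = (function () {
        var registry = {};   // maps a component name to its constructor

        function registerComponent(name, constructor) {
            registry[name] = constructor;
        }

        function initComponents(rootNode) {
            // Mount every registered component found within the given DOM scope
            rootNode.querySelectorAll('[data-component]').forEach(function (node) {
                var Constructor = registry[node.getAttribute('data-component')];
                if (Constructor) {
                    new Constructor(node);
                }
            });
        }

        return { registerComponent: registerComponent, initComponents: initComponents };
    })();

    // Usage: register components once, then mount everything under a given root
    ui.registerComponent('achievements-modal', AchievementsModal);
    ui.initComponents(document.body);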

That's it

And here ends our lightning-fast, whistle-stop tour of the app. If you have questions regarding any particular aspect of it, post a comment and I will fill you in. In the meantime, why not put something up on the bulletin board?

Visit the app online

Notes

1 Collada-formatted meshes do not embed images, but, as I had hoped, instead reference them with a path that is relative to the collada file itself. Part of the gulp build task I wrote adjusts these paths with no ill effect, which allows me to store the dynamically-generated posters in a location distinct from the location where all the other static texture files for the interior mesh are kept.
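For illustration, a path rewrite of that sort can be done with a gulp task along these lines, assuming the gulp-replace plugin; the file paths shown are hypothetical.

    var gulp = require('gulp');
    var replace = require('gulp-replace');

    // Rewrite the texture paths referenced inside the collada file so that the
    // poster textures are loaded from the dynamic storage location instead
    gulp.task('rewrite-dae-paths', function () {
        return gulp.src('src/models/interior.dae')
            .pipe(replace('./textures/posters/', '/storage/posters/'))
            .pipe(gulp.dest('public/models/'));
    });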

2 As a library that tends to deal with assets that are large in terms of filesize, threejs doesn't make efforts to prevent the browser from caching textures. However, the classic appended-query-string cache-busting trick can be implemented with a little bit of tweaking to the threejs image loader.
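Even without touching the loader itself, the same idea can be sketched at the call site by appending a throwaway query string to the texture URL (the URL and material name here are hypothetical):

    var loader = new THREE.TextureLoader();

    // The timestamp query string makes the URL unique, so the browser
    // fetches the freshly uploaded poster instead of a cached copy
    var url = '/storage/posters/slot-12.png?v=' + Date.now();

    loader.load(url, function (texture) {
        posterMaterial.map = texture;
        posterMaterial.needsUpdate = true;
    });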

Source: https://github.com/mrdoob/three.js/issues/7207

Credit to mrdoob for the hack. (And the library!)
