The Approaching WebGL Arms Race


The biggest news coming out of the Game Developers Conference (GDC) in San Francisco might be the next-gen Oculus Rift dev kit or Sony's new “Project Morpheus.” However, the true sleeper surprise is currently nothing more than a footnote in the announcement of the Unity 5 engine: a new partnership between Mozilla and Unity to create a plugin-free browser experience that uses Unity as the content controller. Nor is this the only partnership between Mozilla and a game engine; Unreal Engine 4 has also been ported to the browser.


The technology behind this is called asm.js, a strict, highly optimizable subset of JavaScript designed to serve as a compile target for languages like C and C++. Combined with typed arrays, it gives the browser an efficient way to handle low-level data types, effectively offering a browser-based “LLVM”-style target. According to Steven Wittens’ blog, the fundamental of any great browser-driven accelerated graphics service is to have the simplest possible code handle the most data. It’s a simple concept: less code to compile means more resources available for the data. Applied to WebGL, this means you don’t create everything in JavaScript; you link JavaScript to a deeper language with remote calls, using code generators.
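To give a flavor of what asm.js looks like: the "use asm" pragma plus bitwise coercions tell the engine that every value is a 32-bit integer, so the module can be compiled ahead of time. This hand-written toy is for illustration only (real asm.js is normally generated by a compiler such as Emscripten), and it still runs as ordinary JavaScript in any engine:

```javascript
// A minimal asm.js-style module. The "use asm" pragma marks the body
// as the asm.js subset; engines that recognize it can compile it to
// near-native code, and everything else just runs it as plain JS.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // coerce argument to a 32-bit integer
    b = b | 0;
    return (a + b) | 0; // result wraps like a C int32
  }
  return { add: add };
}

var adder = AsmAdder();
// adder.add(2, 3) → 5
```

The `| 0` annotations are the whole trick: they let the compiler prove the types statically, which is why this subset can skip most of a JS VM's dynamic machinery.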

Per Atwood’s law, it was inevitable that someone decided the back-end should be JavaScript. Thus was born emscripten, turning C into JS—or indeed, anything native into JS. Because the output is tailored to how JS VMs work, this already gets you pretty far. The trick is that native code manages its own memory, creating a stack and heap. Hence you can output JS that just manipulates pre-allocated typed arrays as much as possible, and minimizes use of the garbage collector. -Steven Wittens
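Wittens’ point about pre-allocated typed arrays can be sketched in plain JavaScript. This toy “heap” is a simplified stand-in for what Emscripten actually emits (the names here are made up), but it shows why the garbage collector barely gets involved:

```javascript
// Emscripten-style memory management in miniature: one big typed
// array plays the role of the C heap/stack, and "allocation" is just
// bumping a pointer. No JS objects are created per allocation, so
// the garbage collector has almost nothing to do.
var HEAP32 = new Int32Array(1024); // pre-allocated once, up front
var stackPointer = 0;

function stackAlloc(nInts) {
  var ptr = stackPointer;   // hand out the next free slot
  stackPointer += nInts;
  return ptr;               // a "pointer" is just an index
}

function sumArray(ptr, n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += HEAP32[ptr + i];
  return total;
}

var p = stackAlloc(3);
HEAP32[p] = 1; HEAP32[p + 1] = 2; HEAP32[p + 2] = 3;
// sumArray(p, 3) → 6
```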

People often miss the implications of stronger browser integration for AAA-level browser content. However, when put in the context of next-gen immersive technologies, you need to consider what types of interactions users will want as they browse the web. Will a static 2D page remain the standard in a future where augmented- and virtual-reality devices permeate the interaction space? I suspect that 2D browsing won’t vanish, but that 3D web experiences will feel more natural to people from an HCI perspective, at least for applications where word processing isn’t vitally important.

Firefox VR offers one example of what 3D web browsers might look like. Eventually content will have device detection, so that a site can recognize when you visit with a VR device and respond accordingly. Sites will create scenes that are believable, easy to navigate, and integrated with common 2D content formats. There are a few groups actively preparing for a WebGL-driven internet. The most pronounced might be MontageJS, an open source repository maintained by the larger Montage Studio company, which will provide the tool-chain and authoring system for their open interactive site experiences.

Eventually I would like to migrate my site to a host that supports NodeJS instead of Apache+Wordpress. That way I can start demonstrating the interactive web on my site itself. For now though, check out Montage; it allows for functional reactive binding between interactive JavaScript and HTML5 DOM elements. It’s powerful stuff that makes both 2D/UI and 3D/scene pieces work as reusable code.
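To give a flavor of what such a binding does, here is a minimal, hypothetical one-way binding in plain JavaScript. The names and API are invented for illustration and are not Montage’s actual interface; the idea is just that writes to a model property automatically re-run a function that updates the DOM:

```javascript
// Hypothetical one-way reactive binding: whenever source[sourceKey]
// is written, the apply() callback re-runs and syncs the view.
function bind(source, sourceKey, apply) {
  var value = source[sourceKey];
  apply(value); // initial sync
  Object.defineProperty(source, sourceKey, {
    get: function () { return value; },
    set: function (next) { // every write triggers the binding
      value = next;
      apply(next);
    }
  });
}

var model = { score: 0 };
var fakeDomLabel = { textContent: "" }; // stand-in for a DOM element

bind(model, "score", function (v) {
  fakeDomLabel.textContent = "Score: " + v;
});

model.score = 42;
// fakeDomLabel.textContent → "Score: 42"
```

The appeal for reusable 2D/UI and 3D/scene parts is that the same mechanism can drive a DOM label or a WebGL object’s transform; the binding neither knows nor cares.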

Creating a heavy-client (local content) WebGL system might sound counter-intuitive in this apex age of the “cloud.” However, utilizing local resources rather than a streaming service such as OnLive or Nvidia GRID is actually starting to make sense. We live in a world where Moore’s law continues for chip design and parallelism on the GPU, but non-commercial bandwidth provided by ISPs has remained stagnant for the last five years. Recently though, monolithic Comcast has annexed Time Warner, and Verizon has effectively killed net neutrality.

These big self-serving monopolies no longer need to innovate on residential speeds; they can instead focus their resources on the important business task of killing off all the content competitors who rely on their services (Netflix, GoogleTV, Amazon Prime, P2P, maybe all of WebRTC). We are headed into a “cloud-service” dark age. Low-bandwidth content won’t suffer, but video and web gaming are being forced into an arena where they either pay up or don’t work. This might lift a decade from now if Google Fiber or gigabit WiMAX/LTE appears at a reasonable price… but otherwise, we should settle in, because winter is coming.


So why even go for browser 3D in the first place? Browser-driven software is OS-agnostic, and W3C-compliant browsers will all eventually share the same core capabilities. Browser 3D isn’t anything truly new, but web-stack agnosticism through HTML5 and JavaScript has only been around for a few years, which means plugin-free WebGL has only recently been able to surface. Before that, Adobe Flash was the single proprietary interactive platform of choice for over a decade, but the mobile space shattered that dream completely. Lack of support on mobile operating systems and less powerful smartphone hardware meant that alternatives needed to be explored. Whereas we might have expected a competitor to jump in and fill this void, by some miraculous means the torch was picked up by the open source community.

Google Chrome Experiments and Firefox helped get some of these initiatives started, but truly, the advent of Git and Mercurial, and the generosity of superbrains like “Mr.doob,” helped shape the popular ThreeJS engine. However, despite the “awesome factor” of these exciting new technologies, none has yet emerged with the full capabilities of a modern game engine. Unity had its plugin-based web player, but it wasn’t the same render environment as the primary engine itself. This new iteration of Unity looks to be essentially the full engine, perhaps with a lower poly count.

Here’s the most popular of the WebGL experiments to date:

  • ThreeJS: The favorite of most WebGL devs everywhere. It’s free, open source, well documented, and allows for low-level shader integration. The /r/Simulate team used it for our WebHexPlanet app. Everyone loves ThreeJS! A COLLADA-to-JSON converter exists for loading models, but animations are still challenging.
  • BabylonJS: Originally created as a Microsoft CodePlex project, but eventually released under the Apache 2.0 license. It handles very similarly to ThreeJS in terms of scene library calls and animation. It hasn’t been around as long as Three, though, so it has fewer extensions at the moment.
  • Goo Engine: Proprietary software, but has a lot of animation-focused libraries. The idea is that Goo would like to be interaction-focused instead of scene-focused. I imagine only time will tell.
  • SceneJS: The SceneJS implementation includes a scene graph engine that uses JSON to create and manipulate nodes in the graph. This is similar to the architecture Aaron designed for MetaSim, which worked on top of ThreeJS.
  • Virtual World Framework: The VWF was founded originally with DoD money, but is now open under the Apache license. VWF utilizes NodeJS with WebSockets for a messaging layer. There is also an impressive Virtual Sandbox that includes authoring tools and instance storage. This project is very powerful and probably the most overlooked relative to its capabilities.
  • More are listed at
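To illustrate the JSON scene-graph idea behind SceneJS (and the MetaSim architecture mentioned above), here is a hypothetical miniature graph walker over plain JSON nodes. It is not the real SceneJS API, just a sketch of the pattern: the scene is inert data, and the engine recursively walks it, accumulating transforms as it goes:

```javascript
// A scene described as plain JSON: transform nodes wrap child nodes,
// leaf nodes are renderable objects. (Illustrative only.)
var scene = {
  type: "translate", x: 10,
  nodes: [
    { type: "translate", x: 5, nodes: [{ type: "sphere", name: "moon" }] },
    { type: "sphere", name: "planet" }
  ]
};

// Walk the graph, accumulating the x-offset down each branch and
// collecting the world position of every renderable leaf.
function collectPositions(node, offsetX, out) {
  var x = offsetX + (node.x || 0);
  if (node.type === "sphere") out.push({ name: node.name, x: x });
  (node.nodes || []).forEach(function (child) {
    collectPositions(child, x, out);
  });
  return out;
}

var positions = collectPositions(scene, 0, []);
// positions → [{ name: "moon", x: 15 }, { name: "planet", x: 10 }]
```

Because the scene is just data, it can be serialized, diffed, sent over a WebSocket, or edited by authoring tools without touching engine code, which is exactly the appeal of the JSON approach.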

Again, Unity 5 for the web and Unreal 4 are not the same kind of WebGL (though strictly speaking, it is still WebGL). Instead, they use asm.js-compiled engines in which all of the scene scripting is done in languages other than JavaScript. This improves performance, but makes it more challenging to develop user-specific, web-delivered content. Unity 5 won’t be defining HTML DOM elements the way MontageJS does. JavaScript-driven WebGL (maybe call it Web-JSGL) will offer easily reusable web parts that can be edited directly in HTML+JS, versus the LLVM approach (call it Web-LLGL?). Maybe monikers already exist for these two styles of WebGL, but I think it is important that the similarities and differences be noted. The “LLGL” approach absolutely will be better at defining very complex scenes and scenarios, something “JSGL” will not be able to keep up with. This might not always be true (WebCL could change the circumstances), but for the near future, these similar but distinct variations of WebGL will come to fill very different use cases.

However, this has all been the how and the what; I need to elaborate on the why. WebGL is important because of the next-generation web discussed earlier. Augmented- and virtual-reality hardware is starting to proliferate among consumer devices, as in the newest clash between Sony and Oculus. Whereas the decade of the aughts was focused on the hardware wars, this decade will begin to focus on the peripheral wars. Visual immersion (Oculus), tactile sensing (touch screens), and full-body motion (Kinect) have already become part of the entertainment experience. These technologies are only going to improve as new types of devices appear every 6-18 months. The mobile market is sluggishly toying with Google Glass, but once contact-lens AR is fully commercialized, it will be difficult for the public to resist the utility of full AR immersion.

DARPA sponsored project by iOptiks

Which leads us back to WebGL. Once we have undergone what Kurzweil describes as the transition from mobility to ubiquity, the web will not be something that just exists on a pocket device; it will be everywhere. Our world is 3D, so we will need fast-deploy web standards that operate in 3D space. Building this infrastructure on the existing technology of the web will mean that augmented locations can be visited as easily as a web page is today. It will feel more intuitive than reading a four-inch screen, and may very well become the most common method of human interaction. Certainly screen resolution has been trending upward much faster recently than it ever has before.


Imagine Skype or FaceTime on steroids: cameras and LiDAR pick up the room around you, generate a 3D model of your friends, and then display them as if they were there, with interpolation for smooth animation. All the current signs indicate that this will evolve from web standards, not from some other universally compiled set of rendering standards. It will probably even utilize HTTP for asset streaming and latency-agnostic communications.

If the internet fully invades reality, it will need WebGL. A lot of people are excited by JavaScript-driven WebGL, but the number of participants still pales in comparison to the crowd participating in the Unity tool-chain. These recent developments may either close that gap or obsolete it entirely. Let’s hope for the best!
