An unboring.net case study

Case Study: share/two/talk

Creating a multiuser WebRTC WebVR experiment using glTF2.0 for animated avatars, with @takahiro

07 June 2017

The idea: "How to create the simplest way to chat with avatars on WebVR"

WebVR development has a lot of constraints, but we need to take advantage of open web APIs like WebRTC and powerful specifications like glTF2.0 to create experiences like the ones native VR offers us.

This was the basis of my idea: to create the simplest way to start a chat on WebVR, using a progressive enhancement design strategy, as in my previous works, to make the project accessible to as many users as possible.

The mechanic is very simple: enter talk.unboring.net, copy the link with your roomID, share the URL, and when your friend connects you can start to talk, thanks to the WebRTC protocol.
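As an illustration (a hypothetical sketch of mine, not the project's actual code), the whole room-link mechanic can be as small as a random ID kept in the URL fragment:

```typescript
// Hypothetical sketch of the copy/share mechanic: keep a random roomID in
// the URL fragment, so anyone opening the shared link joins the same room.
const roomId: string = location.hash
  ? location.hash.slice(1)                  // joining a shared link
  : Math.random().toString(36).slice(2, 9); // creating a new room
location.hash = roomId;
// location.href is now the shareable URL for this room.
```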

First sketches made with Blender
Just copy / share / talk at talk.unboring.net

As this project uses the glTF2.0 format for its 3D content and I needed a real-time network library for Three.js to make it possible, I contacted Takahiro, who was working on open source projects related to both technologies. He helped me use his libraries and added some features needed to make this project possible.

The video below is a real example of a talk between Japan and Spain on talk.unboring.net. I was connected with a Google Pixel and a Daydream headset/controller, and Takahiro was connected on his Windows-based laptop:

Capture from a Google Pixel device during a talk between Spain (@arturitu) & Japan (@takahiro)

I encourage you to try it now, because you can access it from a 2D screen (laptop, mobile, tablet), with Cardboard, and with Daydream:

ENJOY talk.unboring.net and just copy, share and start to talk

Researching customized WebVR avatars

This project started with a failed attempt to port an old demo from mid-2016 that I had made with customized avatars that have bones and morph animations. That workflow no longer works. I was frustrated and tweeted about it, and thanks to that tweet I got in contact with Don McCurdy from Google (@donrmccurdy) and Gary Hsu from Microsoft (@bghgary), who helped me find a workflow to export morph animations from Blender to Three.js, explained below.

Original avatars researched with bones and morph animations

ENTER to test these customized avatars in the OLD animated demo (bones & morph)

In this old project you can change the appearance of the avatars by changing their texture and some of their elements, like hair or glasses; you can animate them with morphing to breathe, blink, or change the size of the arms, abdomen, or legs; and by animating bones they can walk or run.

But in addition to the technical limitations, after watching the latest videos about creating avatars for VR, from the Facebook F8 conference ('Making Facebook Spaces', highly recommended) and from Google I/O, where they showed their avatars for YouTube VR, I knew that the best option was to simplify as much as possible.

The simplest, but (I hope) fun, option was to create an emoji. I researched the best way to model a face so that multiple animations could give it expressivity, and I learned a lot about face loops from videos like this one.

In the end, I needed 12 Shape Keys in Blender to make it possible to blink, talk, look around, and show the eight basic expressions.

All the morphing animations needed to animate our character

Technical challenges

View the source of the project and glTF2.0 models on GitHub

Using a glTF2.0 workflow from Blender to Three.js

As I said before, thanks to a tweet and the help of @donrmccurdy and @bghgary, I got a workflow using Blender and Unity to import glTF2.0 animated models into a Three.js WebVR project.

glTF2.0 workflow from Blender through Unity to Three.js

  • Blender: Create the model and add Shape Keys (morph animations). Save the .blend file into the Assets folder of an empty Unity project.
  • Unity: Import glTF Tools for Unity and use it to export the model.
  • Three.js: Use GLTF2Loader, and the imported mesh will have all its morphTargetInfluences accessible (see the sketch below).
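A minimal sketch of that last step, assuming Three.js r86 with the GLTF2Loader example script included (the model path and the morph-target index are placeholders of mine):

```typescript
// Load a glTF2.0 avatar and drive one Shape Key via morphTargetInfluences.
declare const THREE: any; // Three.js r86, loaded globally
declare const scene: any; // the app's THREE.Scene

const loader = new THREE.GLTF2Loader();
loader.load('models/avatar.gltf', (gltf: any) => {
  scene.add(gltf.scene);
  gltf.scene.traverse((node: any) => {
    if (node.isMesh && node.morphTargetInfluences) {
      // Each Blender Shape Key becomes one 0..1 weight in this array,
      // which can be tweened per frame for blink/talk/expressions.
      node.morphTargetInfluences[0] = 1.0; // e.g. index 0 = "blink"
    }
  });
});
```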

This was a provisional solution while waiting for glTF2.0 to be completed, which finally happened two days ago, so I suppose we will soon have a working exporter. On the importer side we are lucky, because people like @donrmccurdy and @takahiro have been working on the glTF2.0 importer for Three.js, and it is now almost complete and working with the r86 dev version of Three.js.

WebRTC with Firebase network

All this part was possible thanks to @takahiro's awesome project ThreeNetwork, a network sync library for Three.js that supports PeerJS, EasyRTC, and Firebase. In our experiment we are using Google Firebase and WebRTC to stream audio between the two connected users.
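ThreeNetwork hides the signaling details, but the audio path underneath is plain WebRTC. This is a minimal sketch of that mechanism, not ThreeNetwork's actual API; the Firebase offer/answer exchange is omitted:

```typescript
// Capture the microphone and stream it to the remote peer over WebRTC.
const peer = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Play the remote user's audio as soon as their track arrives.
peer.ontrack = (event: RTCTrackEvent) => {
  const audio = new Audio();
  audio.srcObject = event.streams[0];
  audio.play();
};

// getUserMedia triggers the microphone permission prompt on both sides.
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  stream.getAudioTracks().forEach((track) => peer.addTrack(track, stream));
});
```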

Adding a Daydream mesh and left-handed support with the Ray Input library

Another challenge was to implement left-handed support in the Ray Input library, which is very useful as an input abstraction for interacting with 3D VR content in the browser. I also added a mesh for the Daydream controller; for this I used the mesh and textures provided in the Google Daydream Elements Unity project and ported them to glTF2.0 to use here.
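For the left-handed part, this is just a hypothetical sketch of the idea (controllerMesh and the offset value are placeholders of mine): the WebVR Gamepad extensions expose gamepad.hand, so the controller's resting pose can be mirrored across the X axis:

```typescript
declare const controllerMesh: any; // THREE.Object3D holding the Daydream mesh

// Mirror the controller's resting offset for left-handed users.
function positionController(gamepad: Gamepad & { hand?: string }): void {
  const restingOffsetX = 0.3; // meters to the right of the camera
  controllerMesh.position.x =
    gamepad.hand === 'left' ? -restingOffsetX : restingOffsetX;
}
```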

Future challenges

This started as a side project, but I think it has a lot of possibilities. These are some that come to mind now:

  • New emojis/expressions

    Add multiple meshes for different emojis and create fun expressions like 'troll' or 'lol', playing with GPU particles to improve the fun effect.

  • Support more devices

    HTC Vive or Oculus Rift with two hands, and Gear VR devices with the Oculus Browser (we will only need access to the microphone, hopefully soon). And on Safari for Mac and iOS 11 it is available now with the Technology Preview app.

  • Keyboard to talk

    If one of the users doesn't want to activate the microphone, SpeechSynthesis gives us the possibility of transforming the received text into audio. Or, if neither has an active microphone, they can receive the text as captioning (see the sketch after this list).

  • Speech to captioning

    Somewhat similar to the previous idea: if you send audio and the receiver hasn't activated sound, with SpeechRecognition we could transform that audio into captions in almost real time (see the sketch after this list).

  • Play mini games

    If both users have at least one controller, they could play simple multiuser games, even with physics.

  • Shared objects

    Another possibility is to play with shared objects whose position can be modified by both users.
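For the two speech ideas above, this is a sketch of the Web Speech API calls they would rely on (browser support varies; SpeechRecognition ships prefixed as webkitSpeechRecognition in Chrome, and showCaption is a placeholder of mine):

```typescript
declare const webkitSpeechRecognition: any;       // prefixed in Chrome
declare function showCaption(text: string): void; // placeholder UI hook

// Keyboard to talk: turn a typed message into audio for the remote user.
function speakText(text: string): void {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Speech to captioning: transcribe the microphone in near real time.
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;     // keep listening between phrases
recognition.interimResults = true; // emit partial results as live captions
recognition.onresult = (event: any) => {
  const latest = event.results[event.results.length - 1];
  showCaption(latest[0].transcript);
};
recognition.start();
```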

A case study by

Arturo Paracuellos
Creative technologist & 100% of unboring.net
Email | Twitter | Github | LinkedIn

"I am open to collaborating in new projects and to work remotely with companies on real time render projects, especially if they are focused on AR/VR.