Sunday, May 15, 2016

HoloTouring - Upcoming Talks about HoloLens Development

Over the next few weeks I will not only be busy working on a bunch of HoloLens projects but also sharing some of my knowledge about HoloLens development.
I will be speaking at 3 European conferences between the end of May and early June, including Unity's own Unite conference.
My talk will cover how to develop Mixed Reality apps for the HoloLens using Unity. I will give an overview of VR, AR/MR, Unity and HoloLens, and of course share best practices from all the months of HoloLens development. Half of the talk will be dedicated to demo time showing how to get started with HoloLens Unity development. We will build a little app that implements the foundation of every HoloLens experience: the GGV input paradigm (Gaze, Gesture, Voice), Spatial Sound and Spatial Mapping, all combined in a fun physics-based demo app.

May 26 - 27, 2016
Stockholm, Sweden
My session is on May 26th, 17:20 in Room C

Unite Europe
May 30 - June 2, 2016
Amsterdam, Netherlands
My session is on June 1st, 11:30 in the Keynote Track

NDC Oslo
June 6 - 10, 2016
Oslo, Norway
My session is on Thursday June 9th, 11:40 in Room 4

Here's a sneak peek of the final demo app we are building in the session.

Now who's going to clean up all that #HolographicLitter in my house? Well, bloom gesture FTW!

Friday, April 1, 2016

HoloFlight - Content for the HoloLens Build 2016 Presentation

This year I had the chance to give a presentation at Microsoft's largest developer conference, /build 2016. It was actually my 3rd year in a row, bringing me to 5 sessions delivered at /build. This latest one was likely the best so far.

In the presentation we showed a HoloLens app I have been working on for the last couple of months at IdentityMine. We talked about the challenges we faced, how we solved them, and shared some further best practices with the audience.

The slide deck can be viewed and downloaded here:

A live recording of the presentation is available here.
A studio recording with better audio will be made available via the HoloLens Academy soon. The link will be posted once it's live.

Top 10 HoloLens Development Recommendations

Once you have the HoloLens SDK installed, watched some tutorials and gotten started, you might be interested in some best practices and recommendations. IdentityMine has been among a handful of agencies able to develop for HoloLens for a while now. We worked on several projects, learned a variety of things and of course ran into some pitfalls.

In this blog post I want to share my Top 10 HoloLens dev recommendations, which crystallized during the time I have spent developing for HoloLens. Most of these best practices were also shared during our presentation at /build. The recommendations are mostly generic and also apply to the Direct3D development route, but some are more specific to Unity-based development. At IdentityMine we leverage the great productivity of Unity 3D but also have our own DirectX-based engine running on HoloLens.

  1. Find a great 3D artist and designer

    You can be a great developer, but the best dev skills will not help you make a nice design or great 3D models. You can buy 3D models online at various sites like the Unity Asset Store or TurboSquid. Usually those models are not optimized for real-time rendering, have too high a polygon count or just do not fit your needs. Having a great 3D artist at hand who can produce outstanding 3D models with a reasonable polygon count and who has strong UV mapping and texturing skills is very important; otherwise there is just nothing to render!

  2. Brush up on your 3D math skills

    If you want to build 3D apps or games you need a basic understanding of certain 3D principles, like what a matrix transformation is and how transformations affect each other when combined. These are usually covered in linear algebra classes. Unity 3D and other 3D middleware already implement the required functions, so you do not need to write your own vector cross product, quaternion methods, etc. However, you still need to know how to apply them.
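    As a small illustration of how combined transformations affect each other, here is a minimal Unity sketch (class and variable names are my own) using only the standard `Matrix4x4` and `Quaternion` APIs. The key takeaway is that multiplication order matters:

```csharp
using UnityEngine;

public class TransformComposition : MonoBehaviour
{
    void Start()
    {
        // Build a TRS (translate-rotate-scale) matrix for a parent object.
        Matrix4x4 parent = Matrix4x4.TRS(
            new Vector3(0f, 1f, 2f),          // position
            Quaternion.Euler(0f, 90f, 0f),    // rotation
            Vector3.one);                     // scale

        // A child offset expressed in the parent's local space.
        Matrix4x4 child = Matrix4x4.TRS(
            new Vector3(0.5f, 0f, 0f), Quaternion.identity, Vector3.one);

        // Order matters: parent * child applies the child transform first,
        // then the parent transform. Swapping the operands gives a different
        // result because matrix multiplication is not commutative.
        Matrix4x4 combined = parent * child;

        // Transform the child's local-space origin into world space.
        Vector3 worldPoint = combined.MultiplyPoint3x4(Vector3.zero);
        Debug.Log(worldPoint);
    }
}
```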

  3. HoloLens is a mobile device, so treat it like one

    HoloLens is fully untethered, which means all the computing happens inside the device: there are no cables and you can freely walk around. This is a huge advantage compared to other AR or VR devices requiring a PC for the actual computing. On the other hand, this also limits the computing power, as you cannot expect the power of a full desktop PC graphics card. Still, your app needs to run at 60 fps to ensure high hologram stability. Unstable holograms and rendering at 30 fps or less can make users feel sick, causing nausea and headaches, so maintaining 60 fps is crucial.

    The GPU of HoloLens is mostly fill-rate bound, which means it can easily render tens of thousands of triangles but if lots of pixels are drawn with complex processing for each, it can bring the GPU to its knees. Therefore, the advice is to:

    • Use simple pixel shaders and not, for example, the Unity Standard Shader. Some models even look reasonably good with vertex lighting and do not require per-pixel lighting, so you can get away with a very lightweight pixel shader.
      The HoloToolkit ships with some optimized Unity shaders you can use out-of-the-box or as a starting point for your own shader development.
    • Avoid screen space effects altogether. Do not bother with Screen Space Ambient Occlusion or similar effects like Depth of Field, outline shading, etc. They are just too expensive to render.

  4. Take advantage of the multicore CPU

    The CPU of HoloLens has multiple cores, so multi-threading and parallelization make sense to some extent. In our apps, we put heavy processing in a background thread so the UI thread is free and will not be blocked by longer running synchronous computing. Strong candidates for background processing are network requests, JSON parsing and heavier algorithmic processing in general.
    Note that the Unity Editor uses the Mono .NET runtime and not the UWP/WSA .NET Core runtime shipping with Windows 10 on HoloLens. Therefore, you will have to implement an #if UNITY_EDITOR code path using Mono's ThreadPool and an #else code path with Task.Run or similar.
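    A minimal sketch of such a dual code path could look like this (the `BackgroundWorker` helper is my own name for illustration):

```csharp
using System;
#if !UNITY_EDITOR
using System.Threading.Tasks;
#endif

public static class BackgroundWorker
{
    // Runs the given work off the main thread so the UI thread stays
    // responsive. In the Editor (Mono runtime) we fall back to the classic
    // ThreadPool; in the UWP/WSA build we can use Task.Run.
    public static void Run(Action work)
    {
#if UNITY_EDITOR
        System.Threading.ThreadPool.QueueUserWorkItem(_ => work());
#else
        Task.Run(work);
#endif
    }
}

// Usage, e.g. for JSON parsing or other heavy processing:
//   BackgroundWorker.Run(() => ParseLargeJsonFile());
```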

  5. Anchor your holograms in the real world and make them stable

    HoloLens is quite clever at maintaining the positions of your virtual objects (holograms) in the real world, but if you walk a bit further away or change the room, the objects start to drift. The HoloLens APIs provide a WorldAnchor which you can apply to your objects; the HoloLens runtime will then make sure WorldAnchor'd objects do not drift and will adjust their positions if needed.
    A best practice is to group virtual objects that belong together and place a WorldAnchor on this group once the group's position is set for the scene. The members of this hologram group should not be too far away from each other (about 3 m). WorldAnchors are even persisted, so the objects will be at the same position even if the app is closed or the device rebooted. Furthermore, WorldAnchors can be shared with multiple HoloLens devices for collaboration scenarios.
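    A sketch of this grouping pattern, assuming Unity 5.x where WorldAnchor lives in the UnityEngine.VR.WSA namespace (the `AnchorGroup` class is my own):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA; // Unity 5.x namespace for WorldAnchor

public class AnchorGroup : MonoBehaviour
{
    // Call this once the group of holograms (children of this GameObject)
    // has been placed in the scene.
    public void LockInPlace()
    {
        if (GetComponent<WorldAnchor>() == null)
        {
            // The runtime now keeps this object (and its children) fixed
            // relative to the real world and corrects for drift.
            gameObject.AddComponent<WorldAnchor>();
        }
    }

    // Remove the anchor before moving the group again, since an anchored
    // object's transform is controlled by the runtime.
    public void Unlock()
    {
        var anchor = GetComponent<WorldAnchor>();
        if (anchor != null)
        {
            DestroyImmediate(anchor);
        }
    }
}
```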

    Another important API for making holograms more stable is the Focus Point. You should set this point each frame to tell the HoloLens runtime which position in the world should be the most stable in your scene. In addition to a position, you can also provide a plane normal to define a stabilization plane. If you have multiple holograms, they should intersect with this stabilization plane. For moving objects, the linear velocity can be provided to the Focus Point API as well.
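    In Unity this boils down to calling HolographicSettings.SetFocusPointForFrame every frame. A minimal sketch (the component name and the choice of the gazed-at object as the focus target are my own assumptions):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA; // Unity 5.x namespace for HolographicSettings

public class FocusPointUpdater : MonoBehaviour
{
    public Transform focusedHologram; // e.g. the object currently gazed at

    void Update()
    {
        if (focusedHologram == null) { return; }

        // A stabilization plane normal pointing back at the user.
        Vector3 normal = -Camera.main.transform.forward;

        // Tell the runtime each frame which point should be most stable.
        HolographicSettings.SetFocusPointForFrame(
            focusedHologram.position, normal);
    }
}
```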

    There are many more things to consider if you want your experience to be rock solid stable. There is a great write-up in the HoloLens docs about hologram stability that every developer should read.

  6. Leverage Level of Detail for smaller objects

    When you render smaller objects at a certain distance, they will cover only a few pixels in the final rendering. Sub-pixel triangles are not only a waste of processing but are also hard for the user to see. One technique to solve this is Level of Detail (LOD), where different versions of a model are used at different distances from the camera (the user's head).

    This can also be applied to the collision detection used for gazing. Gazing means rotating the camera (head) and setting the focus with it. Knowing which object is being gazed at requires ray casting and collision detection: a virtual ray is shot from the camera, and the first object the ray collides with is the object the user is focusing on.
    It can be hard to keep the gaze cursor on an object which is rendered very small, since there is always a slight head movement. One way to work around this is to use a larger collision volume for the object, which basically increases the collision hit surface, making gazing easier. This can also be made dynamic and combined with LOD, so the larger collision volume is only used from a certain distance or the volume size is dynamically adapted.
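    A sketch of such a dynamic collision volume (all names and the specific radii are my own illustrative choices):

```csharp
using UnityEngine;

public class GazeTargetHelper : MonoBehaviour
{
    public SphereCollider proxyCollider;   // invisible, larger-than-visual hit volume
    public float nearRadius = 0.05f;       // collider radius up close (meters)
    public float farRadius = 0.15f;        // collider radius at maxDistance
    public float maxDistance = 5f;         // distance at which farRadius applies

    void Update()
    {
        // Grow the hit volume with distance so that a small, far-away
        // hologram is still easy to hit with the gaze ray.
        float distance = Vector3.Distance(
            Camera.main.transform.position, transform.position);
        float t = Mathf.Clamp01(distance / maxDistance);
        proxyCollider.radius = Mathf.Lerp(nearRadius, farRadius, t);
    }
}
```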

  7. Pay attention to the Gaze Cursor

    Many HoloLens apps show the Gaze Cursor most of the time, so make sure that it works very well and reacts quickly to the user's input.
    Gazing depends on head rotation, which is measured using an Inertial Measurement Unit (IMU). Like every sensor in the world, the IMU produces raw data that contains noise. This noise makes the gaze target jitter, causing an undesired slight movement of the Gaze Cursor. To avoid this, the raw sensor data is filtered and post-processed to smooth out the signal noise. There are many smoothing algorithms available, ranging from simple low-pass filters to more complex ones like Double Exponential or Kalman filters. The HoloToolkit already includes a smoothing algorithm you can use or modify. Regardless of which filter you choose, just make sure to use one and smooth the gaze input.
    Another important piece of feedback for the user is to show the hand-ready state, indicating that the HoloLens can see the hand and is able to recognize the performed gestures.
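    To make the low-pass idea concrete, here is a simple exponential smoothing sketch for the gaze direction (my own minimal version, not the HoloToolkit implementation):

```csharp
using UnityEngine;

public class SmoothedGazeCursor : MonoBehaviour
{
    [Range(0f, 1f)]
    public float smoothing = 0.25f; // higher = snappier, lower = smoother

    private Vector3 smoothedDirection;

    void Start()
    {
        smoothedDirection = Camera.main.transform.forward;
    }

    void Update()
    {
        // Exponential (low-pass) filter: blend the previous filtered value
        // with the new raw sample instead of using the raw sample directly.
        Vector3 raw = Camera.main.transform.forward;
        smoothedDirection = Vector3.Slerp(smoothedDirection, raw, smoothing);

        // Use the filtered direction for the gaze raycast.
        RaycastHit hit;
        if (Physics.Raycast(Camera.main.transform.position,
                            smoothedDirection, out hit))
        {
            transform.position = hit.point; // place the cursor on the surface
            transform.forward = hit.normal; // align it with the surface
        }
    }
}
```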

  8. Animate transformation changes

    Objects in the real world do not just appear or disappear; maybe only ghosts do (if you believe in such things). In the real world, objects move in, grow in size or shrink, etc. It is easy, though, for a virtual object to just be enabled or disabled. If you want to make a great Mixed Reality experience, you want your virtual objects to behave like real-world objects. This means fading and transitions between states are crucial for a great UX.
    For example, watch this video where a model of all Hawaiian Islands is swapped out with a model of just one of those islands. Without any transitions, it breaks the experience. Now compare it with this video where the models are scaled and cross-faded. Much better.
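    A scale-in transition like the one in the second video can be sketched with a simple coroutine (my own minimal version):

```csharp
using System.Collections;
using UnityEngine;

public class ScaleTransition : MonoBehaviour
{
    public float duration = 0.5f; // transition length in seconds

    // Scales the object from zero to its target size instead of
    // popping it into existence.
    public IEnumerator ScaleIn(Vector3 targetScale)
    {
        float elapsed = 0f;
        transform.localScale = Vector3.zero;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            // SmoothStep eases the motion in and out.
            float t = Mathf.SmoothStep(0f, 1f, elapsed / duration);
            transform.localScale = Vector3.Lerp(Vector3.zero, targetScale, t);
            yield return null; // wait one frame
        }
        transform.localScale = targetScale;
    }
}

// Usage when swapping models:
//   StartCoroutine(newModel.GetComponent<ScaleTransition>()
//       .ScaleIn(Vector3.one));
```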

  9. Avoid the long deployment cycle, leverage Unity’s main benefit

    One of Unity's main benefits is the WYSIWYG game view, where one can change parameters on-the-fly and see the results immediately. It is a huge development performance gain. Deploying from Unity to the HoloLens device or even the emulator takes its time: first you need to create the package from within Unity's Build Settings, then you build the generated Visual Studio solution and wait until the app package is deployed. Having to wait every single time you want to test simple gaze-gesture interactions kills productivity. I have written a script which simulates gazing and some gestures with the mouse and keyboard, allowing basic testing inside Unity's game view. This has proven to be a huge timesaver already.
    Of course you should test on a real device in the end. Nothing comes as close to the device as the device itself; this is even more relevant for HoloLens with its unique Holographic Frame size, spatial mapping and mobile performance.
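    The core idea of such a simulation script can be sketched as follows; this is not my actual script, just a minimal illustration (names, keys and the OnSelect message convention are assumptions):

```csharp
using UnityEngine;

#if UNITY_EDITOR
// Attached to the camera, this lets you look around with the mouse and
// fake an air-tap with a key, so basic gaze/gesture logic can be tested
// in the Unity game view without deploying to the device.
public class EditorGazeSimulator : MonoBehaviour
{
    public float lookSpeed = 2f;
    private float yaw, pitch;

    void Update()
    {
        // Dragging with the right mouse button rotates the "head".
        if (Input.GetMouseButton(1))
        {
            yaw += Input.GetAxis("Mouse X") * lookSpeed;
            pitch -= Input.GetAxis("Mouse Y") * lookSpeed;
            transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
        }

        // The space bar simulates an air-tap on the gazed-at object.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            RaycastHit hit;
            if (Physics.Raycast(transform.position, transform.forward, out hit))
            {
                hit.collider.SendMessage("OnSelect",
                    SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
#endif
```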

  10. Reuse Unity script code with a custom plug-in assembly

    When you have multiple projects, you usually also have some script code that is reused across them. The naïve approach would be to copy those script files between the projects' Assets folders, which quickly causes maintenance issues when you have to apply changes to all of the copies. A better approach is to have a central C# project containing all the reusable code files. The built assembly DLL is then just copied into the Unity project's Assets\Plugins folder. There is one small but important difference: the Unity Editor uses the Mono .NET runtime and not the UWP/WSA .NET runtime shipping with Windows 10 on HoloLens, therefore you have to build the assembly for both targets and apply some special settings in the Unity Inspector.

Hopefully these 10 recommendations for HoloLens development will help you to avoid some pitfalls and increase your productivity for creating awesome HoloLens experiences!

Wednesday, March 30, 2016

Holo Dev 101 - Getting started with HoloLens Development

The HoloLens SDK was just made available during Microsoft’s largest developer conference /build in San Francisco.
Without a doubt, the HoloLens is the most impressive device to hit the market in recent history. It is mind-blowing and revolutionary, but it is also surprising how familiar the development experience is for many developers, given how different the actual user experience is.

The first dev kit devices are also shipping to a few lucky developers, but you can actually start developing for HoloLens even without a device. The emulator works very well: it nicely emulates gazing and the most important gestures, and even the speech recognition engine and spatial sound are emulated very well.

Here are the main points to help you to get started developing for HoloLens:

  1. Make sure you have a Windows 10 machine matching these requirements.

  2. Download the SDK from the official site and follow the installation steps.

  3. Decide which development route you want to take. You can either:

    1. Build on or reuse your (XAML/HTML) UWP apps. These will run as 2D apps in a hosted window, but even some HoloLens-specific APIs can be used from a UWP app.

    2. Build holographic apps augmenting the real world with 3D content.
      There are two development approaches for holographic 3D apps:

      1. Unity 3D: The HoloLens SDK integrates with Unity 5 and it’s a great way to get started quickly and build some great Mixed Reality experiences.

      2. DirectX: HoloLens is a Windows 10 device, and you can render 3D content using DirectX. DirectX is a great option if you have a custom 3D engine you want to leverage. You can use C++ or even C# via SharpDX. Visual Studio includes templates for both.

      It’s also worth mentioning that 2D apps can work with multiple windows at the same time, while 3D apps run in an exclusive full-screen mode allowing only one window at a time. Honestly, 3D is king on a device rendering stereoscopic 3D into the real world, so you probably want to go beyond 2D UWP apps.
      At IdentityMine we leverage the great productivity of Unity 3D but also have our own DirectX-based engine running on HoloLens.

  4. Take some time to watch the great tutorials and also find help from other developers.

  5. Build awesome apps and release them in the Windows Store!

Thursday, December 31, 2015

Keep on Climbing - Recap of 2015

It's the last day of 2015 and therefore time for the annual recap of the year. For me it was a great year, on the private side of life as well as the professional life.

I'm still working from home for one of the best interactive agencies in the world with a group of fantastic people. I'm happy that I got back into computer graphics programming this year, being able to work in the emerging space of virtual reality with devices like the Oculus Rift, Google Cardboard, Samsung Gear VR and more. Plus the next chapter, which is being opened with new innovations in the field of Augmented and Mixed Reality.
I was also able to do a few talks at conferences and honored to speak at the //build developer conference again this year, this time together with my good friend and colleague Laurent.
Oh, and I got promoted to Lead UX Developer as well in 2015.

It was also a great year in my private life. My wife and I had our 10 year wedding anniversary and I love her more every day. Our four daughters are doing fine in (pre-)school and bring us joy every day.
For me personally it was an intense change too when I decided in February 2015 that it's finally time to lose some weight. By now I have lost about 35 kilograms (77 lbs) by decreasing the energy input from (unhealthy) food and increasing the energy output with cycling. I likely turned a few more kilograms of fat into muscle through the training as well. I did some sweet bike rides, like a nice 113 km tour around Lake Washington and Lake Sammamish in October when I was in the Seattle area, and another 100 km+ ride a few days ago climbing into the Erzgebirge (Ore Mountains). In December alone I managed to ride more than 500 km while climbing 5200 meters up. Glad I reached this point, considering I had 35 kg more to carry when I started in February.

For the next year I want to reduce my weight just a bit more and I'm also planning to ride my first Mountainbike Cross Country Marathon, the famous Erzgebirge Bike Marathon and some other nice challenges.
In the tech field I hope to be renewed as a Windows Platform Dev MVP and to work more in the field of VR, AR and MR. I have the feeling another UX revolution is happening right now. It's an exciting time to be a developer for sure.

Bring it on 2016!