So I skipped the first couple of sessions today. I’m in San Francisco and I need to live a little. I mean they’re all on Channel 9, right?
What did I do instead, you ask? Oh, nothing much. Just a four-hour HoloLens development session! For the record, I did not squeal with delight when I got the confirmation email. I wanted to. I did, however, do a chair dance in the middle of a session, which earned some worried looks from other attendees.
So, the HoloLens. In a word: Wow. It really is a whole new method of computing, inherently different from anything else I’ve experienced (VR included), yet developing for it is very straightforward. It must be stated: if you are familiar with Unity (or another 3D development platform) you should be smiling right now; otherwise, it’s time to learn. For our session we used Unity for the 3D elements and C# for scripting. Other platforms will be supported (this was promised), and there will be more compatible scripting languages.
So, here’s the short story: if you can create a 3D game, with scripting, in Unity, you can be a HoloLens developer. Additionally, any Windows 10 UAP application will run on the HoloLens. They’re calling them Universal (or Windows; we got some mixed language at Build) for a reason. Longer version: you will need to make some considerations for the user experience on the HoloLens.
There’s a set of new input methods to consider: gaze, voice, and “air tapping”. They’re all incredibly intuitive for your user, but a developer needs to be aware of them. For instance, your eyes aren’t the cursor; it’s closer to wherever your nose is pointing. This is a VERY important distinction. Also, failing to support an input type (using only voice, not air tapping) will be disorienting for the user, as will using the wrong input type in a given situation. This is not the platform for someone to just throw together an app. Additionally, you control the origin point, but by default it is the user’s head. Be aware of this for two reasons:
1. Getting too close to a hologram causes clipping.
2. If your holograms are statically positioned, moving your head below the hologram reveals the underside. Skipping the appropriate textures will break immersion.
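In Unity terms, the head-as-origin idea means the main camera tracks the user’s head, and the gaze “cursor” is just a raycast along the camera’s forward vector. Here’s a minimal sketch of that, using my own names and a placeholder cursor object (this is my illustration, not code from the session):

```csharp
using UnityEngine;

// Sketch of a gaze cursor: the HoloLens cursor follows the head's
// forward vector (roughly where the nose points), not the eyes.
public class GazeCursor : MonoBehaviour
{
    public GameObject cursor;           // a small ring or quad rendered at the gaze point
    public float maxGazeDistance = 10f; // how far out to look for holograms

    void Update()
    {
        // On HoloLens, the main camera's transform is the user's head.
        Transform head = Camera.main.transform;

        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, maxGazeDistance))
        {
            // Snap the cursor onto the hologram the user is "looking" at,
            // oriented to the surface so it sits flat against it.
            cursor.transform.position = hit.point;
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            // Nothing hit: park the cursor a couple of meters straight ahead.
            cursor.transform.position = head.position + head.forward * 2f;
            cursor.transform.rotation = Quaternion.LookRotation(head.forward);
        }
    }
}
```

The same camera transform is why the clipping and underside issues above bite: get the head too close to `hit.point` and the near clip plane slices the hologram, and a static hologram above the head shows whatever (if anything) you textured underneath.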
So let’s talk hardware and the usage experience. I wear glasses and have a few eye problems, so vertigo is often a concern for me with VR and the like. I had absolutely no vertigo with the unit, either with or without my glasses. My unit was defective and stopped working partway through the session, but the glitches I had on the first headset couldn’t be reproduced on the second. That gave me a good deal of faith in the hardware, though I’m still hoping it is very robust once it hits production. People will bump into things and knock the HoloLens onto the floor, and if it can’t withstand the abuse it’ll be like a cell phone with a broken screen. I fully expect this to be a fairly expensive piece of tech, so replacing it won’t be an option for most people.
Now for the x-factors. We could ask the HoloLens developers anything, but a lot of it they simply couldn’t (or wouldn’t) tell us. For instance, no answer was given about the resolution of the device. The images looked great, really almost as good as a 3D game on my laptop (mid-range graphics), but the “screen” size was a bit small. If you have a 6″ cell phone, hold it about three inches from your face; the usable screen space on the HoloLens was around that size, maybe a bit smaller or larger. I asked whether full-screen mode, or even resizing, will be included and was met with cagey replies. My gut feeling is that it will be (Windows 10 has resizing built in), but I’m worried about how it will affect resolution. Time will tell.
Lots of people have been making a lot of noise about computing and development changing. Frankly, they’re right; it’s only a question of degree. The possibilities are nigh endless with the HoloLens, and I am very excited to add it to my skill set.