Friday, May 1, 2015

Magic Moments - Recap of the Holographic Academy


It's the afternoon of April 30th, 24 hours after I was lucky enough to be part of the first group of external developers worldwide to receive the Holographic Academy training, getting in-depth, hands-on time with HoloLens prototype hardware and actually deploying apps to it during a 4-hour course.
There was no NDA I had to sign, so here's a quick recap. I actually double-checked with some of the Microsoft mentors there that no NDA was in place.
They also released a video in the meantime, in which I have my 3 seconds of fame as well.

tl;dr
The HoloLens is the most impressive device I've ever tried. It's mind-blowing. The Holographic Academy workshop was fun and accessible for all kinds of developers.

Setting
This event took place at a hotel near the //build conference center, where the HoloLens team basically got a whole floor. There were multiple security checks and lots of guards standing everywhere. I had to lock away my stuff, including my smartphone, so no photos or notes were allowed, but I got some nice HoloLens swag in exchange. After passing the 2nd or 3rd security checkpoint, a team member measured my IPD (65.5 mm). Then I passed another guard and was finally escorted into a fancy room with a few dozen attendees, lots of cameras, many HoloLens team members who helped the attendees as mentors, development PC setups and, most importantly, lots of HoloLens prototype devices (HL).

Hardware
The HL prototypes already looked very polished and have the computing units integrated. Still, the devices we tried had clear laser-printed labels on them: "Prototype" and "Not FCC approved...".
After adjusting the HL to the head, it's quite comfortable to wear, not too heavy, and it has no annoying fan noise. Unlike other devices, the HL also works well when the user is wearing glasses, which is good for people like me.
So far only a few official pieces of information about the hardware are available, and no further details were shared at the event. Even though I kept asking, the answer was always: "We are not ready to talk about that today." I heard that often, but I was allowed to take an in-depth look at the devices. The following is based on my personal analysis, so take it with a grain of salt.
There are at least 5 cameras on the front: 1 in the center and 2 on each side, where 1 of the 2 points to the side and the other to the front. I suspect those 4 are the depth cameras. Covering 2 of those 4 with my hands still didn't break the recognition, which was impressive. It was hard to judge the minimum and maximum distance it supports, but I'd say it was around 0.5 - 5 m, or, just like the Kinect v2, in the range of 0.8 - 4 m.
The projection screens for the left and right eye are not just flat 2D displays but actually a kind of layered screen. I spotted 3 layers for each eye, and it seemed like each layer is for either the red, green or blue component. These special screens are key for seamlessly merging the virtual objects into the real-world environment, and they are the reason for calling the HoloLens a holographic device. The actual screens were a bit small on the HL prototypes they had available, so the field of view was narrower than expected, but it's likely the next generation and consumer devices will improve this. The front also had a few tiny holes, which I believe are microphones for the amazing speech recognition and sound analysis. For audio output there are small speakers in the headset band. They provide an immersive spatial sound I've never experienced before. More about that below.
The device is charged, and connected to the PC for deployment, via Micro USB. There also seems to be a small, special motherboard connector at the top of the front.

Walkthrough
The actual academy used Unity to create a 3D scene. From within Unity, a Visual Studio UAP solution was generated, which was then deployed to the HL. Ctrl+F5 onto HoloLens! W00t!
I heard there are different tutorials for the academy. My group was doing the Origami tutorial, which means the base was a 3D scene with various Origami paper-like 3D models, or Holograms as the presenters called them.
The first chapter covered how to set up the device and run a little demo app where a virtual RC truck could be driven and pushed through the real-world room. On the floor, on the table, on the sofa, on humans: drive the virtual truck everywhere in the room!
The second chapter walked us through the process of getting the base Unity Origami scene deployed onto the device, exploring it from different angles in the room and letting it collide with real-world objects.
The next step added Unity rigid body physics to the virtual 3D objects, plus a virtual indication of where one is looking while rotating the head. It was basically a torus projected onto the scenery, on top of virtual and real-world objects. This is called Gaze. We also added an Air Tap gesture, where one taps the index finger in the air. Together with the raycasting of Gaze, the gesture tells you which object in 3D space was selected and basically works like a spatial touch tap / mouse click interaction trigger. Gaze is the mouse cursor movement in 3D space, if you will, and the Air Tap is the mouse click. In the case of the Origami demo it triggered a physics action that let Origami paper balls fall down. All this dynamic functionality was implemented using Unity C# scripts.
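To give an idea of what such a script looks like, here's a minimal Unity C# sketch of the Gaze raycast idea; the actual academy scripts differed, and GazeIndicator and OnSelect are just hypothetical names for illustration:

 // Hypothetical gaze cursor script, attached to the torus indicator object.
 // A minimal sketch of the idea; the academy's actual scripts differed.
 using UnityEngine;

 public class GazeIndicator : MonoBehaviour
 {
  void Update()
  {
   // The HoloLens renders from the user's head position,
   // so the main camera's forward vector is the gaze ray.
   var headPosition = Camera.main.transform.position;
   var gazeDirection = Camera.main.transform.forward;

   RaycastHit hitInfo;
   if (Physics.Raycast(headPosition, gazeDirection, out hitInfo))
   {
    // Project the torus onto whatever is being looked at,
    // whether a hologram or a mapped real-world surface.
    transform.position = hitInfo.point;
    transform.rotation = Quaternion.LookRotation(hitInfo.normal);

    // An Air Tap would then notify the gazed-at object, e.g.:
    // hitInfo.collider.SendMessage("OnSelect");
   }
  }
 }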
After that, speech recognition was added in a C# script, and it was very impressive. I just had to add an ordinary string like "Reset Scene" to a script, without any pre-defined grammar, and it just worked. My HL device even responded only to my voice, meaning my HL did not react to the same voice commands another attendee was saying.
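The academy provided its own helper scripts for this. As a rough illustration of the idea, here's a minimal sketch using Unity's KeywordRecognizer, which shipped in later Unity versions, so this is an assumption about the approach, not the academy's actual code:

 // Minimal sketch: register a plain string as a voice command,
 // no pre-defined grammar needed. ResetScene is a hypothetical handler.
 using UnityEngine;
 using UnityEngine.Windows.Speech;

 public class SpeechCommands : MonoBehaviour
 {
  private KeywordRecognizer recognizer;

  void Start()
  {
   recognizer = new KeywordRecognizer(new[] { "Reset Scene" });
   recognizer.OnPhraseRecognized += args =>
   {
    if (args.text == "Reset Scene")
    {
     ResetScene();
    }
   };
   recognizer.Start();
  }

  void ResetScene()
  {
   // Put the Origami scene back into its initial state.
  }
 }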
Later, spatial audio was added, which means playing sounds in the 3D virtual environment for the 3D object interactions. This was mind-blowing! The small speakers make the sound appear spatially, coming from the scenery and around it. It's totally different from just headphones. You actually think other people in the room hear the same, but they don't.
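In Unity terms this boils down to fully 3D positioned audio sources. A minimal sketch, assuming an impactClip has been assigned in the editor; the HoloLens HRTF spatializer configuration is omitted here:

 // Minimal sketch: play a sound positioned in 3D space where a
 // hologram collides, e.g. an Origami ball hitting the floor.
 using UnityEngine;

 public class ImpactSound : MonoBehaviour
 {
  public AudioClip impactClip;
  private AudioSource source;

  void Start()
  {
   source = gameObject.AddComponent<AudioSource>();
   source.clip = impactClip;
   source.spatialBlend = 1.0f; // 1 = fully 3D spatialized
  }

  void OnCollisionEnter(Collision collision)
  {
   // The sound appears to come from this object's position in the room.
   source.Play();
  }
 }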
Then we added so-called spatial mapping, which means overlaying real-world objects with a virtual layer / net. In the academy we overlaid it just with wireframe polygons, but trying it on the HL is a very impressive experience.
In the last chapter we removed parts of the real-world floor by overlaying it with a virtual 3D animation that showed a crack in the floor, with Origami birds flying in the underground. The screens of the HL are bright and saturated enough that the animation actually overlays the real world without transparency, and one could walk around and look into the underground from different angles. Mind blown again!

Further thoughts
There were more chapters / steps in-between for the Origami tutorial, some in a different order, but I think I covered the key parts above. I've never really used Unity before, and I think it was a good choice by the HoloLens team for this //build Holographic Academy, as it's accessible to many developers who have never done any 3D computer graphics before and it quickly provides nice results. I myself would rather have gotten my hands on the underlying Direct3D surface to render my own 3D content. Honestly, I was a bit bored by Unity, so I added my own custom speech commands for more physics fun, explored the real HoloLens API with Visual Studio's Object Browser on the side, and found some cool stuff in there.
It is great that the HoloLens team lets developers get their hands on real device development less than 100 days after showing the first prototypes to the public. Kudos!

Final words
The HoloLens is the most impressive Augmented Reality experience I've ever tried. It's real, not just smoke and mirrors! There are so many innovations in it besides the seamless merging of the virtual and real world with the layered holographic screens: spatial audio, perfect speech recognition, spatial gesture recognition, low-latency real-time mapping of the real-world environment, and all of that fully integrated into a single head-mounted device without any external cables needed.
24 hours after attending the academy, my mind is still blown and I can't wait to get my hands dirty on a dev device at home, pushing it to the limits with the real SDK and Direct3D.
Exciting times!

Tuesday, March 31, 2015

Staying Alive! - WriteableBitmapEx 1.5 is out

After a couple of minor updates on top of version 1.0, which led to 1.0.14, I'm happy to announce that WriteableBitmapEx 1.5 is now available.
Many contributions were integrated and lots of bugs fixed. Among those are some nice new color modifications and also the long-awaited DrawLine with variable thickness, pen support, better anti-aliased lines, Cohen-Sutherland line clipping, even-odd polygon filling and alpha-blended shape filling... Read the details at the end of this post or in the release notes, and see the small usage sketch after the change list below.

WriteableBitmapEx supports a variety of Windows platforms and versions: WPF, Silverlight, the Windows 10 Universal App Platform (UAP), Windows 8/8.1, Windows Phone WinRT and Windows Phone Silverlight 7/8/8.1.


You can download the binaries here or via the NuGet package, which contains the WriteableBitmapEx binaries. All samples and the source code can be found in the repository.

A big thank you to all the contributors, bug reporters and users of the library who helped to shape this. You rock!

Changes
  • Added lots of contributions including DrawLine with variable thickness, pen support, improved anti-aliasing and Wu's anti-aliasing algorithm
  • Added usage of Cohen-Sutherland line clipping for DrawLineAa and DrawLine, etc.
  • Added support for alpha blended filled shapes and adapted the FillSample for WPF
  • Added FillPolygonsEvenOdd() which uses the even-odd algorithm to fill complex polygons with more than one closed outline like for the letter O
  • Added AdjustBrightness(), AdjustContrast() and AdjustGamma() methods
  • Added Gray() method which returns the gray scaled version of the bitmap
  • Fixed regression issue with alpha blending for Blit for non-WinRT
  • Fixed bug in Blit Alpha code for WPF when source format is not pre-multiplied alpha
  • Fixed bug #21778 where FromStream for WPF needs to be called inside Init scope
  • Fixed issue with IndexOutOfRangeException in DrawLine method
  • Fixed Invalidate for Silverlight BitmapContext.Dispose
  • Fixed many more reported issues
  • ...

Friday, December 19, 2014

The Walking HeadlessHost - WP 8.1 Feedback Feature and Zombies

Photo by Juanedc

I was recently working on a rather large Windows Phone app project and came across an issue that took many head-scratching hours to figure out. Therefore, here's a short heads-up if you work on Windows Phone apps with background agents and have those running on Windows Phone 8.1 devices: WP 8.1's Feedback setting can influence how your background process behaves.

We are talking about a WP 8.0 app with a UI (TaskHost.exe) and a VoIP background agent (HeadlessHost.exe) communicating with each other through an out-of-process RPC server. This approach follows the WP 8.0 VoIP app architecture. When this 8.0 app runs on 8.1 devices, I've seen sporadic issues where an instance of the HeadlessHost background process was kept alive when the app was deactivated, for example with fast app switching. When the app gets activated again, another instance of the HeadlessHost is started while the previous zombie HeadlessHost is still not dead, so you end up with two HeadlessHost processes, which leads to issues like endless "Resuming..." screens and app crashes when navigating back. This can be seen even without a debugger attached.


If you disable the WP 8.1 device's Feedback setting, that issue is gone and the ForegroundLifetimeAgent's HeadlessHost exits quickly, as expected, when the UI gets deactivated.
I'm not sure why the Feedback feature is related. My best guess right now: the WP 8.1 Feedback feature gathers telemetry data and therefore keeps the HeadlessHost undead. The weird part is that the HeadlessHost is still undead after a minute, even though the background agent never crashed, which means no Watson data should have been collected at all, or at least not for that long.

I'm not sure yet if Silverlight 8.1 apps on 8.1 devices are affected as well, or if it's only an issue with 8.0 apps on developer-unlocked 8.1 devices. Nevertheless, I still wanted to share my experience, since it's a rather unknown case and not much information can be found anywhere else so far.

Beyond Kinect for Windows v2 - KinectEx

Kinect for Windows (K4W) and the Kinect v2 sensor are amazing for NUI development and I'm glad that we at IdentityMine have advanced projects where we can leverage this technology.
The K4W SDK already provides lots of nice APIs and a ton of samples to get started. As usual, there's always more, and an SDK cannot cover everything, but if the official APIs are well built, the open source community is always happy to provide functionality beyond the standard SDK.
One very good open source K4W project is KinectEx, which provides some really cool features one often needs when working with K4W v2. KinectEx brings Kinect data recording and playback APIs that can be integrated into your own apps, multiple skeleton data smoothing algorithms and much more.
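As a rough illustration of the recording API, here's a hedged C# sketch written from memory; the class and property names may differ from the current KinectEx API, so check the project's documentation:

 using System.IO;
 using System.Threading.Tasks;
 using KinectEx.DVR;
 using Microsoft.Kinect;

 // Sketch: record body, color and depth frames to a file with KinectEx.
 async Task RecordSessionAsync()
 {
  var sensor = KinectSensor.GetDefault();
  sensor.Open();

  using (var stream = File.Create("session.kdvr"))
  {
   var recorder = new KinectRecorder(stream, sensor)
   {
    EnableBodyRecorder = true,
    EnableColorRecorder = true,
    EnableDepthRecorder = true
   };

   recorder.Start();
   // ... let the app run and capture frames ...
   await recorder.StopAsync();
  }
 }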
I've contributed a few changes to the project, and the KinectEx author is really open to pull requests. Funnily enough, KinectEx also uses one of my own open source projects, WriteableBitmapEx.

I just wanted to share this very nice open source project and spread the word.
You should definitely look into KinectEx if you are doing any K4W v2 development.

Friday, November 14, 2014

Native Android Application Development with C++ and Visual Studio 2015

Visual Studio 2015 ships with the new C++ cross-platform support, which currently provides native C++ library compilation for the Android and Windows platforms. iOS compiler support is supposed to be added as well.
That's not all: VS 2015 even provides a new project type called "Native-Activity Application" for Android app development. This pure C/C++ Android application model is mainly used for game development with full-screen OpenGL rendering.
The Native-Activity Application project type allows developers to write native Android apps in C++ and even to run and debug them in the Android emulator, using only Visual Studio 2015.
No Xamarin, no Apache Cordova needed, just VS 2015 and pure C/C++.



How it works

It's pretty straightforward to get started.
Just install Visual Studio 2015 including its secondary installer, which adds the required Android development tools.
Start VS 2015 and select the Visual C++ -> Cross Platform -> Native-Activity Application (Android) project type:


The generated template code provides the base native Android app code, including simple code that cycles the back buffer clear color inside the engine_draw_frame function.

For this sample I decided to make the Hello World of 3D computer graphics: a rainbow-colored rotating cube. Here's how you can achieve this as well:

Add the cube definitions to the beginning of the main.cpp:

// Cube corner positions in 16.16 fixed point (0x10000 == 1.0),
// matching the GL_FIXED data type used below.
static GLint vertices[][3] =
{
 { -0x10000, -0x10000, -0x10000 },
 { 0x10000, -0x10000, -0x10000 },
 { 0x10000,  0x10000, -0x10000 },
 { -0x10000,  0x10000, -0x10000 },
 { -0x10000, -0x10000,  0x10000 },
 { 0x10000, -0x10000,  0x10000 },
 { 0x10000,  0x10000,  0x10000 },
 { -0x10000,  0x10000,  0x10000 }
};

// One RGBA color per cube corner, also in 16.16 fixed point.
static GLint colors[][4] =
{
 { 0x00000, 0x00000, 0x00000, 0x10000 },
 { 0x10000, 0x00000, 0x00000, 0x10000 },
 { 0x10000, 0x10000, 0x00000, 0x10000 },
 { 0x00000, 0x10000, 0x00000, 0x10000 },
 { 0x00000, 0x00000, 0x10000, 0x10000 },
 { 0x10000, 0x00000, 0x10000, 0x10000 },
 { 0x10000, 0x10000, 0x10000, 0x10000 },
 { 0x00000, 0x10000, 0x10000, 0x10000 }
};

// Index buffer: two triangles per cube face, 36 indices in total.
GLubyte indices[] = {
 0, 4, 5,    0, 5, 1,
 1, 5, 6,    1, 6, 2,
 2, 6, 7,    2, 7, 3,
 3, 7, 4,    3, 4, 0,
 4, 7, 6,    4, 6, 5,
 3, 0, 1,    3, 1, 2
};


And replace the // Initialize GL State statements at the end of the engine_init_display function with this:

 // Initialize GL state.
 glDisable(GL_DITHER);
 glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
 glClearColor(1.0f, 0.41f, 0.71f, 1.0f); // Hot pink! :D
 glEnable(GL_CULL_FACE);
 glShadeModel(GL_SMOOTH);
 glEnable(GL_DEPTH_TEST);
 glViewport(0, 0, w, h);
 GLfloat ratio = (GLfloat)w / h;
 glMatrixMode(GL_PROJECTION);
 glLoadIdentity();
 glFrustumf(-ratio, ratio, -1, 1, 1, 10);


Finally replace the engine_draw_frame function:

static void engine_draw_frame(struct engine* engine) {
 if (engine->display == NULL) {
  // No display.
  return;
 }

 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 glMatrixMode(GL_MODELVIEW);
 glLoadIdentity();
 glTranslatef(0, 0, -3.0f);
 glRotatef(engine->state.angle * 0.25f, 1, 0, 0);  // X
 glRotatef(engine->state.angle, 0, 1, 0);          // Y

 glEnableClientState(GL_VERTEX_ARRAY);
 glEnableClientState(GL_COLOR_ARRAY);

 glFrontFace(GL_CW);
 glVertexPointer(3, GL_FIXED, 0, vertices);
 glColorPointer(4, GL_FIXED, 0, colors);
 glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);
 
 eglSwapBuffers(engine->display, engine->surface);
}

You also might want to modify the angle increment so the cube rotates a bit faster, e.g. engine.state.angle += 1.f;

Make sure the project is set to compile as x86 in order to use VS's own Android emulator.
Then hit F5 and enjoy the magic of the rotating cube.


Now try adding a breakpoint in your engine_draw_frame function. Yes, there's native debugging support for Android apps in Visual Studio 2015. Times are definitely changing at Microsoft.


Sample code

You can download the complete VS 2015 sample solution here.


More resources

This blog post is a nice overview of the new Visual C++ cross platform features.
This short Ch9 video gives a quick intro to the new Visual Studio 2015 C++ cross platform support.
NeHe is a quite old resource for OpenGL tutorials but mostly still useful.