Tuesday, May 24, 2011

Augmented Mango - SLARToolkit for Windows Phone

The beta of the new Windows Phone Developer Tools was just publicly released. The update with the codename "Mango" comes with many new APIs and will finally contain an API for real-time camera access, which a lot of developers have been asking for. The new runtime gives us the needed functionality to implement many cool scenarios. One of these scenarios is Augmented Reality, which leads to my open source Silverlight Augmented Reality Toolkit (SLARToolkit).
This post announces the new Windows Phone version of SLARToolkit and also provides a sample. If you're one of those lucky people with a Mango-enabled device you can download the XAP here or just watch a video instead.

The SLARToolkit project description from the CodePlex site:
SLARToolkit is a flexible marker-based Augmented Reality library for Silverlight and Windows Phone with the aim to make real-time Augmented Reality applications with Silverlight as easy and fast as possible. It can be used with Silverlight's Webcam API, with any other CaptureSource or WriteableBitmap, or with the Windows Phone's PhotoCamera. SLARToolkit is based on the established NyARToolkit and ARToolkit.
The sample XAP can be deployed to a Mango-enabled device (tested with build 7629). Alternatively there's also a video of the new sample embedded below.
If you want to try it yourself you need to download the SLAR and / or L marker, print them and point the camera at them. The marker(s) should be printed non-scaled at the original size (80 x 80 mm) and centered so that a small white border remains. As an alternative it's also possible to open a marker file on a different device and to use that device's screen as the marker.
See the SLARToolkit Markers documentation for more details.

I've recorded a short video of the new sample with my Samsung Omnia 7. It's a bit blurry, but it demonstrates how well the sample works even on this quite old ASUS prototype, whose camera pipeline seems a bit slow.
The video is also available at YouTube.

Background music is Melo by Mosaik

This demo shows how the new Windows Phone Mango real-time camera API can be used to augment reality with the help of the SLARToolkit. This can be nice for educational projects, and it's actually no problem to add correctly transformed videos or other content to the demo.
The demo uses just some basic UIElements like a TextBox and an Image control. Mango will also enable the combination of Silverlight and XNA, which means that nice 3D AR games can be developed with the help of the SLARToolkit. 

How it works
This sample uses the new PhotoCamera and a timer to constantly grab a snapshot of the real-time camera stream. This snapshot is then passed to the SLARToolkit algorithms to get the 3D spatial information of the marker. The computed detection results are used to transform the elements with the correct perspective.

protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
   // Initialize the camera
   photoCamera = new PhotoCamera();
   photoCamera.Initialized += PhotoCameraInitialized;
   isInitialized = false;

   // Fill the Viewport Rectangle with the VideoBrush
   var vidBrush = new VideoBrush();
   vidBrush.SetSource(photoCamera);
   Viewport.Fill = vidBrush;

   // Start the timer that runs the detection every 50 milliseconds
   dispatcherTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(50) };
   dispatcherTimer.Tick += (sender, e1) => Detect();
   dispatcherTimer.Start();

   base.OnNavigatedTo(e);
}

The PhotoCamera instance is set up in the OnNavigatedTo event handler of the page and the DispatcherTimer is started. The timer calls the Detect method every 50 milliseconds. Additionally, a viewfinder Rectangle is filled with a VideoBrush, which in turn has the photoCamera's video stream set as its source.
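For completeness, the Viewport element referenced in the code-behind is just a plain XAML Rectangle whose Fill is assigned at runtime. Its markup might look like this (a sketch; the size attributes are assumptions, not taken from the sample):

```xml
<!-- Viewfinder: the VideoBrush is assigned to Fill in OnNavigatedTo -->
<Rectangle x:Name="Viewport" Width="640" Height="480" />
```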

void Detect()
{
   if (!isInitialized)
   {
      return;
   }

   // Update the buffer size if needed
   var pixelWidth = (int)photoCamera.PreviewBufferResolution.Width;
   var pixelHeight = (int)photoCamera.PreviewBufferResolution.Height;
   if (buffer == null || buffer.Length != pixelWidth * pixelHeight)
   {
      buffer = new byte[pixelWidth * pixelHeight];
   }

   // Grab a snapshot of the luminance data
   photoCamera.GetPreviewBufferY(buffer);

   // Detect the markers
   var dr = arDetector.DetectAllMarkers(buffer, pixelWidth, pixelHeight);

   // Calculate the projection matrix
   if (dr.HasResults)
   {
      // Center at origin of the 256x256 controls
      var centerAtOrigin = Matrix3DFactory.CreateTranslation(-128, -128, 0);
      // Swap the y-axis and scale down by half
      var scale = Matrix3DFactory.CreateScale(0.5, -0.5, 0.5);

      // Calculate the complete transformation matrix based on the first detection result
      var world = centerAtOrigin * scale * dr[0].Transformation;

      // Viewport transformation
      var viewport = Matrix3DFactory.CreateViewportTransformation(pixelWidth, pixelHeight);

      // Calculate the final transformation matrix by using the camera projection matrix
      var m = Matrix3DFactory.CreateViewportProjection(world, Matrix3D.Identity, arDetector.Projection, viewport);

      // Apply the final transformation matrix to the controls
      var matrix3DProjection = new Matrix3DProjection { ProjectionMatrix = m };
      Txt.Projection = matrix3DProjection;
      Img.Projection = matrix3DProjection;
   }
}

A snapshot of the current preview buffer is taken in the Detect method using the GetPreviewBufferY method. This method fills a byte buffer with the luminance data of the current viewfinder frame. This buffer is then passed to the SLARToolkit marker detector, whose DetectAllMarkers method returns the detected marker information. This transformation data is then used to transform the UIElements with the correct perspective in 3D.
Read more about the PhotoCamera's YCbCr methods in this blog post.
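To illustrate what the Y buffer contains, here's a small sketch that expands each 8-bit luminance value into an opaque gray ARGB pixel, e.g. for dumping a viewfinder frame into a WriteableBitmap's Pixels array. This per-pixel conversion is exactly the kind of work the detector avoids by consuming the gray buffer directly (the helper name is mine, not part of the sample):

```csharp
// Sketch: expand an 8-bit luminance (Y) buffer into 32-bit ARGB gray pixels.
static int[] LuminanceToArgb(byte[] yBuffer)
{
    var pixels = new int[yBuffer.Length];
    for (var i = 0; i < yBuffer.Length; i++)
    {
        int y = yBuffer[i];
        // Opaque alpha, then the same luminance value for R, G and B
        pixels[i] = (255 << 24) | (y << 16) | (y << 8) | y;
    }
    return pixels;
}
```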

void PhotoCameraInitialized(object sender, CameraOperationCompletedEventArgs e)
{
   // Initialize the detector
   arDetector = new GrayBufferMarkerDetector();

   // Load the marker pattern. It has 16x16 segments and a width of 80 millimeters
   var marker = Marker.LoadFromResource("data/Marker_SLAR_16x16segments_80width.pat", 16, 16, 80);

   // The perspective projection has the near plane at 1 and the far plane at 4000
   arDetector.Initialize(photoCamera.PreviewBufferResolution.Width, photoCamera.PreviewBufferResolution.Height, 1, 4000, marker);

   isInitialized = true;
}

The SLARToolkit's GrayBufferMarkerDetector is created and set up in the PhotoCamera's Initialized event handler. The brand new GrayBufferMarkerDetector uses the byte buffer with luminance data directly, without needing a conversion to 32-bit ARGB pixels.

Check out the source code at CodePlex if you want to see all the details of the sample that were left out here for clarity.

Download it, build your app and augment your reality
The open source SLARToolkit library and all samples are hosted at CodePlex. If you have any comments, questions or suggestions, don't hesitate to write a comment, use the Issue Tracker on the CodePlex site or contact me via any other medium.
Have fun with the library and please keep me updated if you use it anywhere so I can put a link on the project site.



  2. Thanks for your very useful article. Can you please tell me in which namespace I can find the GrayBufferMarkerDetector class? Thanks in advance!

  3. Hello, I have a question.

    Why do you use the GrayBuffer, a class that is not documented on the SLARToolkit website, instead of the CaptureSource?

    Because the GrayBuffer is way less efficient than the other, especially on a white table.

  4. Please see the answer to your question here: http://slartoolkit.codeplex.com/discussions/361772

    BTW, there's no need for posting the same stuff here and at the CodePlex discussion site (which is a better place for such questions in general).

  5. After a long time looking for GrayBuffer or GrayBufferMarkerDetector, I conclude that there is no class like GrayBuffer or GrayBufferMarkerDetector. Not in SLARToolkit, not in the WP7 API, not on Google...


  6. What's that? ;)

  7. Thanks, but what is wrong with my solution? Take a look:


    Should I recompile the library from the latest source? Now I'm using the recommended binary (version


  8. Yes, please build the lib yourself from the source code.

  9. Dominik, I figured out your problem.

    If you download the source code revision (72933) as a .zip file from the CodePlex site, it certainly does contain four files in the Detector\Bitmap directory. However, opening the solution reveals that "GrayBufferMarkerDetector.cs" and "GrayBufferReader.cs" are not included in the project. Simply add them and recompile; the code will be

    That's why he couldn't find them.

  10. @Usman: What project / solution are you referring to? I just built the Windows Phone project and checked it; all needed files are included in the project.

  11. Rene, never mind. I was only trying to tell Dominik in my own way that he might be messing with the "SLARToolKit" solution rather than "SLARToolKitWinPhone", lol. You may delete the comments if you feel like it :)

  12. Rene one question though,

    I was under the impression that the .pat files used in the sample are Photoshop pattern files; however, I am unable to load them in GIMP with the plugin that usually works for normal Photoshop pattern files.

    I am done with the code for a Windows Phone app and now actually need to load various different markers (.pat files), but I am not able to open the .pat file included in the source code before I test my custom markers.

    Neither am I able to generate custom markers using http://www.roarmot.co.nz/ar/

    Is there a specific format for these .pat files, and where can I read about it? (Hopefully that will take care of the rest.)

  13. Just open one in a text editor and you will see. I'm sure you can easily create one yourself. ;)
    But this tool can help you: http://flash.tarotaro.org/ar/MarkerGeneratorOnline.swf

  14. I appreciate that; however, I have been banging my head against the wall for the last 3 hours without luck trying to understand the format properly. Is this an industry standard at all, if you happen to know? It is certainly not the format pointed to in the following blog that works with Revvit,


    I am now looking to write a desktop app first to generate .pat files from normal images and vice versa, since having custom .pat files is at the heart of any other project anyone has to work on.

    Is this your own format? I did parse the 16-segment .pat files in a text viewer. I can clearly see 16 columns, but how are the color values put in those arrays? Should I regard them as RGB or monochrome? They sometimes seem like layers to me.

    Please spare a moment explaining it and I will post the source code for reading pat files for others. Many Thanks

  15. You see, the reason I am confused about layers is that the "L" segment in a text viewer would have been complete with 16 rows and 16 columns, but not only is each sample repeated three times, it is also filled with all four rotation angles. That's making me confused as to why...

    I am sorry to be a pain but if the format is not proprietary a blog post on that is worth praising.

  16. Why don't you use the online tool to generate the .pat file? It has always worked nicely for me. I think there's also a desktop tool by ARToolKit.
    I don't have time right now to explain it, but in the source code you will find the .pat reader. See NyARCode.loadARPatt(...)

  17. Could you upload a beginner's guide (step by step) for Windows Phone 8 apps?

  18. Could the camera detect a marker ID, or could the Silverlight 5 toolkit define an ID for any marker?
    I ask this question because I don't want to add a lot of markers to my application; I want my application to detect the marker and its ID, and then show a special thing that depends on that marker ID.

    1. See http://slartoolkit.codeplex.com/discussions/212943

  19. When I open this sample and build it, Visual Studio 2013 shows me this error message:
    "The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Silverlight for Phone\v4.0\Microsoft.Silverlight.WindowsPhone71.Overrides.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. C:\Users\DoOs\Desktop\trunk\SLARToolKit\Source\SLARToolKit\SLARToolKitWinPhone.csproj 224 3 SLARToolKitWinPhoneSample"

    1. Seems like you don't have the WP 7.1 Tooling / SDK installed. Either you install it or upgrade the project to WP 8.0

    2. Please reply: how do I add a new marker to this sample? I add it under "data" in the Solution Explorer (Add - Existing Item - and select it) and then write this statement (var marker2 = Marker.LoadFromResource("data/sa.pat", 16, 16, 80);), but when I run the sample an exception occurs and the Application_UnhandledException method from App.xaml.cs is called.

    3. In your VS Solution Explorer, make sure you set the Build Action of that marker to Resource.

  20. Please, how can I show several things at the same time when the camera detects the marker, such as a TextBlock behind a TextBox?

    1. Not sure what you mean, but the z-order of XAML controls is defined by the order in which they are written in the XAML markup, or the order in which they are added in code-behind.

  21. This sample contains many bugs: the camera detects a marker and shows the TextBox or Image, but when the camera moves away from the marker, the TextBox remains on the screen at a small size, and sometimes the TextBox is reversed on screen. Do you have a solution for these bugs?

    1. These are not bugs. The sample is kept simple for better readability; it's not a full-blown app. If you want to handle such cases, it should be rather easy: set the Visibility of the controls to Collapsed and make only those Visible which actually have a marker detected. You should also check for negative scaling values in the transformation matrix and not apply such a matrix if you want to prevent the reversed TextBox.
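    A rough sketch of that suggestion inside the Detect method (ComputeViewportProjection is a hypothetical helper standing in for the sample's matrix calculation, and the negative-scale check is just one simple heuristic):

```csharp
// Sketch: hide the controls while no marker is detected,
// and don't apply a transformation that would mirror the content.
var visibility = dr.HasResults ? Visibility.Visible : Visibility.Collapsed;
Txt.Visibility = visibility;
Img.Visibility = visibility;

if (dr.HasResults)
{
    // Hypothetical helper wrapping the sample's existing matrix math
    var m = ComputeViewportProjection(dr[0]);

    // Negative scale components flip the content, which causes the reversed TextBox
    if (m.M11 >= 0 && m.M22 >= 0)
    {
        var projection = new Matrix3DProjection { ProjectionMatrix = m };
        Txt.Projection = projection;
        Img.Projection = projection;
    }
}
```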