Friday, October 25, 2013

Build Gear version 0.9.19beta released!

A new version of Build Gear has recently been released.

A lightweight embedded firmware build tool

Build Gear is the open source build tool used to build the Ixonos Embedded Linux BSP for various embedded boards based on a range of different chipsets, including TI OMAP/AM/DM, Freescale i.MX, and Intel Atom/Haswell. This build tool allows us to very effectively create and maintain clean-cut, modern Linux BSP firmware tailored to fulfil the requirements of individual embedded customers.

This release includes a couple of new features and some bug fixes.

One of the interesting new features is a command that creates a software manifest, providing a detailed list of the software components involved in a particular build. This is quite a useful feature when you need an overview of the licenses of the components going into your firmware. In fact, for most projects this is an important feature, since the BSP firmware must be legally approved before going to production.

For more details, see the release announcement here.

The Build Gear tool has been in beta for quite some time, but it has now stabilized to the point where it is ready to move out of beta. It will soon be labelled stable, and a 1.0 release will mark the final transition out of beta.

Expect more posts from me on this build tool and on how and why we use it to create the Ixonos Embedded Linux BSP platform solution.

Keep it simple!

Wednesday, October 16, 2013

Intel Perceptual Computing

Do you remember Minority Report, directed by Steven Spielberg and starring Tom Cruise? With that fancy user interface Tom used when searching for people in the crime database?

Well, it's here now. Not 100%, but we're getting closer.

Intel has published the Perceptual Computing SDK 2012. The SDK is free; all you need is a $149 camera provided by Creative Technology Ltd, a development environment like Visual Studio, and a bit of passion for creating cool software with the greatest user experiences ever.

With the Intel Perceptual SDK you can detect a few hand gestures like the "peace" sign, hand movements, fingers, and swipes, and it provides depth information that tells how far your hand is from the camera. It also detects faces, recognizes voice commands, and so on. The most common development environment is Visual Studio with C++, but you can also work in C# or with the Unity game development tool.


Detecting gestures and face

Some common questions I've been asked about this:
1. Is it stable?
-Pretty much, but I would not volunteer as a patient for a surgical operation if the doctor is performing it remotely with this.
Also, the license strongly advises against using it in any critical systems, like driving cars or controlling aeroplanes.
Damn - I was just about to connect this with an F-18C Hornet!

2. How much does it cost?
The SDK is free; you need the 149 USD camera manufactured by Creative Labs and a development environment. And some time: not that much if you're familiar with the Microsoft Visual Studio tools, and you'll get started pretty fast. The cam itself looks pretty OK, and it's a lot heavier than webcams usually are. Maybe that says something about the quality, or maybe it's just so the heavier cam stays put on top of the monitor(!)

3. Are there any useful apps developed for this?
Check out the results of Intel's Perceptual Computing Challenge at
http://software.intel.com/sites/campaigns/perceptualshowcase/

4. What kind of data can you get from this camera?
From the high-level services provided by the Intel SDK you get the actual image frames, recognized gestures, depth data, hand coordinates, and so on. You can also get the raw data if you wish to do some image and gesture processing yourself. And there is some voice recognition support as well.


The camera at the top of the monitor

Here is some C# code for gesture detection. The cam recognizes a few gestures, like hand waving, the "peace" sign, etc. I used it to control the Windows 8 desktop.


public MyPipeline(Form1 parent, PictureBox recipient)
{
    lastProcessedBitmap = new Bitmap(640, 480);
    this.recipient = recipient;
    this.parent = parent;
    // set up the features we want from the pipeline
    attributeProfile = new PXCMFaceAnalysis.Attribute.ProfileInfo();
    EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_RGB24);
    EnableFaceLocation();
    EnableFaceLandmark();
    EnableGesture();
}

// called by the pipeline whenever a gesture is detected
public override void OnGesture(ref PXCMGesture.Gesture gesture)
{
    switch (gesture.label)
    {
        case PXCMGesture.Gesture.Label.LABEL_POSE_BIG5:
            // avoid firing the same command too many times in a row
            if (sameCommandDelay.AddSeconds(COMMANDELAYINSECONDS) < DateTime.Now)
            {
                sameCommandDelay = DateTime.Now;
                InputSimulator.SimulateKeyPress(VirtualKeyCode.LWIN);
            }
            break;
        case PXCMGesture.Gesture.Label.LABEL_HAND_CIRCLE:
            base.Dispose();
            //parent.Close();
            //Application.ExitThread();
            break;
        case PXCMGesture.Gesture.Label.LABEL_POSE_THUMB_UP:
            if (sameCommandDelay.AddSeconds(COMMANDELAYINSECONDS) < DateTime.Now)
            {
                sameCommandDelay = DateTime.Now;
                VirtualMouse.LeftClick();
            }
            break;
    }
}


Depth data: C++ demo from Intel

Friday, October 11, 2013

Sweet and tasty approach to OpenCV and MinnowBoard

PC-esque cheap hardware is booming, and there seems to be no limit to the cool apps you can create on boards like the BeagleBone, Raspberry Pi, or MinnowBoard.

This obvious trend has had our attention for a long time now, and we have some customer cases going with the basic idea of migrating from expensive legacy systems to cheap off-the-shelf processing boards offering huge capabilities in a small form factor.

Recently, some of our clients have expressed their interest in imaging systems, so we decided to whip up a small demo involving our "Ixonos BSP" small-footprint Linux distro and the industry standard OpenCV imaging library.

In this demo we used the MinnowBoard, Intel's small, low-cost board based on an Atom processor. The camera is a basic USB webcam from Logitech. Pictures below:

The Minnowboard with webcam watching candy drops

The camera setup allows the system to see some candy drops in this rather trivial pattern recognition demonstrator. The system acquires image rasters of the scene using v4l2 and OpenCV. Circle-shaped patterns are detected using the OpenCV function HoughCircles, which is based on the Hough Circle Transform. The code snippet below demonstrates simple circle detection using HoughCircles:
//circle detection
vector<Vec3f> circles;
HoughCircles(detected_edges, circles, CV_HOUGH_GRADIENT,
             1, minSizeThreshold, lowThreshold, lowThreshold/2,
             minSizeThreshold, minSizeThreshold + minSizeThreshold / 2);

printf("total circle count: %zu\n", circles.size());
After all the circles have been detected, they are categorized according to color, and the statistics are printed to the screen.

Candy drops detected
 
Another picture below illustrates a situation with some more candy drops.
More candy drops detected


Ilkka Aulomaa, SW Engineer - Ixonos
Kalle Lampila, SW Engineer - Ixonos