Tuesday, September 24, 2013

Ixonos Multi-Window - 2nd generation!

A couple of years ago Ixonos was the first to introduce a comprehensive solution for running multiple applications in multiple windows on Android.

Now, Ixonos engineers have been working tirelessly to bring this feature to the latest Android versions. The result improves on the original implementation with better performance and some exciting new features.

Ixonos Multi-Window demonstrated on the Nexus 10 tablet running Android 4.2.2.

Among the new features is a really cool "Super Window" mode, which puts selected applications in a grid layout whose windows can be resized or moved around for easy navigation.

Stay tuned for more blog entries on this cool technology...

Vasile Popescu, Chief Software Engineer - Ixonos
Mikkel Christensen, Chief Software Engineer - Ixonos
Henrik Kai, Chief Software Engineer - Ixonos

Thursday, September 19, 2013

Ixonos sensact library – sensor/actuator communication made easy!

Fresh out of the Ixonos engineering labs comes the latest version of the embedded HTML5 industrial prototype that our engineers are working on. It samples sensor data from a Texas Instruments multi-function sensor device (http://www.ti.com/tool/boostxl-senshub) connected via USB; the device includes various sensors (thermometer, compass, gyro, pressure, etc.).

To easily access these sensors we have created a library named “libsensact” which abstracts away the communication channel and the input/output details of sensor and actuator devices – each device is simply characterized by its name and the names of the sensor/actuator variables it provides. The abstracted communication channel can be any of USB, I2C, UART, Ethernet, EtherCAT, CAN, PROFINET, etc. The first version of the library, however, only supports USB.

The library offers simple connection handling functions and get/set functions for retrieving or setting sensor/actuator variable values of various types (char, int, float, etc.).

For example, retrieving the “temperature” and “pressure” values of the multi-function sensor device named “senshub0” is as simple as the following application code:
#include <stdio.h>
#include <unistd.h>
#include "sensact.h" 

#define TIMEOUT 100 // ms

int main(void)
{
  int device;
  float temp;
  float pressure;
  int status;

  device = connect("senshub0");
  if (device < 0)
  {
    printf("Error connecting to senshub0.\n");
    return 1;
  }

  status = get_float(device, "temperature", &temp, TIMEOUT);
  if (status < 0)
    printf("Error fetching temperature.\n");
  else
    printf("Sensor Temperature: %f\n", temp);

  status = get_float(device, "pressure", &pressure, TIMEOUT);
  if (status < 0)
    printf("Error fetching pressure.\n");
  else
    printf("Sensor Pressure: %f\n", pressure);

  disconnect(device);

  return 0;
}
Also, for each sensor/actuator device the library keeps an entry in a device configuration list which holds the device-specific settings. For example, the “senshub0” device is configured by the following code:
#include "device.h"

/* List of supported sensor/actuator devices */ 

struct device_t device[] = 
{ 
  { .name = "senshub0", 
    .description = "TI Sensor Hub USB device", 
    .connection = USB, 
    .vid = 0x1CBE, 
    .pid = 0x0003, 
    .endpoint = 0x1 }, 
  { } 
};
For other connection types there will be different “.connection” values and, correspondingly, different connection-specific configuration fields.
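As an illustration, a hypothetical I2C device entry might look like the sketch below. The `.bus` and `.address` fields are purely illustrative assumptions; the actual field names will depend on how I2C support ends up being implemented in the library:

```c
#include "device.h"

/* Hypothetical I2C entry -- field names are illustrative only */
struct device_t device[] =
{
  { .name = "tempsensor0",
    .description = "Example I2C temperature sensor",
    .connection = I2C,   /* assumed connection type constant */
    .bus = 1,            /* assumed I2C bus number field */
    .address = 0x48 },   /* assumed 7-bit device address field */
  { }
};
```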

Future plans for this library are to add support for more connection types and to extend the API with subscriber-type functionality, so that users can register callback functions to be called upon value changes for devices that support event-driven communication.

Watch out for more blog entries on this library as it evolves...

That's it – keep it simple!

Martin Lund, System Architect - Ixonos

HTML5 Canvas GUIs

The HTML5 Canvas is an excellent platform for portable graphics. It supports hardware graphics acceleration where available, and it doesn't even have to be hard to code.

Recently, we've been building GUIs for embedded devices, and with HTML5 clearly a technology in the ascendancy, we wanted to use web technologies for our embedded demonstrator. Obviously performance was a question, so we decided to work entirely on the HTML5 Canvas element, as used in many graphics-intensive web games.

So how does an HTML5 Canvas approach compare to the more usual approach? Let me show you a short example. First we create an app using HTML/CSS/JS – the traditional way. Then a similar app, built with the Canvas-based GUI library we created, in pure JavaScript.

The app simply animates five icons across the screen, repeating infinitely.
The first part of any HTML app is the HTML. Here we declare the image items that we will animate:

<body onload="someFunction();">
  <img height="64" id="anima" src="png/a.png" width="64" />
  <img height="64" id="animb" src="png/b.png" width="64" />
  <img height="64" id="animc" src="png/c.png" width="64" />
  <img height="64" id="animd" src="png/d.png" width="64" />
  <img height="64" id="anime" src="png/e.png" width="64" />
</body>
Then the CSS. We define the animation that will be shared by all img elements:
@-webkit-keyframes move
{
   0% { -webkit-transform: translateX(0px); }
   100% { -webkit-transform: translateX(500px); }
}

.anim
{
 -webkit-animation-name: move;
 -webkit-animation-duration: 5s;
 -webkit-animation-timing-function: linear;
 -webkit-animation-iteration-count: infinite;
 -webkit-animation-fill-mode: forwards;
 position: absolute;
}

Finally, the JavaScript part, where we first find the img elements we wish to animate, then assign the style we need.

var items = [];

function someFunction()
{
 items.push( document.getElementById('anima') );
 items.push( document.getElementById('animb') );
 items.push( document.getElementById('animc') );
 items.push( document.getElementById('animd') );
 items.push( document.getElementById('anime') );

 for( var i = 0; i < 5; ++i )
 {
  items[i].className = items[i].className + " anim";
  items[i].style.top = ""+(70 * i)+"px";
  items[i].style.left = "10px";
 }
}

As you can see, this involves three snippets of code. Now, HTML5 Canvas GUI coding doesn't have to be hard. Not when a bit of background work has been done. Check this out.

function initApplication()
{
 var btns = [];
 var btn_paths = ["a.png", "b.png", "c.png", "d.png", "e.png" ];

 for( var i = 0; i < btn_paths.length; ++i )
 {
  var btn = new ItemDecorator( 0,i*70, 64, 64, btn_paths[i] );
  addItem( btn );
  var anim = new AnimMove( btn, 500, i*70, 5000 );
  anim.repeatForever();
 }
}

Here we've created the exact same animation using a decorator item from our widget library. As you can see, the amount of code is less than a third of the traditional approach, and the code is very easy to read.

That was a quick look into full-canvas web GUIs. Stay tuned for examples and more.

Mikael Laine, SW Specialist - Ixonos

Friday, September 6, 2013

Ixonos Embedded HTML5 Demonstrator

Recently, Ixonos has been working to demonstrate the feasibility of using HTML5 as a graphical user interface (GUI) platform on embedded devices.

A GUI needs to not only look good, but also to work well. A button press and consequent feedback must accurately reflect the internal state, and latency should be kept to a minimum overall. Broken abstractions abound when graphics design and technology implementation exist in separate silos - that's why our designers work hand-in-hand with developers. You cannot create a successful GUI by just thinking of it as a set of screenshots - it's also how it works!

Our demonstrator prototype embodies this principle, and provides an excellent open standards based platform for developing and demonstrating our capabilities as a design and software development "one stop" house.

The prototype runs on the Texas Instruments AM3359 Evaluation Module (EVM) connected to a multi-function sensor device represented by a TI Tiva C Series LaunchPad Evaluation Kit in combination with the TI Sensor Hub Booster Pack. Basically this particular prototype involves an ARM Cortex-A8 processor running at 720MHz and the InvenSense MPU-9150 MEMS chip (gyro, accelerometer, compass) combined with a Cortex-M4 microcontroller with USB connectivity. The purpose is to demonstrate a complete embedded system, from sensor I/O, to middleware and GUI.

The complete software stack consists of the following:
  • The Ixonos Embedded Linux BSP (Board Support Package)
  • Qt 5.1 framework
  • QWebView based web programming environment
  • Ixonos HTML5 canvas GUI libraries and web app
  • Ixonos data server, providing the web GUI with constant sensor updates via WebSockets
  • Ixonos "sensact" library which provides an API for handling sensor and actuator communication
In the video below you can see one view from the demonstrator, involving the accelerometer and compass attached to an HTML5 Canvas widget. Enjoy.

One view from our HTML5 Demonstrator Prototype - accelerometer attached to the embedded TI AM3359 board, with an HTML5 Canvas-based compass widget.

From a business perspective, one needs to understand how to harness the relevant aspects of hardware and software for the task at hand. This is where Ixonos comes into play. We’ve created a low-footprint Linux platform, which runs a Webkit-based HTML5 runtime, with a plugin architecture that provides native functionality to portable web apps. Our motto is: "dream design deliver", and we live up to it through our interdisciplinary working mode, where low-level technology implementation meets high-level graphics design vision.

This is the first installment of our HTML5 prototype demonstrating the basics. In the future, more sensors/actuators will be added and our UX designers will add beautiful graphics to make it really shine.

Stay tuned for more blog entries on future versions of this prototype and the technologies behind it...

Martin Lund, System Architect - Ixonos
Mikael Laine, SW Specialist - Ixonos

Introducing the Tech blog, and Reminiscing on an Exciting Tech Past

First post! You have found your way to the Ixonos Tech blog, for which there has been great demand for a long time, and which is finally here. We work on really cool stuff at Ixonos, you know, and that is why we eventually decided to open this window for our engineers and the internet audience to meet.

As we begin, I’d like to reminisce on previous tech blogging ventures at Ixonos. The year was 2009, and a major internal project was underway. The publicity portion was a failure, but the subject matter was awe-inspiring. You can read some blog posts from that project here: secretlinuxmobile.blogspot.com. The project was done together with Fjord, a design company, and it was an awesome ride! The purpose was to build a complete Linux-based mobile phone software stack, spanning telephony integration using oFono all the way to XML-based IPC mechanisms and a GUI framework based on Qt4. Back then we were still learning how to build graphical interfaces in an iterative, agile working mode – together with graphics designers(!) – and learn we did!

Some of the key findings can be summed up by saying that good communication is 50% of a successful project of this kind. The importance of communication simply cannot be overemphasized.

The second key finding was that developer team members needed to be geared towards graphics design. Some were, some weren't. The words of my Scrum Master trainer Jens Ostergaard come to mind: if there is a problem with the team, you may need to break the team up and build a new one - the heart of Scrum is the team!

Now, years (and several GUI-related projects) later, the early learning experiences still resonate with a powerful and relevant message. Managing Agile (Scrum) teams needs to be a sober and serious undertaking, and mixing in the challenge of having designers on the team only makes the project management aspect more important. "Agile" doesn't mean easier, and it certainly doesn't mean a happy-go-lucky attitude of "let's see". Rather, the basic framework needs to be as robust and rigid as ever. More on this later.

I have seen projects with the "traditional" here-is-the-design-go-do-it approach, but more and more the industry takes flexible, agile working modes as a given. A software project then becomes a roller-coaster ride of constantly changing design specifications that are never finalized, but nevertheless must be implemented! At best, this is super exciting and gives developers a chance to free their "inner graphics designer" and enjoy the creative flow. At worst it means a complete breakdown of understanding between designers who expect the impossible from software implementers, and developers who eventually grow bitter towards the fleeting designers who seem to be from another planet.

And this can take serious forms. From developers who stop coming to work or who spend their days drinking coffee and watching YouTube videos, to designers who grow distant and don't even try to deliver that other version of a graphics asset, as requested by the ever annoying developer who seems to live in a cage of do's and don'ts of the technical world.

Solving real-life project management issues in design-oriented projects starts with a good project backlog. This is the most important place where designers, developers and product owner(s) can come together and speak the same language. If a product needs to have a clock, that is a goal everyone must understand. Other considerations are then the responsibility of the respective teams or persons: the business perspective of what a user does with a clock, the developer perspective of which NTP implementation, caching mechanism and timezone setting to use, or the designer perspective of what the clock looks like. The whole team needs to be able to come together in a fireside chat sort of fashion just to get to know each other and get excited about the project goals.

A properly prioritized product backlog is not just an absolutely required prerequisite of going ahead with the development process in sprints, but also represents the most concrete projection of the project's vision. This is the place where every team member should feel excited and start dreaming about the project's final results: the product. Such team spirit-lifting is easy to overlook, but its impact is huge. The heart of Scrum is the team, and the team needs to feel they are a part of the greater project goals. This is the foundation of commitment and getting the best performance out of each individual.

Finally, to summarize, let me list a few guidelines from my experience on how to get a designer-engineer co-op project going smoothly. This is not comprehensive, but hopefully relevant to your particular needs today:

  • Start by bringing everyone together around the product backlog, or if the backlog doesn't exist, involve everyone in the initial "casting the vision" phase. Everyone must feel included and important, otherwise a downhill spiral of resentments, internal hierarchies and bitterness begins.
  • Create an environment of communication. I didn't say documentation, I said communication. And give freedom. Use Skype, Hipchat, Trello, Messenger / Lync, IRC, gmail chat etc. - do not stand in the way of whatever feels most comfortable for the team in the way of communication. As I said earlier, the importance of communication cannot be overstated. Big problems become small problems when you have help close at hand.
  • Carefully build the product backlog, and revisit the priorities often during the first sprints of the project. In real life, priorities tend to be affected not only by the business value of features, but also by the technical effort (cost). As you involve developers and designers in this, you will learn what is easy for designers may be the hardest of all for developers. This is a highly iterative process, but eventually forms the backbone of a good project.
  • Give power to the team. The developer team must have the power to decide how much is enough per sprint. And be careful to balance freedom with control, when inevitably the design team starts to request "extra" favors from the developers: this should be taken as a good sign that designers and developers are working together, but it is also a possible catastrophe brewing that results in the priorities being broken and work slowed down overall. This brings us to the final point...
  • Decide early on in the project if designers and developers are one team or two teams. A highly iterative prototyping project will benefit from developers and designers sitting side-by-side, each instructing the other, in what is possibly the most effective way of working. But you can forget about development product and sprint backlogs when this happens. Some of the most fun projects I've been involved in have been of this kind, but if the project goals are crystal clear at the beginning, you really want to maximise effectiveness and not go down this road. Only do this if the design is completely unfinished for most of the project's duration. The important point to remember is that if developers and designers are to be separate teams, then all the insulation principles of Scrum should apply to designers too; that means no extra favors can be expected from the developers, and communication should be restricted to guiding implementation, not prototyping fun design ideas.

At the end of the day, we are (still) at the beginning of writing the folklore of the creative development process, and as such one should have a brave mind eager to try out new things. Technology has only recently reached the level where we take functionality for granted and now also want a matching easy-to-use user interface. Design and development are blending into high-level creative playfulness, and project management needs to learn a bunch of new tricks.

Mikael Laine, SW Specialist - Ixonos

How to patch X.Org for Touch Screens


There is an annoying bug in the touch event handling in xserver-xorg-core which affects all Linux distributions: if you use transformation matrices (like below), only some of the events are transformed. Pointer down events get transformed while pointer up events do not, which makes the cursor appear to jump constantly between the "correct" transformed location and the untransformed location. This problem often hits anyone trying to rotate a touchscreen to portrait, for example. It's mentioned on the internet in various places.
Here is an example of what you might want to do. A touchscreen is rotated to portrait and scaled down a bit. The commands are correct, but the transformations fail for some events because of the bug.



sudo xrandr --output VGA1 --rotate right --right-of LVDS1 --scale 0.66667x0.66667
sudo xinput set-prop "QUANTA OpticalTouchScreen" --type=float "Coordinate Transformation Matrix" 0 0.36 0.64 -1 0 1 0 0 1


How to fix

The fix has been created, and it is floating around on the internet. Here is the patch that I found, hosted on my Google Drive so the link doesn't vanish: x_touches.patch

How to apply the patch

I've created a script to apply the patch, because you have to do this every time your system's xserver is upgraded (which is pretty often with normal updates). Please read the script and adapt it to your situation, or simply save it as a file (say, 'patch_x.sh') and run it in some 'temp' directory that also contains the patch file (which must be named x_touches.patch).
Running the script involves calling 'chmod 755 patch_x.sh' first, to make it executable, then './patch_x.sh'. I hope this helps someone!



#!/bin/sh
# Script for patching and building xserver-xorg-core for touch screens
if [ ! -n "$1" ]
then
  dpkg -s xserver-xorg-core
  echo "PLEASE CHECK EXACT VERSION NUMBER FROM THE ABOVE LISTING (e.g. 2:1.13.0-0ubuntu6.1),"
  echo "THEN RUN THIS SCRIPT AGAIN WITH THE VERSION IN QUOTES"
else
  echo "Getting and building xserver-xorg-core, version $1"
  sudo apt-get update
  sudo apt-get install fakeroot
  sudo apt-get build-dep xserver-xorg-core
  sudo apt-get source xserver-xorg-core=$1
  cd xorg-server-*
  sudo patch -p1 -i ../x_touches.patch
  sudo dpkg-buildpackage -rfakeroot -us -uc -b
  cd ..
  sudo dpkg -i xserver-xorg-core_*.deb
fi


Mikael Laine, SW Specialist - Ixonos