Friday, November 29, 2013

Interactive Embedded Touch GUIs with HTML5

Recently we've been considering graphical user interfaces (GUIs) from the point of view of a systems integrator. There are several things to consider when creating a complete solution, such as: the different software platforms involved (embedded devices, phones, tablets, desktop computers, ...), data network considerations and future-proofing.

Several technical solutions come to the rescue here. Firstly, there are standards that span several (or even all) of the involved platforms and allow software development to be done once - with perhaps some adaptation for each platform. Secondly, networks of all sizes and shapes allow for powerful distributed systems, where data can be shared and interaction happens across the room or from the other side of the globe.

The Ixonos Embedded HTML5 library - ixgui.js - has proven to be a highly flexible and scalable platform for creating embedded GUIs. Recently, a number of system topologies have been explored using ixgui.js, involving running the GUI as detached from the embedded device. HTML5 obviously fits natively into this kind of distributed environment. The GUI can be hosted on the cloud, on an embedded device or basically anywhere.

Sensor data sharing in our demonstrator is facilitated using the Ixonos sensact library, which you can read about in an earlier blog post.

The user interface for this demo is simple. It displays data coming in from the TI Sensor Hub Booster Pack. In addition, there is an RPM display and setting slider, but that is only for show: there is no motor in this version - but in later ones there will be. Below is a screenshot of the GUI:

Simple Touch Interface using ixgui.js
The video below illustrates using this GUI on the Texas Instruments AM3359 Evaluation Module with a separate, more elaborate, GUI running on a detached display.

ixgui.js is an HTML5-based GUI library that allows performance-optimized GUI creation by using the Canvas 2D interface for fast graphics and fine control over what is drawn at any given time. It is designed around the principles of simplicity, performance, standards compliance and programmer friendliness.

This article outlines some key methods for improving Canvas 2D performance. It has been extremely gratifying to fine-tune drawing for ixgui.js, and indeed we implement optimization on several levels.

At the top level, rendering is optimized by only drawing what needs to be redrawn. For most GUIs, items only need to be repainted when they are interacted with.
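To illustrate the idea (a minimal sketch with hypothetical names, not the actual ixgui.js API), each item can carry a dirty flag that the render pass consults, so untouched widgets are never repainted:

// Sketch: each item remembers whether it needs repainting.
function Item(x, y, w, h) {
  this.x = x; this.y = y; this.w = w; this.h = h;
  this.dirty = true;                 // draw at least once
}

Item.prototype.draw = function(ctx) {
  ctx.fillStyle = "#336699";
  ctx.fillRect(this.x, this.y, this.w, this.h);
  this.dirty = false;                // clean until the next interaction
};

// Called on every frame: items that haven't changed are skipped entirely.
function renderDirtyItems(ctx, items) {
  for (var i = 0; i < items.length; ++i) {
    if (!items[i].dirty) continue;
    ctx.clearRect(items[i].x, items[i].y, items[i].w, items[i].h);
    items[i].draw(ctx);
  }
}

// An input handler simply sets item.dirty = true and requests a redraw.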

Pre-rendering: often a large part of an item is static, and actually requires no update at all during the entire lifecycle of an application. In these cases, we can simply pre-render those areas that don't change onto a separate buffer, and reuse that for each redraw. As an example, see the below picture for how the vertical sliders in the demo are drawn:
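In code, pre-rendering boils down to drawing the static parts once into an offscreen canvas and then blitting that buffer on every redraw. A minimal sketch (hypothetical names, not the actual ixgui.js implementation):

// Pre-render the static slider track once into an offscreen buffer.
function createSliderBuffer(width, height) {
  var buffer = document.createElement("canvas");
  buffer.width = width;
  buffer.height = height;
  var bctx = buffer.getContext("2d");
  bctx.fillStyle = "#222222";                        // track background
  bctx.fillRect(0, 0, width, height);
  bctx.strokeStyle = "#888888";                      // track outline
  bctx.strokeRect(0, 0, width - 1, height - 1);
  return buffer;
}

// On each redraw only the cheap blit and the moving knob are drawn.
function drawSlider(ctx, slider) {
  ctx.drawImage(slider.buffer, slider.x, slider.y);  // reuse the pre-rendered pixels
  ctx.fillStyle = "#cc3300";
  ctx.fillRect(slider.x, slider.knobY, slider.width, 10);
}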

Finally, at the lowest level possible (in JavaScript), some optimization is achieved by only feeding integer values to drawing routines. All coordinates and dimensions throughout the GUI are cast to integers.
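For example, a bitwise OR with zero is a cheap way to truncate a coordinate to an integer before it reaches the canvas (a simple sketch of the idea):

// Force integer pixel coordinates to avoid costly sub-pixel rendering.
function drawIcon(ctx, img, x, y) {
  ctx.drawImage(img, x | 0, y | 0);   // "| 0" truncates to an integer
}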

Mikael Laine, SW Specialist - Ixonos

Friday, November 8, 2013

Ixonos Multi-Display for Android

Ixonos enables its Multi-Display feature for recent generation Android - making multi-tasking easy.

Have you ever tried using the Android secondary display APIs (described here) that were introduced in Android Jelly Bean 4.2? Using the "Presentation" class to show content from your app is quite cool, but you are still limited to running only one activity at a time. Basically you launch a Dialog (Presentation) onto the secondary display from your activity running on the main display. This is useful for certain types of apps, e.g. image and PowerPoint presenters, but what about running the stock Android Browser on one display and watching YouTube on the other?

Watch the video below and see what Ixonos has created to enable true multitasking for multiple displays.


This technology is a generic solution that enables the user to run existing Android applications on either display and to map external input devices to a given display. It utilizes the new Android display manager service and is thus display agnostic, meaning that we can use any type of display, e.g. HDMI or Miracast. The Multi-Display feature can be integrated with recent Android generations (4.2, 4.3, 4.4) by our engineers.


Vasile Popescu, Chief Software Engineer - Ixonos
Mikkel Christensen, Chief Software Engineer - Ixonos
Henrik Kai, Chief Software Engineer - Ixonos 

Friday, October 25, 2013

Build Gear version 0.9.19beta released!

A new version of Build Gear has recently been released.

A lightweight embedded firmware build tool

Build Gear is the open source build tool that is used to build the Ixonos Embedded Linux BSP for various embedded boards based on a range of different chipsets, including TI OMAP/AM/DM, Freescale i.MX, Intel Atom/Haswell, etc. This build tool allows us to very effectively create and maintain clean-cut, modern Linux BSP firmware tailored to fulfil the requirements of individual embedded customers.

This release includes a couple of new features and some bug fixes.

One of the interesting new features is a new command that creates a software manifest, providing a detailed list of the software components involved in a particular build. This is quite useful when you need an overview of the licenses of the components going into your firmware. In fact, for most products this is an important feature, since the BSP firmware must be legally approved before going into production.

For more details, see the release announcement here.

The Build Gear tool has been in beta stage for quite some time but it has now stabilized to the point where it is ready to move out of beta. Thus, it will soon be labelled stable and a 1.0 release will mark the final transition out of beta.

Expect more posts from me on this build tool and on how and why we use it to create the Ixonos Embedded Linux BSP platform solution.

Keep it simple!

Wednesday, October 16, 2013

Intel Perceptual Computing

Do you remember Tom Cruise in Minority Report, directed by Steven Spielberg? With that fancy user interface Tom used when searching for people in the crime database?

Well, it's here now. Not 100%, but getting closer to that.

Intel published the Perceptual Computing SDK in 2012. The SDK is free; all you need is a $149 camera provided by Creative Technology Ltd., a development environment like Visual Studio, and a bit of passion for creating cool software with the greatest user experiences ever.

With the Intel Perceptual SDK, you can detect a few hand gestures like the "peace" sign, as well as hand movements, fingers and swipes, and it provides depth information that tells how far your hand is from the camera. It detects faces, recognizes voice commands, and so on. The most commonly used development environment is Visual Studio with C++, but you can also do your thing with C# or the Unity game development tool.


Detecting gestures and face

Some common questions I've been asked about this:
1. Is it stable?
Pretty much, but I would not want to be the patient in a surgical operation if the doctor were using this remotely.
The license also strongly advises against using it in any critical systems, such as driving cars or controlling aeroplanes.
Damn - I was just about to connect this to an F-18C Hornet!

2. How much does it cost?
The SDK is free; you need the 149 USD camera manufactured by Creative Labs and a development environment. And some time - not that much if you're familiar with Microsoft Visual Studio tools, and you'll get started pretty fast. The cam itself looks pretty OK, and it's a lot heavier than webcams usually are. Maybe that says something about the quality, or it's just so the heavier cam stays put on top of the monitor(!)

3. Are there any useful apps developed for this?
Check out the results of Intel's Perceptual Computing Challenge at
http://software.intel.com/sites/campaigns/perceptualshowcase/

4. What kind of data can you get from this camera?
You get the actual image frames, recognized gestures, depth data, hand coordinates and so on from the high-level services provided by the Intel SDK. You also get the raw data, if you wish to do some image and gesture processing yourself. And there is some voice recognition functionality as well.


The camera at the top of the monitor

Here is some C# code for gesture detection. The cam recognizes a few gestures like hand waving, the "peace" sign, etc. I used it to control the Windows 8 desktop.


public MyPipeline(Form1 parent, PictureBox recipient)
{
    lastProcessedBitmap = new Bitmap(640, 480);
    this.recipient = recipient;
    this.parent = parent;
    // setting up the features we need
    attributeProfile = new PXCMFaceAnalysis.Attribute.ProfileInfo();
    EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_RGB24);
    EnableFaceLocation();
    EnableFaceLandmark();
    EnableGesture();
}

// called by the pipeline whenever a gesture is recognized
public override void OnGesture(ref PXCMGesture.Gesture gesture)
{
    switch (gesture.label)
    {
        case (PXCMGesture.Gesture.Label.LABEL_POSE_BIG5):
            // open palm presses the Windows key
            if (sameCommandDelay != null && sameCommandDelay.AddSeconds(COMMANDELAYINSECONDS) < DateTime.Now)
            {   // avoid the "too many commands" problem
                sameCommandDelay = DateTime.Now;
                InputSimulator.SimulateKeyPress(VirtualKeyCode.LWIN);
            }
            break;
        case (PXCMGesture.Gesture.Label.LABEL_HAND_CIRCLE):
            // circle gesture shuts the pipeline down
            base.Dispose();
            //parent.Close();
            //Application.ExitThread();
            break;
        case (PXCMGesture.Gesture.Label.LABEL_POSE_THUMB_UP):
            // thumbs up acts as a left mouse click
            if (sameCommandDelay != null && sameCommandDelay.AddSeconds(COMMANDELAYINSECONDS) < DateTime.Now)
            {
                sameCommandDelay = DateTime.Now;
                VirtualMouse.LeftClick();
            }
            break;
    }
}


Depth data, C++ demo from Intel

Friday, October 11, 2013

Sweet and tasty approach to OpenCV and MinnowBoard

PC-esque cheap hardware is booming, and there seems to be no limit to the cool apps you can create on boards like the BeagleBone, Raspberry Pi or MinnowBoard.

This obvious trend has had our attention for a long time now, and we've got some customer cases going with the basic idea of migrating from expensive legacy systems to cheap off-the-shelf processing boards offering huge capabilities in a modest form factor.

Recently, some of our clients have expressed their interest in imaging systems, so we decided to whip up a small demo involving our "Ixonos BSP" small-footprint Linux distro and the industry standard OpenCV imaging library.

In this demo we used the MinnowBoard, Intel's small and low-cost board based on an Atom processor. The camera we used is a basic USB webcam from Logitech. Pictures below:

The Minnowboard with webcam watching candy drops

The camera setup allows the system to see some candy drops in this rather trivial pattern recognition demonstrator. The system acquires image rasters of the scene using V4L2 and OpenCV. Circle-shaped patterns are detected using the OpenCV function "HoughCircles", which is based on the Hough Circle Transform. The code snippet below demonstrates simple circle detection using HoughCircles:
// circle detection
vector<Vec3f> circles;
HoughCircles(detected_edges, circles, CV_HOUGH_GRADIENT,
             1, minSizeThreshold, lowThreshold, lowThreshold/2,
             minSizeThreshold, minSizeThreshold + minSizeThreshold / 2);

printf("total circle count: %d\n", (int)circles.size());
After detection, the circles are categorized according to color and statistics are printed to the screen.

Candy drops detected
 
Another picture below illustrates a situation with some more candy drops.
More candy drops detected


Ilkka Aulomaa, SW Engineer - Ixonos
Kalle Lampila, SW Engineer - Ixonos

Tuesday, September 24, 2013

Ixonos Multi-Window - 2nd generation!

A couple of years ago Ixonos was the first to introduce a comprehensive solution for running multiple applications in multiple windows on Android.

Now, Ixonos engineers have been working tirelessly to make the latest Android versions support this feature. The result of this work improves on the original implementation, introducing better performance and some exciting new features.

Ixonos Multi-Window demonstrated on the Nexus 10 tablet running Android 4.2.2.

Among the new features is a really cool "Super Window" mode, which puts selected applications in a grid layout that can be resized or moved around for easy window navigation.

Stay tuned for more blog entries on this cool technology...

Vasile Popescu, Chief Software Engineer - Ixonos
Mikkel Christensen, Chief Software Engineer - Ixonos
Henrik Kai, Chief Software Engineer - Ixonos

Thursday, September 19, 2013

Ixonos sensact library – sensor/actuator communication made easy!

Fresh out of the Ixonos engineering labs comes the embedded HTML5 industrial prototype that Ixonos engineers are working on. The latest version of this prototype samples sensor data from a Texas Instruments multi-function sensor device (http://www.ti.com/tool/boostxl-senshub) which is connected via USB – this device includes various sensors (thermometer, compass, gyro, pressure, etc.).

To easily access these sensors we have created a library named “libsensact” which abstracts away the communication channel and the input/output details of sensor or actuator devices – each device is simply characterized by its name and the names of the sensor/actuator variables that it provides. The abstracted communication channel can be any of USB, I2C, UART, Ethernet, EtherCAT, CAN, PROFINET, etc. The first version of the library, however, only supports USB.

The library offers simple connection handling functions and get/set functions for retrieving or setting sensor/actuator variable values of various types (char, int, float, etc.).

For example, retrieving the “temperature” and “pressure” values of the multi-function sensor device named “senshub0” is as simple as described by the following application code:
#include <stdio.h>
#include <unistd.h>
#include "sensact.h" 

#define TIMEOUT 100 // ms

int main(void)
{
  int device;
  float temp;
  float pressure;
  int status;

  device = connect("senshub0");

  status = get_float(device, "temperature", &temp, TIMEOUT);
  if (status < 0)
    printf("Error fetching temperature.");
  else
    printf("Sensor Temperature: %f\n", temp);

  status = get_float(device, "pressure", &pressure, TIMEOUT);
  if (status < 0)
    printf("Error fetching pressure.");
  else
    printf("Sensor Pressure: %f\n", pressure);

  disconnect(device);

  return 0;
}
In addition, for each sensor/actuator device the library contains an entry in a device configuration structure list which holds the device-specific configuration. For example, the “senshub0” device is configured by the following code:
#include "device.h"

/* List of supported sensor/actuator devices */ 

struct device_t device[] = 
{ 
  { .name = "senshub0", 
    .description = "TI Sensor Hub USB device", 
    .connection = USB, 
    .vid = 0x1CBE, 
    .pid = 0x0003, 
    .endpoint = 0x1 }, 
  { } 
};
For different connection types there will be different “.connection” definitions and, subsequently, different sets of related configuration fields.

Future plans for this library are to add support for more connection types and to extend the API with subscriber-type functionality, so that the user can register callback functions which will be called upon value changes for sensor/actuator devices that support event-driven communication.

Watch out for more blog entries on this library as it evolves...

That's it – keep it simple!

Martin Lund, System Architect - Ixonos

HTML5 Canvas GUIs

The HTML5 Canvas is an excellent graphics platform for creating portable graphics. It supports hardware graphics acceleration, where available, and it doesn't even have to be hard to code.

Recently, we've been building GUIs for embedded devices, and with HTML5 clearly being a technology in the ascendancy, we wanted to use web technologies for our embedded demonstrator. Obviously performance was a question, so we decided to work entirely on the HTML5 Canvas element, as used in many graphics-intensive web games.

So how does an HTML5 Canvas approach compare to the more usual approach? Let me show you a short example, where first we create an app using HTML / CSS / JS - the traditional way. The other example is a similar app, built on the GUI library we created on top of the HTML5 Canvas, in pure JavaScript.

The app simply animates five icons across the screen, repeating infinitely.
The first part of any HTML app is the HTML. Here we declare the image items that we will animate:

<body onload="someFunction();">
  <img id="anima" src="png/a.png" width="64" height="64" />
  <img id="animb" src="png/b.png" width="64" height="64" />
  <img id="animc" src="png/c.png" width="64" height="64" />
  <img id="animd" src="png/d.png" width="64" height="64" />
  <img id="anime" src="png/e.png" width="64" height="64" />
</body>
Then the CSS. We define the animation that will be shared by all img elements:
@-webkit-keyframes move
{
   0% { -webkit-transform: translateX(0px); }
   100% { -webkit-transform: translateX(500px); }
}

.anim
{
 -webkit-animation-name: move;
 -webkit-animation-duration: 5s;
 -webkit-animation-timing-function: linear;
 -webkit-animation-iteration-count: infinite;
 -webkit-animation-fill-mode: forwards;
 position: absolute;
}

Finally, the JavaScript part, where we first find the img elements we wish to animate, then assign the style we need.

var items = [];

function someFunction()
{
 items.push( document.getElementById('anima') );
 items.push( document.getElementById('animb') );
 items.push( document.getElementById('animc') );
 items.push( document.getElementById('animd') );
 items.push( document.getElementById('anime') );

 for( var i = 0; i < 5; ++i )
 {
  items[i].className = items[i].className + " anim";
  items[i].style.top = ""+(70 * i)+"px";
  items[i].style.left = "10px";
 }
}

As you can see, this involves three snippets of code. Now, HTML5 Canvas GUI coding doesn't have to be hard - not when a bit of background work has been done. Check this out.

function initApplication()
{
 var btns = [];
 var btn_paths = ["a.png", "b.png", "c.png", "d.png", "e.png" ];

 for( var i = 0; i < btn_paths.length; ++i )
 {
  var btn = new ItemDecorator( 0,i*70, 64, 64, btn_paths[i] );
  addItem( btn );
  var anim = new AnimMove( btn, 500, i*70, 5000 );
  anim.repeatForever();
 }
}

Here we've created the exact same animation using a decorator item in our widget library. As you can see, the amount of code is less than a third of the traditional way. And the code is very easy to read.
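Behind the scenes, a full-canvas library like this typically drives every item and animation from a single loop. The following is only a rough, hypothetical sketch of how such a loop could look (the items and animations lists and their update/draw methods are assumptions, not the actual library code):

// One shared requestAnimationFrame loop advances animations and repaints all items.
var canvas = document.getElementById("gui");
var ctx = canvas.getContext("2d");
var items = [];        // filled by addItem()
var animations = [];   // filled when animations such as AnimMove are created

function mainLoop(timestamp) {
  for (var i = 0; i < animations.length; ++i)
    animations[i].update(timestamp);                  // advance positions
  ctx.clearRect(0, 0, canvas.width, canvas.height);   // repaint the scene
  for (var j = 0; j < items.length; ++j)
    items[j].draw(ctx);
  window.requestAnimationFrame(mainLoop);
}
window.requestAnimationFrame(mainLoop);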

That was a quick look into full-canvas web GUIs. Stay tuned for examples and more.

Mikael Laine, SW Specialist - Ixonos

Friday, September 6, 2013

Ixonos Embedded HTML5 Demonstrator

Recently, Ixonos has been working to demonstrate the feasibility of using HTML5 as a graphical user interface (GUI) platform on embedded devices.

A GUI needs to not only look good, but also to work well. A button press and consequent feedback must accurately reflect the internal state, and latency should be kept to a minimum overall. Broken abstractions abound when graphics design and technology implementation exist in separate silos - that's why our designers work hand-in-hand with developers. You cannot create a successful GUI by just thinking of it as a set of screenshots - it's also how it works!

Our demonstrator prototype embodies this principle, and provides an excellent open standards based platform for developing and demonstrating our capabilities as a design and software development "one stop" house.

The prototype runs on the Texas Instruments AM3359 Evaluation Module (EVM) connected to a multi-function sensor device represented by a TI Tiva C Series LaunchPad Evaluation Kit in combination with the TI Sensor Hub Booster Pack. Basically this particular prototype involves an ARM Cortex-A8 processor running at 720MHz and the InvenSense MPU-9150 MEMS chip (gyro, accelerometer, compass) combined with a Cortex-M4 microcontroller with USB connectivity. The purpose is to demonstrate a complete embedded system, from sensor I/O, to middleware and GUI.

The complete software stack consists of the following:
  • The Ixonos Embedded Linux BSP (Board Support Package)
  • Qt 5.1 framework
  • QWebView based web programming environment
  • Ixonos HTML5 canvas GUI libraries and web app
  • Ixonos data server, providing the web GUI with constant sensor updates via WebSockets
  • Ixonos "sensact" library which provides an API for handling sensor and actuator communication
In the video below you can see one view from the demonstrator, involving the accelerometer and compass attached to an HTML5 Canvas widget. Enjoy.

One view from our HTML5 Demonstrator Prototype - accelerometer attached to the embedded TI AM3359 board, with an HTML5 Canvas-based compass widget.
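On the browser side, consuming the data server's sensor updates takes only a few lines of JavaScript. Here is a hypothetical sketch of a WebSocket client feeding a compass widget; the URL, message format and setHeading method are assumptions for illustration, not the actual Ixonos data server protocol:

// Connect to the data server and forward compass readings to the canvas widget.
var socket = new WebSocket("ws://192.168.1.10:8080/sensors");

socket.onmessage = function(event) {
  var sample = JSON.parse(event.data);        // e.g. { "compass": 127.5 }
  if (sample.compass !== undefined) {
    compassWidget.setHeading(sample.compass); // widget marks itself dirty and is redrawn
  }
};

socket.onerror = function() {
  console.log("Lost connection to the sensor data server");
};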

From a business perspective, one needs to understand how to harness the relevant aspects of hardware and software for the task at hand. This is where Ixonos comes into play. We’ve created a low-footprint Linux platform, which runs a Webkit-based HTML5 runtime, with a plugin architecture that provides native functionality to portable web apps. Our motto is: "dream design deliver", and we live up to it through our interdisciplinary working mode, where low-level technology implementation meets high-level graphics design vision.

This is the first installment of our HTML5 prototype demonstrating the basics. In the future, more sensors/actuators will be added and our UX designers will add beautiful graphics to make it really shine.

Stay tuned for more blog entries on future versions of this prototype and the technologies behind it...

Martin Lund, System Architect - Ixonos
Mikael Laine, SW Specialist - Ixonos

Introducing the Tech blog, and Reminiscing on an Exciting Tech Past

First post! You have found your way to the Ixonos Tech blog, for which there has been great demand for a long time, and which is finally here. We work on really cool stuff at Ixonos you know, and that is why we eventually decided to open this window for our engineers and the internet audience to meet. As we begin, I’d like to reminisce on previous tech blogging ventures at Ixonos. The year was 2009, and a major internal project was underway. The publicity portion was a failure, but the subject matter was awe-inspiring. You can read some blog posts from that project here: secretlinuxmobile.blogspot.com. This project was done together with Fjord, a design company, and it was an awesome ride! The purpose was to build a complete Linux-based mobile phone software stack, spanning telephony integration using ofono, all the way to XML-based IPC mechanisms and a GUI framework based on Qt4. Back then we were still learning how to build graphical interfaces in an iterative agile working mode – together with graphics designers(!) – and learn we did!

Some of the key findings can be summed up by saying that good communication is 50% of a successful project of this kind. The importance of communication simply cannot be overemphasized.

The second key finding was that developer team members needed to be geared towards graphics design. Some were, some weren't. The words of my Scrum Master trainer Jens Ostergaard come to mind: if there is a problem with the team, you may need to break the team up and build a new one - the heart of Scrum is the team!

Now, years (and several GUI-related projects) later, the early learning experiences still resonate with a powerful and relevant message. Managing Agile (Scrum) teams needs to be a sober and serious undertaking, and mixing in the challenge of having designers on the team only makes the project management aspect more important. "Agile" doesn't mean easier, and it certainly doesn't mean a happy-go-lucky attitude of "let's see". Rather, the basic framework needs to be as robust and rigid as ever. More on this later.

I have seen projects with the "traditional" here-is-the-design-go-do-it approach, but more and more the industry takes flexible, agile working modes as a given. A software project then becomes a roller-coaster ride of constantly changing design specifications that are never finalized, but nevertheless must be implemented! At best, this is super exciting and gives developers a chance to free their "inner graphics designer" and enjoy the creative flow. In the worst case it means a complete breakdown of understanding between designers who expect the impossible from software implementers, and developers who eventually grow bitter towards the fleeting designers who seem to be from another planet.

And this can take serious forms. From developers who stop coming to work or who spend their days drinking coffee and watching YouTube videos, to designers who grow distant and don't even try to deliver that other version of a graphics asset, as requested by the ever annoying developer who seems to live in a cage of do's and don'ts of the technical world.

Solving real-life project management issues in design-oriented projects starts with a good project backlog. This is the most important place where designers, developers and product owner(s) can come together and speak the same language. If a product needs to have a clock, that is a goal everyone must understand. Other considerations are then the responsibility of respective teams or persons: the business perspective of what a user does with a clock, the developer perspective of which ntp system, caching mechanism and timezone setting to use, or the designer perspective of what the clock looks like. The whole team needs to be able to come together in a fireside chat sort of fashion just to get to know each other and get excited about the project goals.

A properly prioritized product backlog is not just an absolutely required prerequisite of going ahead with the development process in sprints, but also represents the most concrete projection of the project's vision. This is the place where every team member should feel excited and start dreaming about the project's final results: the product. Such team spirit-lifting is easy to overlook, but its impact is huge. The heart of Scrum is the team, and the team needs to feel they are a part of the greater project goals. This is the foundation of commitment and getting the best performance out of each individual.

Finally, to summarize, let me list a few guidelines from my experience on how to get a designer-engineer co-op project going smoothly. This is not comprehensive, but hopefully relevant to your particular needs today:

  • Start by bringing everyone together around the product backlog, or if the backlog doesn't exist, involve everyone in the initial "casting the vision" phase. Everyone must feel included and important, otherwise a downhill spiral of resentments, internal hierarchies and bitterness begins.
  • Create an environment of communication. I didn't say documentation, I said communication. And give freedom. Use Skype, HipChat, Trello, Messenger / Lync, IRC, Gmail chat, etc. - do not stand in the way of whatever communication channels feel most comfortable for the team. As I said earlier, the importance of communication cannot be overstated. Big problems become small problems when you have help close at hand.
  • Carefully build the product backlog, and revisit the priorities often during the first sprints of the project. In real life, priorities tend to be affected not only by the business value of features, but also by the technical effort (cost). As you involve developers and designers in this, you will learn that what is easy for designers may be the hardest of all for developers. This is a highly iterative process, but it eventually forms the backbone of a good project.
  • Give power to the team. The developer team must have the power to decide how much is enough per sprint. And be careful to balance freedom with control, when inevitably the design team starts to request "extra" favors from the developers: this should be taken as a good sign that designers and developers are working together, but it is also a possible catastrophe brewing that results in the priorities being broken and work slowed down overall. This brings us to the final point...
  • Decide early on in the project whether designers and developers are one team or two teams. A highly iterative prototyping project will benefit from developers and designers sitting side by side and instructing each other, which is possibly the most effective way of working. But you can forget about development product and sprint backlogs when this happens. Some of the most fun projects I've been involved in have been of this kind, but if project goals are crystal clear at the beginning, you really want to maximise effectiveness and not go down this road. Only do this if the design is completely unfinished for most of the project's duration. The important point to remember is that if developers and designers are to be separate teams, then all the insulation principles of Scrum should apply to designers too, and that means no extra favors can be expected from the developers, and communication should be restricted to guiding implementation, not prototyping fun design ideas.

At the end of the day, we are (still) at the beginning of writing the folklore of the creative development process, and as such one should have a brave mind, eager to try out new things. Technology has only recently reached the level where we take functionality for granted and now also want a matching, easy-to-use user interface. Design and development are blending into a high-level creative playfulness, and project management needs to learn a bunch of new tricks.

Mikael Laine, SW Specialist - Ixonos

How to patch X Org for Touch Screens


There is an annoying bug in the touch event handling in xserver-xorg-core which affects all Linux distributions: if you use transformation matrices (like below), only some of the events are transformed. This causes pointer-down events to be transformed and pointer-up events not to be, which makes the cursor appear to constantly jump between the "correct" transformed location and the untransformed location. This problem commonly hits anyone trying to rotate a touchscreen to portrait, for example. It's mentioned on the internet in various places.
Below is an example of what you might want to do. In this example, a touchscreen is rotated to portrait and scaled down a bit. This is correct, but the transformations fail for some events because of the bug.



sudo xrandr --output VGA1 --rotate right --right-of LVDS1 --scale 0.66667x0.66667
sudo xinput set-prop "QUANTA OpticalTouchScreen" --type=float "Coordinate Transformation Matrix" 0 0.36 0.64 -1 0 1 0 0 1
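To see what the matrix actually does, note that xinput applies it to normalized device coordinates: a touch at (x, y), with both values in the range 0..1, is mapped to (x', y') via x' = 0·x + 0.36·y + 0.64 and y' = -1·x + 0·y + 1. In other words, the axes are swapped and flipped to match the portrait rotation, and the result is squeezed into the rightmost 36% of the combined virtual screen, starting at 0.64, which is roughly where the rotated, scaled-down VGA1 output sits to the right of LVDS1.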


How to fix
The fix has been created, and it is floating around on the internet. Here is the patch that I found, hosted on my Google Drive so the link doesn't vanish: x_touches.patch

How to apply the patch
I've created a script to apply the patch, because you have to do this every time you upgrade your system's xserver (which is pretty often with normal updates). Please read the script and adapt it to your situation, or simply save the script as a file (say, 'patch_x.sh') and run it in some 'temp' directory where you also have the patch file (it must be named x_touches.patch).
Running the script involves calling 'chmod 755 patch_x.sh' first, to make it executable, then saying './patch_x.sh'. I hope this helps someone!



#!/bin/sh
# Script for patching and building xserver-xorg-core for touch screens
if [ ! -n "$1" ]
then
  dpkg -s xserver-xorg-core;
  echo "PLEASE CHECK EXACT VERSION NUMBER FROM THE ABOVE LISTING (e.g. 2:1.13.0-0ubuntu6.1),";
  echo "THEN RUN THIS SCRIPT AGAIN WITH THE VERSION IN QUOTES";
else
  echo "Getting and building xserver-xorg-core, version $1";
  sudo apt-get update;
  sudo apt-get install fakeroot;
  sudo apt-get build-dep xserver-xorg-core;
  sudo apt-get source xserver-xorg-core=$1;
  cd xorg-server-*;
  sudo patch -p1 -i ../x_touches.patch;
  sudo dpkg-buildpackage -rfakeroot -us -uc -b;
  cd ..;
  sudo dpkg -i xserver-xorg-core_*.deb
fi


Mikael Laine, SW Specialist - Ixonos