Tuesday, November 18, 2014

Fast Piloting of Industrial Internet

The buzz around the Industrial Internet seems to grow almost daily. While it is clear that the possibilities and potential benefits are endless, it is much harder to decide how to realise them in practice. We wanted to create a solution that makes it easy to set up the “basic infrastructure” needed for piloting different Industrial Internet applications.

We have seen that industrial automation systems often already collect a lot of data through various sensing and other solutions. If that data could be easily moved to the cloud, analysed and presented to users in smart ways, it would be easier to experiment with using it to improve operational efficiency, or even as an enabler for new digital services.

With this in mind we took our Industrial Internet Suite software solution and installed it on industrial-grade reference hardware with 3G connectivity. Our sensact communication framework, combined with Ixonos Cloud and Remote Dashboard, makes it easy to “plug in” your existing automation environment and start visualising your data. The HW/SW combination supports connections over common field buses (such as Modbus). See a previous post for the more detailed architecture.
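To give a feel for what reading field-bus data can look like at the lowest level, below is a minimal sketch that polls a couple of holding registers over Modbus/TCP using the open source libmodbus library. The IP address, register addresses and scaling are hypothetical placeholders, and libmodbus here simply stands in for whatever field-bus stack sits behind the sensact framework in practice.

#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <modbus.h>   /* open source libmodbus */

int main(void)
{
    uint16_t regs[2];

    /* Hypothetical PLC address and register map */
    modbus_t *ctx = modbus_new_tcp("192.168.1.10", 502);
    if (ctx == NULL) {
        fprintf(stderr, "failed to allocate Modbus context\n");
        return 1;
    }

    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Modbus connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return 1;
    }

    /* Read two holding registers starting at address 0x10 */
    if (modbus_read_registers(ctx, 0x10, 2, regs) == 2) {
        /* Assume the device reports temperature in tenths of a degree */
        printf("temperature: %.1f C, status: %u\n", regs[0] / 10.0, (unsigned)regs[1]);
    }

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}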

With the solution in place, we put it in a nice package. See the video below for what it looks like in practice.



We now have a set of these packages that we are offering to our customers for pilot use. This makes it easy and low-risk to try out what you can get from your existing data and automation by adding modern Industrial Internet technologies.

Contact us for more information or a trial. There is also a white paper available that covers the topic in a somewhat wider scope.

Jukka Hornborg, Head of Offering Management, Ixonos Plc

Tuesday, October 7, 2014

Renewing Web and Mobile Automated Testing with a New Page Model Based Tool

Today more and more services are becoming web-based. This means that modern user interfaces are built using HTML5 and other web technologies, even in embedded environments. At the same time, applications and services are becoming more complex, which makes testing more crucial than ever in bringing high-quality software to market on time.

There are several existing tools for automating UI testing, mostly based on various record-and-playback methods. While it is easy to develop such tools, using them, and especially maintaining the test assets created with them, is not. It can also be hard to map reported errors back to actual web page elements (e.g. “element_id_zyz_2” is missing).

We decided to try a more innovative approach. We wanted to create a tool that is easy to use, yet powerful enough to automate testing of even the most complex and dynamic user interfaces. The solution should also have long-term support and wide cross-browser support across different desktop and mobile devices.

To meet the usability criteria we developed the Page Model approach. Here’s what it means in practice:
  • A Page Model is similar to a Page Object, which represents the screens of your web app as a series of objects.
  • The difference is that a Page Model contains more information:
    • Information about the web app screen and the model type (full screen or a selected area of the screen)
    • Information about the Page Objects (web elements and dynamic objects)
    • A screenshot of the full screen or the selected area of the screen
    • Methods containing the functionality of the web page (e.g. a login method)
  • The Page Model automatically transforms the information about a web page's elements into a textual format that can be used immediately when creating test scripts.
  • The Page Model screenshot visualises the web page and is used for selecting objects during test creation.
  • The Page Model file also contains methods that can be used to execute actions available on the web page; it can be thought of as a Page Model specific “function library”.
  • After the models have been created, test scripts and methods can easily be constructed using our graphical user interface.

As a technology base we chose Selenium WebDriver, which is supported by the largest browser vendors and is a widely used and well maintained open source tool. It is complemented by the equally open source Appium framework, which enables Selenium-based testing on mobile devices.

We combined these approaches and technologies into Ixonos Visual Test(TM), a powerful set of tools that makes testing easier for both test engineers and developers. With our tool you can plan, create and maintain test scripts and test assets more visually. You can produce easily maintainable Page Models simply by browsing your web or mobile application. Support for dynamic locators, JavaScript, AJAX and other special elements is already implemented. We even have a solution for accessing HTML5 canvas elements from test scripts; something we at least have not seen before. Everything is done with a modern, powerful UI, and the generated scripts are in standard Selenium WebDriver format.


The best part is that our tool also makes it possible to automatically detect changes to web pages. Detected changes (new and missing elements, broken methods) are visualised on top of the web page (see picture), and the tool even proposes fixes for them.


One of the cool things is that you can generate tests graphically using a model graph UI that defines Page Model transitions and the methods used in them. The tool walks through the model graph and generates test cases based on it.


For an introduction, watch this video:


And to see the same approach applied to Android devices, watch this:


Interested? Visit our product pages for more information and a free trial version for Windows and Linux.

And stay tuned for more innovations!


Anssi Pekkarinen, Solution Architect/Lead Test Automation Consultant - Ixonos Plc

Monday, August 18, 2014

Ixonos Industrial Internet Suite Goes Cloud

During the last few months our R&D team has been hard at work creating a complete “data pipe” from sensors to the cloud. By combining some of our existing components (like Wireless Sensor Data Collection with BTLE, interactive embedded touch GUIs with HTML5 and the Ixonos sensact library) and creating new ones, we now have a solution in place called the Ixonos Industrial Internet Suite.

The basic architecture is illustrated in the picture below. We use our sensact libraries running on Linux to collect the data, a secure WebSocket connection to create local Human Machine Interface views on mobile devices, and ship the data to the Ixonos Elastic Cloud.
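As a rough code-level sketch of the cloud-facing leg of that pipe, the snippet below pushes one sensor reading over a secure WebSocket connection using the open source libwebsockets library. The endpoint, protocol name and JSON payload are hypothetical placeholders and only stand in for the actual sensact-to-Ixonos-Elastic-Cloud interface.

#include <stdio.h>
#include <string.h>
#include <libwebsockets.h>

/* Hypothetical JSON payload standing in for real sensact data */
static const char *sample = "{\"sensor\":\"temp0\",\"value\":21.5}";

static int hmi_callback(struct lws *wsi, enum lws_callback_reasons reason,
                        void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_CLIENT_ESTABLISHED:
        lws_callback_on_writable(wsi);        /* connection up, ask to write */
        break;
    case LWS_CALLBACK_CLIENT_WRITEABLE: {
        unsigned char buf[LWS_PRE + 128];
        size_t n = strlen(sample);
        memcpy(&buf[LWS_PRE], sample, n);
        lws_write(wsi, &buf[LWS_PRE], n, LWS_WRITE_TEXT);
        break;
    }
    default:
        break;
    }
    return 0;
}

static const struct lws_protocols protocols[] = {
    { "sensor-data", hmi_callback, 0, 128 },
    { NULL, NULL, 0, 0 }
};

int main(void)
{
    struct lws_context_creation_info info;
    struct lws_client_connect_info conn;
    struct lws_context *ctx;

    memset(&info, 0, sizeof(info));
    info.port = CONTEXT_PORT_NO_LISTEN;               /* client only */
    info.protocols = protocols;
    info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
    ctx = lws_create_context(&info);

    memset(&conn, 0, sizeof(conn));
    conn.context = ctx;
    conn.address = "cloud.example.com";               /* hypothetical endpoint */
    conn.port = 443;
    conn.path = "/data";
    conn.host = conn.address;
    conn.origin = conn.address;
    conn.protocol = protocols[0].name;
    conn.ssl_connection = 1;                          /* wss:// over TLS */
    lws_client_connect_via_info(&conn);

    /* Run the event loop; a real client would also handle errors and reconnects */
    while (lws_service(ctx, 100) >= 0)
        ;

    lws_context_destroy(ctx);
    return 0;
}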


Our UI framework choice for everything is HTML5. This lets our solutions run on different platforms with minimal customisation. Of course we do some tweaking to make the HTML5 apps run smoothly and stably on hardware with limited resources.

Meanwhile, our colleagues at Ixonos Design Studios have been working their magic, creating a complete facelift for the UX of the different solutions. With these guys, even industrial automation can be fun and easy to use.

Take a look at the video below for a visualisation of the system:



Interested? Check out more information and contacts at: http://www.ixonos.com/business-areas/industrial-internet

Jukka Hornborg, Head of Offering Management, Ixonos Plc

Thursday, March 6, 2014

Wireless Sensor Data Collection with BTLE

The Bluetooth Low Energy protocol, also known as Bluetooth 4.0 or Bluetooth Smart, is a hot topic right now. At Ixonos we have been working with it for a while, and one example is the addition of Texas Instruments SensorTag support to our libsensact library.

The TI SensorTag is a small Bluetooth Low Energy device that has six different sensors and runs on a coin cell battery with very low current consumption.

Applications that use libsensact can configure a TI SensorTag as one of the devices to connect to and read sensor data from.

The code for connecting to a TI SensorTag is similar to the code needed for the USB sensors in the earlier example. The difference is that you define the BTLE addresses of the devices instead of USB IDs:


/* List of supported sensor/actuator devices */

struct ble_sensortag_config_t ble_sensortag0_config =
{
      .ble_address = "BC:6A:29:C3:3C:79",
};

struct ble_sensortag_config_t ble_sensortag1_config =
{
      .ble_address = "BC:6A:29:AB:41:36",
};

struct sa_device_t devices[] =
{
   {  .name = "ble_sensortag0",
      .description = "TI sensortag 0",
      .backend = "ble_sensortag",
      .config = &ble_sensortag0_config },

   {  .name = "ble_sensortag1",
      .description = "TI sensortag 1",
      .backend = "ble_sensortag",
      .config = &ble_sensortag1_config },

   { }
};

int main(void)
{
    int sensortag0;
    /* … */
    sensortag0 = sa_connect("ble_sensortag0");
    /* … */
}
 
The video below shows the code and the TI SensorTag in action on our HTML5 demonstrator prototype.



Now that basic BTLE support is in place, we have an easy way to bring wireless sensors into our sensor framework. Adding support for new BTLE sensors is straightforward with our scalable architecture.

Stay tuned for further updates as we combine this and other components into the Ixonos Human Machine Interface solution, to be launched in the near future!

Tero Koskinen, Senior SW Designer - Ixonos
Petteri Tikander, Senior SW Designer - Ixonos
 

Friday, January 31, 2014

Ixonos Multi-Display for Android 4.4.2 with Miracast

The Ixonos Multi-Display solution has, since the previous post, been ported to Android 4.4.2, and a few features have been added in the process. It is truly medium-agnostic and works over any medium supported by the platform's DisplayManagerService (e.g. MHL, HDMI, Miracast). Finally, applications can be moved between displays through the 'recents' menu.

The solution addresses the limitations of the Android platform when it comes to multitasking and running several apps in parallel on different displays.

The video below shows a Nexus 10 tablet running Android 4.4.2, initially connected to a TV via HDMI and playing an action flying game. New input methods, such as a track pad and a game controller, have been added to the System UI to provide input for the external display. These generate mouse and game controller input events, enabling all games that support the standard Android game controller API to be controlled from the tablet.

Later on, the tablet is connected wirelessly to the TV using Miracast via a Netgear Push2TV display adapter. This enables a truly cordless Multi-Display experience where users can enjoy content on a secondary screen without the hassle of cables.



Vasile Popescu, Chief Software Engineer - Ixonos
Mikkel Christensen, Chief Software Engineer - Ixonos
Martin Siegumfeldt, Chief Software Engineer - Ixonos
Jakob Jepsen, Chief Software Engineer - Ixonos

Friday, January 3, 2014

Ixonos Goes "Imaging Tampere Get-Together"

Companies with a presence in Tampere, Finland have started a movement towards making the region a center for imaging expertise, which means focusing efforts on pattern recognition, image enhancement, augmented reality and so on. With this in mind, a get-together event was held in late November, and Ixonos with its bright and enthusiastic engineers had to be there too! Other participants included people from the Tampere University of Technology, Intel, and several startups and established players in fields such as video surveillance.

Instead of just showing up with a stack of business cards, though, we decided to amuse the crowd by whipping up a special demonstration running on the Intel MinnowBoard. It turned out well and was much loved by the participants.

Ixonos Imaging Demo

The system consists of a PlayStation 3 camera attached to a MinnowBoard, along with a display for visualising the imaging algorithm results. The MinnowBoard is a small, low-cost embedded platform built around an Intel® Atom™ CPU. In addition, a racing track playset with two electric cars was used as the pattern recognition problem. The software consists of the Ixonos Embedded Linux BSP (board support package), the OpenCV imaging library and a very simple application that tracks the two cars on the racing track, calculating their lap times and counts.
MinnowBoard (at the back), PS3 camera, racing track!
Car recognition is done by simple color segmentation. The colors are preset, and blobs of a certain color are recognised with an OpenCV routine. The centroid of each blob is then visualised on the screen, and its passage over the "start line" is tracked. Very simple. Not a display of our pattern recognition algorithm abilities (call us if that is what you want), but rather of our ability to quickly integrate a complete system into which we could later drop a specialised algorithm. And fun. The purpose was to have fun!

More detailed image processing steps (a rough code sketch follows the list):
  1. Capture image frames (640x480)
  2. Resize frames down to 320x240
  3. Blur to reduce noise
  4. Convert from BGR to HSV color space
  5. Apply filtering thresholds and create binary image
  6. Use moments to calculate the position of the center of the object
  7. Use coordinates to track the object and apply tracking visualizations on top of the image
  8. Display frames with tracking visualizations




The proud author (Ilkka Aulomaa) of the playset car recognition system
About the authors

Ilkka Aulomaa, M.Sc. - author of the car recognition system software and setup.

Mikael Laine, M.Sc. - author of this blog post and participant "in spirit" in creating the demonstrator (which means sitting on a sofa and making smart-ass comments). He wrote his Master's thesis under the title "On Optical Character Recognition on Mobile Devices" (later published as "A Standalone OCR System for Mobile Cameraphones" in the proceedings of the 2006 IEEE 17th International Symposium on Personal, Indoor and Mobile Radio Communications). He has also participated in research in the field of pattern recognition.