Ready, set, go! Kickstarting with E.S.R. LabDays

| Bastian

When we joined E.S.R.Labs around the start of June this year, it was an exciting time for us: getting to know a new company and new people, and learning about the amazing complexity of handling full-stack software engineering, not only on ECUs, from very low-level we-control-single-pins-on-microcontrollers to fairly high-level we-have-a-Java-VM-with-Android-running-on-our-controller, but even beyond the boundaries of the embedded devices.

“We”, that is Tom, Wolf and Basti, who are all experienced software engineers and decided to join E.S.R.Labs in various roles.

Right after we joined, we were offered the option to participate in this year’s LabDays: go to the “cabin in the woods” in Austria for three days and spend our time on some fun engineering projects. Obviously, we didn’t think twice and agreed to join – after all, we share the engineering mindset of all our other colleagues, and it was a great opportunity to have some fun and get to know a bigger bunch of people right away.

So we drove together with the other participating colleagues to the cabin and first of all enhanced that traditional wooden cabin with all our tech equipment: laptops were unpacked, network switches and cables were put into place, power cables were distributed and microcontroller development boards were prepared. All in all, it felt a little like those LAN parties we had when we were younger – you know, getting together with friends, everyone brings their computer to play some games, and the first task is getting the network up and running – which back then we often failed at because no one knew how it actually worked. And once the network was finally up, half of the scheduled time was already gone.

Actually, that kind of comfy vacation feeling lasted for the rest of the LabDays – we were not strictly “working”, it rather felt like “hacking with friends”. That was probably possible because basically all of E.S.R.’s employees are great tech people who are easy to connect with, and we could all focus on the LabDays without being disturbed by e-mails, calls, etc.

The projects we worked on were – just like last year – defined beforehand and set up so that they were small enough to achieve within the given time, but still ambitious. People were also encouraged to work on something that is not directly part of their day-to-day work, but rather on something that interests them or that they would like to get started with – be it new libraries, new programming languages, new protocols or whatever one could think of. For example, we had a project to make our company library more easily accessible through an Android app, some colleagues getting started on machine learning, a team working on a self-driving robot, one implementing a blazingly fast IPv6 switch, someone getting started with Clang/LLVM and another team building a digital picture frame that identified the people passing by via Bluetooth.

As we were working hard (well, playing), we also needed to eat something. But everything was taken care of: one time there was the engineers’ favorite food, pizza! Other times we were either cooking ourselves (a very yummy breakfast, too!) or having a barbecue.


All in all – who would’ve expected anything different – the LabDays were a great success! We, the new guys, got a kickstart in learning which colleagues work on what in day-to-day life and what they are interested in apart from that, and we experienced that great feeling of having joined a totally tech-focussed company where everyone participates in coding on the projects – from the software engineers to the engineers in residence and the project managers. It was basically a great company-wide team building event, but without the sometimes slightly embarrassing “games” participants have to play at other such events – instead it was totally tech-focussed and fun for everyone! And even the chicken was delighted ;-)

We’d like to thank, first of all, the two guys who organized this whole event, Andrei and Tom, and of course our new employer, E.S.R.Labs, for making this possible!

DConf 2016 Report

| Christian

This May the yearly DConf was held in Berlin. DConf is one of the main meeting platforms of the community around the D language. Quoting from the homepage:

The D Programming Language

D is a systems programming language with C-like syntax and static typing. It combines efficiency, control and modeling power with safety and programmer productivity.

For me, dlang is a language in which I can program with almost the same comfort as in Ruby or Java, with very strong compile-time capabilities and a sane approach to concurrency on top. It even invites you to write small scripts and run them by means of rdmd. With the latest release of DUB, its package manager, this is possible even for scripts requiring dependencies.

The last year saw a lot of changes in dlang, which is now governed by the D Language Foundation with Walter Bright, Tudor Andrei Cristian Alexandrescu and Ali Çehreli as officers. Several implementations of the compiler are available, including the reference compiler written by Walter, as well as compilers backed by GCC and LLVM (GDC and LDC).

The conference

After Sociomantic volunteered to co-host this year’s conference and came up with a fantastic venue as well as incredible organization, 150 happy dlang coders showed up. Alongside hobbyists, many of the big names of the dlang community were there and available for chats about dlang, its applications and new ideas.

The three days of the conference were packed with a tight schedule that was enforced by DConf’s own MC, featuring around 18 talks, some lightning talks and two panel discussions around the dlang ecosystem.

All talks are available on YouTube and UStream; I just want to highlight some of my favorite sessions:

  1. Make sure not to miss the keynote given by Andrei Alexandrescu himself. Although the presentation starts with non-technical (but very important) topics in the first five minutes, such as how to contribute, it gets more technical when Andrei dives into getting rid of the garbage collector and annotations for big-O complexity.
  2. Equally enjoyable is Don Clugston’s talk about floating point numbers. It is applicable even for languages != dlang.
  3. Amaury Sechet shows how to work efficiently with your memory. One especially tricky technique is tagged pointers. I was glad to learn that our estl also makes good use of this, although I am pretty sure that the generic API provided by dlang is nicer.

Of big importance to the dlang community are the presentations that showed how dlang is used in production. Although those can only go so deep from a technical perspective, they show that dlang is production-ready and can be used to build products.

All things considered, this was one of the best conferences I have ever been to, and I hope to come back next year and do more dlang in the meantime. You can see the spirit of dlang best in the enthusiasm of the speakers in the lightning talks: they would go on and on, because everyone wanted to show off some cool dlang stuff.

What could be done better next time? From an organizational point of view it was a flawless event, so I do not have any improvements as feedback to the organizers. But on a personal level I really must plan more time at the end so as not to miss the last talks and the group photo, and I also have to bring my books to get them signed by the authors. Hopefully they will be around again next year!

CppNow 2016 Trip Report

| Alexander Schlenk

As you may have seen on our blog, we like to go to tech conferences all over the world. C++Now is a particular favorite. Last year two of my colleagues attended this event – this year it was my turn.

The conference

C++Now is a conference with its own particular charm and atmosphere and is now in its tenth year. Located at high altitude in Aspen, Colorado, it is surrounded by the beautiful landscape of the Rocky Mountains. Most people tend to associate Aspen with skiing or high society, but every year after the skiing season ends, a group of C++ enthusiasts gathers there for a whole week to discuss the latest developments in the C++ world.

Starting originally as BoostCon, the organizers later opened the conference up to a wider range of topics. But since the beginning it has been limited to 150 attendees, giving it its special character and making it easy to get to know the other participants. At other events like CppCon, which typically has about 700 attendees, you probably won’t get that kind of familiar atmosphere.


My trip started with a long but pleasant flight from Munich to Denver and ended with an exhausting four-hour drive to Aspen through darkness and rain. But after a good night’s sleep I was already meeting fellow participants in the hotel lobby for breakfast – one big advantage of a small town like Aspen. The mix of attendees was quite interesting, with many countries represented. I personally made contact with some guys from Canada, Sweden, Romania and of course the US. Even more astonishing was the variety of business sectors that people came from. Some were from the embedded domain (like me), but there were also people working in gaming and CAD calculations, and even one developer from a content delivery network (CDN) that uses C++ for its webservers.

The social highlight of the week was the BBQ dinner at the Aspen Center for Physics. Sitting outside in the sun, beer in hand, enjoying a delicious homemade American burger was just the right setting to chat with a lot of different people. My personal favorites were the discussions with the firmware development lead at Apple and the LLVM compiler lead at Google.

Interesting talks and projects

Library in a week Library in a Week is a nice idea born some years ago at this conference: a lot of C++ enthusiasts gather every morning before the actual conference and work together on a certain project. This year’s project was about kickstarting a new way of documenting Boost. We decided to try to get the current documentation into some kind of wiki to offer a nice place for editing content. Unfortunately this turned out to be much more difficult than we thought, especially since there are a lot of different documentation styles in use in Boost. The work was far from complete at the end of the week, but people are still working on it after the conference. So let’s hope for the best.

Boost.Hana Louis Dionne attended the conference last year, when he presented his work on the metaprogramming library Boost.Hana. He has continued his work on the library and this year showed how some day-to-day coding problems can be solved with the help of metaprogramming.

Variants and Tuples Two recurring topics were variants, which will be introduced into the C++17 standard as part of the standard library, and tuples, which are the basis for many metaprogramming patterns. A variant is basically a type-safe version of a union: you can store different types of data inside the same variable, but with the advantages of C++ type safety. Variants are being heavily discussed in the standard meetings at the moment; the current state is that there will be a library-based variant in C++17 and probably a language-based version in C++20. Tuples have been part of the standard since C++11. They are the more general version of std::pair: you can pass any collection of types around in your code and evaluate it at compile time. Tuples become really interesting when you start combining them with metaprogramming.

Dependency Injection for C++ Dependency injection (or DI) is probably something you’re familiar with from modern, mostly managed programming languages that run on some kind of virtual machine (like Java or C#). DI is a clever technique in which the direction of dependency passing is reversed: you simply specify which class depends on which other class, and the framework takes care of the initialization and its order. The problem is that DI normally makes heavy use of reflection, which is not (yet) available in C++. However, a library was presented which uses metaprogramming to generate a compile-time-only variant of DI that outperforms common implementations in other languages.

CopperSpice A small team of two calling itself CopperSpice was pretty active at the conference. They started with a fork of the old Qt 4 (the current version is Qt 5.7) and tried to create a fully C++11-compatible version of it. Along the way they removed the dependency on the MOC compiler and reworked the whole signal framework. But they didn’t stop at Qt: during the documentation phase they recognized the need for a better C++ documentation framework, so they forked Doxygen and created DoxyPress with better C++ language parsing. They also gave a thought-provoking talk with the title “Multithreading is the answer – what was the question?”.

C++14 on ARM / Ciere Consulting creating an MQTT client Of personal interest to me was a talk from Ciere Consulting about implementing an MQTT client based on the new design patterns of C++14. As an embedded developer I found it particularly interesting that the whole implementation was running on an ARM Cortex-M0 core with very limited resources. They were even using standard STL containers with dynamic memory allocation. The implementation should appear on their website soon.

Further Information

If you’d like to have a look at some of the presentations, you can find the collection from the whole week on GitHub.

DAHO.AM 2016

| Oliver

Last week some of us visited the DAHO.AM conference in Munich. I hadn’t heard about it before and was truly impressed by what the guys from Stylight put out there: a solid program with reputable speakers for a crowd of inquisitive developers. It was the third event of its kind and very well received by all who joined. In contrast to most of the bigger conferences, all of the 8 talks were arranged in one track, so it was possible to attend all of them. Talks were presented by Amazon, HashiCorp, Uber, Spotify, Ethcore, Google, Stylight and, a little surprisingly, the Bayerischer Rundfunk (BR). On the side you could participate in hands-on workshops.

From start to finish the whole conference was a pleasure to attend. It had just the right mixture of engaging technical program and relaxed atmosphere, and that lasted the whole day. The theme of the conference was intentionally kept very Bavarian, starting out with a Bavarian breakfast and ending with dinner in a Bavarian beer cellar. A nice gimmick: anyone who wanted a beer just had to mention a special hashtag in a tweet together with their row number, and an attendant would bring them a beer instantly.

But it wasn’t just the casual atmosphere that was good – most of the talks were quite interesting, too. Igor Maravić from Spotify talked about the event-delivery system Spotify is currently using, its shortcomings, and the system they are about to develop. Spotify has a problem that so many others would love to have: rapid growth, and scaling issues with the increased traffic. Igor talked about the difficulties they are having with the current approach – whose troubles just so happened to start the day Apple Music was launched – and the design choices made for the upcoming system, which uses Google Cloud Pub/Sub.

Claudia Guevara gave a brief overview of Stylight and showed how they are currently moving their IT infrastructure towards microservices. The talk was very open and gave the audience insights into some real-world problems being addressed in Stylight’s engineering group.

Florent Crivello from Uber talked a little about software processes and how they manage to move with great velocity while avoiding breaking their service (they need to guarantee 99.99% uptime). He presented his company’s software architecture history very colorfully – from spaghetti-style programming via lasagna layers to ravioli services.

The most surprising talk came from Mustafa Kurtulus Isik, head of Software Development and Platforms at BR. He had quite a story to tell about nurturing a modern software development group in a very traditional and usually slow-moving institution. He mainly talked about the values they established and some of the key process elements they use in their day-to-day business. This was probably also the most intriguing talk.

In the end Simon Lynen from Google showed some amazing examples of Project Tango, which manages to fit a full SLAM algorithm, including the hardware, into a cellphone form factor – something that has long been the focus of PhD students but had not yet reached the realm of the cellphone mass market, in either size or performance. His thesis was that mobile phones currently do not know enough about their environment and can become much more aware of it.

“Project Tango is a platform that uses computer vision to give devices the ability to understand their position relative to the world around them.”

He finished his talk by growing grass on the virtual stage and putting a dinosaur right next to the astonished audience.

All talks were professional and well prepared. The side workshops, however, seemed to be plagued by practical preparation issues, so it was not a bad tradeoff to stay for the talks instead.

We really hope there will be another one, as we truly enjoyed it. The conference was well prepared and a lot of fun to attend. Thanks to Johann Romefort and the rest of the Stylight guys for a memorable day.

C++ now and in the future

| Dietrich

C++ has been around for a long time; almost 20 years have passed since its ISO standardization. With the major changes of recent years, C++ developers need to update their knowledge. E.S.R. Labs gave us a great opportunity to do that: in a three-day workshop held by Nicolai M. Josuttis we learned the newest and hottest features of C++11/14.


To get a feel for the impact of the changes and their implications for developers, consider the following quote from the master himself, Mr. Stroustrup: “C++11 feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever.” Mr. Josuttis adapted his book “The C++ Standard Library – A Tutorial and Reference” to the new standard, and it cost him an incredible amount of work to do so. To get a first impression, just compare vector.h from C++98 to the version from C++11.

This article gives an impression of C++11/14. Unlike other articles about the new standard, this one does not go into technical details; multiple books and sites are definitely better suited for that. Instead, this blog entry tries to give a different view of the newest standards and present some of our own thoughts on standardization, the situation in our industry, and more.

C++ history and current status

Adding object orientation to C was one of the main goals of Bjarne Stroustrup, so he started implementing “C with Classes”. It was later renamed to C++ and became one of the most popular general-purpose languages ever. If you are curious about some real-world applications that use C++, you can take a look at C++ applications.

Contributing to its success was the standardization process, which produced the first ISO standard in 1998. The following picture shows a timeline of past and upcoming releases.



Initially standardized in 1998, the language saw little to no change in the following years. Starting in 2005, the standards committee kicked off the informally named C++0x. But even the experts on the committee are no magicians, so it took another six years until the new standard, C++11, was published. Containing a huge amount of changes and new features, this release boosted the popularity of C++. While C++14 was a minor release, the current project, C++17, is going to introduce major new features again. Highly influenced by the Boost libraries, we will see multiple key features of Boost adopted into future releases of the STL.

Workshop Impressions

The workshop was held by Nicolai M. Josuttis. As the author of multiple well-known books in the C++ world and a member of the standardization committee, he gave us great insight into the depths of our main programming language. Switching between examples and theory helped us understand the impact of the newest version of the language. During coding demos, regular C++03 source code was transformed step by step into fancy-looking C++11/14-compliant code, and after each change the performance was compared. Long story short: the new standard provides some nice opportunities for performance improvements. One of the greatest parts of this workshop was the discussion of the implications for different kinds of developers: divided into application developers, class designers and generic programmers, the impact of the new standard on our work was well summarized. Last but not least, we got some insights into the standardization process itself, including its challenges and possibilities.

Stakeholder’s wishes

It is not only software-developing industries like automotive that need to care about their stakeholders’ needs; standardization is affected by them as well. Imagine you sit in a giant conference room discussing possible features and changes for new versions of C++. Sounds good, right? Let’s take a closer look and check whether it really is that much fun.

Reasons why C++ standardization is hard

  • No ultimate single instance that makes critical decisions.
  • If the compiler vendors don’t want it, throw it away.
  • Backward compatibility.

Although standardization led C++ to incredible success, the process itself is not easy. It is totally possible, and actually happening, that some features introduced in the newest releases are not liked by Mr. Stroustrup. Compared to the policies of various open source projects, this is a huge difference: Linus Torvalds, for example, can reject any feature he wants and is thus able to decide critical issues by himself.

Against the huge amount of ideas from different people stands their counterpart: the compiler vendors. No matter how great your idea is, if the compilers don’t support it, you have lost the battle. At the end of the day you need to convince your stakeholders, just like in any other industry.

And the last point: backward compatibility is a goal of the whole development. This has led to some interesting and confusing constructs over time. Herb Sutter said the following about the constraints of compatibility:

“Yes, C++ is complex and the complexity is largely because of C compatibility. I agree with Bjarne that there’s a small language struggling to get out — I’ve participated in private experiments to specify such a language, and you can do it in well under half the complexity of C++ without losing the expressivity and flexibility of C++. However, it does require changes to syntax (mainly) and semantics (secondarily) that amount to making it a new language, which makes it a massive breaking change for programs (where existing code doesn’t compile and would need to be rewritten manually or with a tool) and programmers (where existing developers need retraining). That’s a very big tradeoff.”

Read the full article by Herb Sutter here.

Modern world vs. embedded world

I don’t want to generalize, but our beloved embedded industry is somewhat lagging behind when it comes to the newest trends in software development. Just ask yourself: is my embedded company using the newest C++ version, or even an object-oriented language? One of the main reasons is the compiler: some of the compilers in embedded environments don’t support the newest features. E.S.R. Labs, for example, uses the Diab compiler, which currently has no support for C++11/14.

Moving to C++ 11 / 14

  • Is there a way to transform old C++ code into modern C++11/14 code? Yes – try the Clang modernizer and evaluate it for yourself.

Clang modernizer

  • Different C++ compilers support different features. The following website provides feature-testing recommendations for new features. You can also check the website that lists the support for new features in common C++ compilers.

Feature-testing recommendations for C++

Compiler support

  • Additionally, you should take a look at the C++ Core Guidelines maintained by Bjarne Stroustrup and Herb Sutter.

Cpp Core Guidelines


On behalf of the participants, we would like to thank E.S.R. Labs and Nicolai M. Josuttis for the great learning opportunity. The next step is to evaluate the advantages under embedded constraints.

LabDays 2015: A cabin in the woods

| Henri

This year we had our annual hackathon in the Austrian outback, in a region called Bregenzerwald. In this beautiful environment, ESRLabs employees gathered in a summer cabin for three days, devoting their time to cool software projects. The scenery in Bregenzerwald is spectacular and provides great opportunities for outdoor activities in the woods and hills. The weather also happened to be unusually warm considering the highland altitude.

The surroundings were of lesser importance, however, because our mission was to dedicate our time to hacking. The essence of the LabDays is to let people stretch their imagination by bringing in new ideas and gathering a team to implement them. Optimally, an idea is concise enough to be implemented in just a couple of days, but it can also be part of a bigger project. At the end of the event each team should have a result that they can present to the others and have available for further development. The important thing is that everyone has a chance to work with different people and on different topics than they normally do in their daily work.

Project ideas that popped up included an automated plant-watering system, an embedded web server, a publisher/subscriber framework, a JavaScript visualization for AUTOSAR and a card game running on Android. With a variety like this, there was something interesting for everyone, and people could pick a task where they could use their existing skills while also learning something new.

Because our cabin was in the wilderness in the middle of nowhere, we also had to prepare the food ourselves, so another big topic at the event was cooking. The food was delicious, indicating that ESRLabs might have some future in catering if we ever get out of software.

When not coding or cooking, people were having a great time taking part in various recreational activities like biking, swimming, hiking or having a beer.

The place itself resembled a small farm because it was swarming with animals: ducks, geese, a lot of flies and an omnivorous dog called Nico.

Sometimes the hacking got out of hand and lasted throughout the night, ending with someone passing out on the keyboard and waking up with a headache.

“I will never hack again.”

Finally, there were the demos. All of them were inspiring, and many projects will be developed further after the LabDays, either as a side project or during the next such event. In addition to the nice LabDays T-shirts, people went home with a bunch of mini projects, some of which might even grow into real projects one day.

TUM workshop 2015

| Andrei

Every year we take the time to prepare a workshop for students together with the Department of Computer Science at TU Munich. In previous years the students’ task was to program Java on development boards running Android. This year we wanted to take it to the next level and give them an idea of how we work in one of our projects, in which we build a car-sharing solution.


The task involved writing native code for an embedded chip and implementing an application running on an Android controller. Here is a description of the main task. The students formed teams of four, which were further split into two sub-teams to solve the two different tasks, one team for each controller.

To make it feasible for the students to complete the task, we provided them with a test suite to check their implementations. If all the tests were green, they could submit their solution and get to see it running in our BMW i3 test car.


In total, more than 100 students participated, and at the end of the exercise we had 16 working solutions. Last Tuesday we invited all of the teams that successfully completed the assignment to our office so they could see their software running on a real system. Tuesday came and it was a perfect day for a test drive – the sun was up, as were the temperatures (maybe a tad too much, but we shouldn’t complain since we don’t get many days like these in Munich). We thought it appropriate to give the students some hints on how they could improve their coding skills, so we started the event with a presentation on some of the common mistakes and how to avoid them. Afterwards we went to the car and began the test drives.

Team after team, everybody got to see their solution deployed and actually running in the car! While one team was testing, the others would relax by the pool (and yes, we have a pool in our office complex :).


At the end we selected the three best-performing teams and awarded them prizes (each member of the winning team received a Raspberry Pi 2). All in all, we had a lot of fun preparing the workshop and spending the afternoon together with the students test-driving their solutions.

C++ Now 2015

| Tom

by Tom and Frank

When people hear the name Aspen, most of them think about high society and skiing. Especially during the winter season, that might be an appropriate description of this small town surrounded by snow-covered 4000-meter giants in the Rocky Mountains. In late spring, after the snow crowd has left, you will find scenic Aspen taking a breather: quiet, nearly deserted, with most winter and fashion stores in the city center closed. But there is a week in May when a horde of C++ enthusiasts gathers there for C++Now. It’s a non-profit conference that was initiated as BoostCon by Dave Abrahams and Beman Dawes in 2006 as a forum for the Boost community to meet face to face. In 2012 it was opened up to general C++ topics, accompanied by a renaming. It is organized by Boost and the Software Freedom Conservancy and chaired by Dave Abrahams and Jon Kalb. The conference is still limited to 150 attendees, which makes it a great place to get in touch with C++ experts and some of the most influential people of the C++ community. All sessions are held either in the Paepcke Auditorium of the Aspen Institute or in the buildings of the Aspen Center for Physics that are scattered around it in the meadows outside of Aspen.

Out for a road trip

After the good experience I had at C++Now 2013, we were glad that ESR Labs sent us to attend this year’s C++Now and sponsored the trip. As the conference started on a Monday evening with registration and an informal gathering, we could use the weekend and arrived a couple of days early for a short excursion through the neighboring state of Utah. We visited Arches National Park, took the venturesome Moki Dugway up Cedar Mesa and followed an insider’s tip to Muley Point, which offers breathtaking views over the winding San Juan River and the desert landscape of southern Utah and northern Arizona, with Monument Valley within view. We hiked to Sipapu Bridge (the world’s second-largest natural bridge) in Natural Bridges National Monument, visited Capitol Reef National Park and had a great time at Bryce Canyon, hiking to the floor of the Bryce Amphitheater and watching both sunset and sunrise for some fantastic pictures and time lapses.

The conference

Back in Aspen, we caught up with some familiar faces from two years ago and got acquainted with the other attendees. The first “real” conference day started with a short welcoming speech by Jon Kalb, followed by a very special C++Now session: Library In A Week (LIAW). Jeff Garland invented this format for the very first BoostCon with the goal of creating the groundwork of a new Boost library with all interested attendees during one week. It was so successful that it has become an institution, returning every year since.

The talks fell into two main categories: they either described how the Standardization Committee and Boost are driving the development of future language extensions, or they showed practical uses of modern C++14 features. Here are our favorites in both categories:

Future of C++/Boost

Andrew Sutton: Concepts Lite

Before Andrew joined the University of Akron as an assistant professor, he was working as a postdoc researcher together with Bjarne Stroustrup and Gabriel Dos Reis at Texas A&M University. Their work focused on language support for generic programming. Concepts Lite is a language extension that allows the programmer to attach formal requirements to templates, enabling formal verification and much-improved compiler diagnostics.
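To give a flavor of what that looks like: the sketch below uses the Concepts TS draft syntax of the time (the `concept bool` form). `Sortable` and its listed requirements are placeholders of ours, not Andrew’s actual examples, and the syntax changed again before concepts finally landed in the standard.

```cpp
// Concepts Lite / Concepts TS draft syntax -- not valid standard C++14.
template<typename T>
concept bool Sortable = requires(T c) {
    c.begin();    // the type must provide an iterator range...
    c.end();
    // ...plus an ordering on its elements (elided here).
};

// The constraint is now part of the template's interface: passing an
// unsuitable type yields a short "T does not satisfy Sortable" diagnostic
// instead of pages of instantiation errors.
template<Sortable T>
void sort(T& container);
```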

Eric Niebler: Ranges

Eric is a C++ library expert currently working for the Standard C++ Foundation on ranges for C++. Ranges add lightweight, composable views on container classes without the need to copy them or to pass iterator pairs around by hand. They will presumably become available as part of an “STL 2.0” with C++1z or even later.

Louis Dionne: Boost.Hana

Louis is an incredibly smart French-Canadian college student from Quebec who received a scholarship from Boost to work on the Hana library during his studies. He proclaimed a paradigm shift in metaprogramming through his library, which makes intensive use of new C++14/1z language features.

Practical C++

Scott Schurr: constexpr

Scott is an embedded software specialist at Ripple Labs (Portland, OR). He demonstrated examples where the constexpr feature of C++11, and especially C++14, can be used to execute computations at compile time.
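As a small illustration of the kind of thing Scott showed (this is our own example, not his): C++11 restricts a constexpr function to essentially a single return statement, while C++14 relaxes the rules to allow loops and local variables.

```cpp
#include <cassert>

// C++11 style: compile-time computation via recursion.
constexpr unsigned factorial_cxx11(unsigned n)
{
    return n <= 1 ? 1 : n * factorial_cxx11(n - 1);
}

// C++14 style: loops and local variables are allowed in constexpr.
constexpr unsigned factorial_cxx14(unsigned n)
{
    unsigned result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

// Both values are computed by the compiler, not at runtime.
static_assert(factorial_cxx11(5) == 120, "evaluated at compile time");
static_assert(factorial_cxx14(6) == 720, "evaluated at compile time");
```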

Sebastian Redl: switchAny

Sebastian, from Vienna, is a contributor to Clang, a Boost library maintainer, and is currently finishing his master’s thesis. In this hands-on session he demonstrated the use of template metaprogramming to solve a real-world problem: employing nearly every TMP technique, he built a small toolset that can switch on the type contained in a Boost.Any holder.
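Sebastian’s toolset targeted Boost.Any; the following is a much simpler hand-written sketch of the same idea using std::any (the C++17 descendant of Boost.Any): dispatch to a handler overload depending on which type the type-erased holder currently contains. The `Handler` and `switchAny` names are ours, purely for illustration.

```cpp
#include <any>
#include <cassert>
#include <string>

// A handler with one overload per supported type.
struct Handler
{
    std::string last;
    void operator()(int v)                { last = "int:" + std::to_string(v); }
    void operator()(const std::string& v) { last = "string:" + v; }
};

// "Switch" on the contained type by probing the candidate types in turn;
// any_cast on a pointer returns nullptr instead of throwing on mismatch.
inline bool switchAny(const std::any& a, Handler& h)
{
    if (const int* p = std::any_cast<int>(&a))                  { h(*p); return true; }
    if (const std::string* p = std::any_cast<std::string>(&a))  { h(*p); return true; }
    return false;  // none of the known types matched
}
```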

Boris Kolpackov: A new build system for new C++

Boris is CHO (Chief Hacking Officer) at Code Synthesis in Cape Town (South Africa) and presented the new cross-platform build system he’s currently working on.

Around the conference

Other memorable moments for us were Tuesday’s dinner in the White House Tavern and talking to Chandler Carruth (C++ language and compiler lead at Google) about his work at Google, the new CppCon and the insanity of ISO C++ meetings. At the infamous picnic on Thursday, Louis Dionne was shocked that we had thought he was French. And Jeff seemed unable to moderate his LIAW session the next day after having “quite a few beers” with us :)

Some info

All presentation material can be downloaded via GitHub, and the video recordings will be made available on YouTube. Viewing these might give you an idea of the knowledge that was shared at C++Now. But you have to travel to Aspen, meet these people in person and talk to them during coffee, lunch and dinner breaks to get the full picture of this unique experience.


Google I/O 2015

| Veronika

“…That kind of work, it is inherently born of the human spirit. It is a little badass and beautiful. It is tech infused with our humanity. And it does not have love sprinkled on at the finish. The work itself is an expression of love. That’s what it feels like to us, in this fast boat, in this small band of pirates, trying as best as we can to do some epic shit” – here I quote the ATAP (Advanced Technology and Projects) presentation at Google I/O 2015. And I guess those words perfectly describe the atmosphere at the conference. The point is, it’s what is inside of you that defines whether you will enjoy something or find it boring.

I’ve read dozens of posts about Google I/O 2015. A lot of them claim that the conference lacked new devices and revolutionary announcements. However, that is not how I feel. The previously mentioned ATAP presentation alone showed us a bunch of breathtaking projects!

Project Soli – tiny gesture radar that recognizes micromotions of a human hand

Project Jacquard – touch-sensitive textiles that can be used to manufacture everyday clothes

Project Vault – secure computer the size of a microSD card

Project Ara – modular smartphone

I am sure that if you watched at least one of the videos above, you’ll agree it’s just awesome. That is exactly how I felt on both the first and the second day of the conference. Screw the jet lag and the bad weather – I was truly happy to be there and absorb all the information and emotions in real time!

In case you have not watched the keynote and sessions yet, I will highlight some of the major things that were presented at the keynote.

Android M – I am a fan of the new permissions philosophy and Doze :)

Google Now On Tap – Google analyzes everything. Not only when you google it :)

Google Photos – face recognition, better search, unlimited storage.

Offline Google Maps – with ability to search offline!

Android Wear Always-On Screen – most important things are always there on your wrist.

Project Brillo – new approach to building a smart home.

New Cardboard – improved model of a cardboard and Expeditions project.

Two days of the conference were over in a flash – and it was kind of sad to see everything wrapping up. For me, Google I/O 2015 was not only interesting but also inspiring. When you see the tremendous amount of work that is done not by magicians but by ordinary people, it is quite hard not to get inspired. I met a lot of wonderful developers, including our former colleague Sebastian, who, as a Googler, helped make my trip to the conference possible – thanks again, Sebastian!

Though I would not dispute that the Golden Gate is beautiful, I did not fall in love with San Francisco – it’s just not my type of city. The flight was also quite exhausting – but who cares? I attended Google I/O and certainly hope to come back again! :)

Labday event: SPI Shock

| Oliver

Once or twice a month we all take a day or two out of our ordinary business and organize a lab day. This is primarily meant to give everybody the chance to work on topics unrelated to the daily work, to learn about new technologies and techniques, and to have some fun developing something. Last week we had a two-day event because there was a pressing issue that could not wait: some people needed to be saved. Here is an excerpt of the story:

It is the year 2072. You and your team are a special-force hacker commando whose task is to free the people on the Citadel space station, where the AI computer Shodan has captured several people. A hacker has disabled Shodan’s Robot Law interfaces, so Shodan went berserk and turned off the life support on the space station. The oxygen reserves will last for 2 days; after that, everyone will die. Your team has limited time to access the mainframe where Shodan is located and upload a shutdown code to the kernel that will terminate the main AI process. Unfortunately, Shodan has disabled all terminals and closed all SSH ports. The only way to upload the shutdown sequence is over a special SPI hardware interface reserved for emergencies. Thus you have to access the flash drives over the SPI protocol. For this task your team is given a super-mega-mini-powerful Raspberry Pi mini computer, which will act as the master interface. An Arduino board will emulate the emergency slave interface. Feel free to choose your compiler/programming language.


SPI is a technology that is widely used in the embedded space, and we need it in almost every project. Still, not many people are familiar with the details of the protocol. Two of our developers who do know the details took the time to prepare a lab session a couple of weeks ago and gave a talk about SPI. With this background information, all of us had at least a basic grasp of the SPI protocol. Hearing about a technology is good, but nothing beats actually doing it yourself. So we decided to implement our own SPI driver just for fun, using a Raspberry Pi talking to an Arduino Uno.
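The core mechanism the teams had to implement can be modeled in a few lines: in SPI, master and slave each hold an 8-bit shift register and exchange one bit per clock edge, MSB first, so after eight clocks the two bytes have been swapped. The following toy software simulation (our illustration, not the actual driver code from the event) captures that full-duplex shift:

```cpp
#include <cassert>
#include <cstdint>

// One SPI shift register: shifts out its MSB while shifting in the bit
// arriving on the data line.
struct SpiShiftRegister
{
    uint8_t reg;

    bool clock(bool bitIn)
    {
        bool bitOut = (reg & 0x80) != 0;                        // sample MSB
        reg = static_cast<uint8_t>((reg << 1) | (bitIn ? 1 : 0)); // shift in
        return bitOut;
    }
};

// One full byte transfer: after 8 clocks, master and slave have
// exchanged their bytes (full duplex).
inline void spiTransferByte(SpiShiftRegister& master, SpiShiftRegister& slave)
{
    for (int i = 0; i < 8; ++i)
    {
        bool mosi = (master.reg & 0x80) != 0;  // master drives MOSI
        bool miso = slave.clock(mosi);         // slave shifts, drives MISO
        master.clock(miso);                    // master shifts in MISO
    }
}
```

On real hardware the same dance happens in the SPI peripheral (or, for a bit-banged driver, in GPIO toggling), with clock polarity and phase (the SPI “mode”) deciding on which edge each side samples.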

spilabdays2 spilabdays1

The lab days provided a good context to work on it, and 2 full days are a very suitable timeframe to actually get something working. We grouped into teams of 4 and started hacking. As always, it took some time to set up the hardware together with the basic development tools and to get used to the cycle of coding, testing and deploying the software to the target(s). For the Arduino this is made incredibly simple by the tools that are readily available for all major OSs: just implement some functionality in C, then compile and flash to the target via USB. Couldn’t be simpler. For the Raspberries, an SSH connection is all you need to deploy your code. Some teams used C++ for their implementation, others used Python or even Go. All in all, the development experience with both boards is rather smooth compared to the hardware we usually have to deal with.


After 2 days of implementation and disbelieving stares at oscilloscope screens, some teams finally cracked the challenge and were able to upload the virus to Shodan and kill it. Everybody got out alive, and it actually was a really helpful exercise to dive into a very important embedded protocol. Looking forward to the next session!

C++ Trait Mechanics — Part 2

| Stefanos | Comments 2

In this post, we are going to discuss more advanced uses of type traits. This entry continues where we left off in C++ Trait Mechanics — Part 1, so if you haven’t read that, now is a good time to do so.

The tl;dr version for those of you who don’t want to revisit the previous post: we were given a legacy embedded application used to control a certain function of the car, in our case the seat heating. The application had some requirements that probably explain why the code was written in such a way: although all the seats had to be heated, there were some subtle differences per seat, for example in the number of input and output signals.

To handle these differences, the original approach used multiple switch/if/else statements scattered throughout the code. The code was therefore rather large in terms of ROM and also quite slow for the functionality it provided. It was also a nightmare to maintain: a single change for a certain seat had to be tracked down and applied in all those if/else/switch cases. We thought that we could solve these problems better and more efficiently using trait mechanics and templates.

Some feedback I’ve received from the previous post:

  • Everything is now in ROM, your executable grew too large.
  • Why bother with templates at all? I can just write four distinct classes, each of which with one function and it will be perfectly optimized, inlined and I won’t have this template stuff!

Regarding the first bullet point: it is true that template programming depends on compile-time constants and type deduction, so we do need some ROM. However, multiple if/else statements also need ROM for the code to be executed, and they are less efficient in terms of performance. As always with template programming, you need to keep an eye on the ROM footprint of your generated code.

Regarding the second bullet point: yes, one could implement distinct classes. But what if we had 10 buttons? Or 20? Do we really need 20 distinct types? Would that be easy to maintain? What if 15 of those 20 buttons had identical functionality, while 3 had some extra functionality and the last 2 were a mix of both?

This is where type traits shine. In our previous example we had four buttons whose signals we read, and we output different signals based on their type:

void ButtonController::updateButtonLeds(unsigned id)
{
    switch (id)
    {
    case BUTTON_ID_1:
        updateOutputZ(_buttonOne._signalZ);
        break;
    case BUTTON_ID_2:
        updateOutputZ(_buttonTwo._signalZ + _buttonTwo._signalY);
        break;
    case BUTTON_ID_3:
        updateOutputZ(_buttonThree._signalZ + _buttonThree._signalY);
        break;
    }
}

Figure 1.

The main points here are:

  • All types output signal X and Y
  • One type outputs signal Z as it is
  • Two types output signal Z plus signal Y

One approach would have been to use distinct types for each scenario. Another would be a virtual output function that gets overridden in three concrete classes. Our approach was a base class that always outputs X and Y:

template<class ButtonTraits>
class ButtonBase
{
public:
    void read()
    {
        _signalX = typename ButtonTraits::ReadSignalOne()();
        _signalY = typename ButtonTraits::ReadSignalTwo()();
    }

    void update();

protected:
    typename ButtonTraits::SignalOneType _signalX;
    typename ButtonTraits::SignalTwoType _signalY;
};

Figure 2.

And an extended one, which takes an additional template argument “EnableSignalAddition” that defaults to void. This may look like some kind of mystery, but this little argument helps us with our “signal addition” problem. Since we have a default argument, if we provide nothing when instantiating the class, the compiler happily puts void in that place and gives us the “default” implementation of the class, which outputs signal Z as it is:

template<class ButtonTraits, class EnableSignalAddition = void>
class ExtendedButton : public ButtonBase<ButtonTraits>
{
public:
    void read()
    {
        _signal = typename ButtonTraits::ReadSignalThree()();
    }

    void update();

protected:
    typename ButtonTraits::SignalThreeType _signal;
};

Figure 3.

Now how can we solve the signal addition problem? One approach would have been to explicitly specialize the class template for the other two types:

template<>
class ExtendedButton<ButtonTwoTraits, void>
{
    // provide special update function
};

template<>
class ExtendedButton<ButtonThreeTraits, void>
{
    // provide special update function
};

Figure 4.

Would this have worked? Yes. Do we like it? No. Why? Well, we more or less copied and pasted the same class twice for no reason other than to provide an implementation for a function. You’ll say: “But wait, isn’t this what we wanted to do?” Yes, but how about we let the compiler work for us?

template<class ButtonTraits>
class ExtendedButton<ButtonTraits,
      typename std::enable_if<
          std::is_same<ButtonTraits, ButtonTwoTraits>::value ||
          std::is_same<ButtonTraits, ButtonThreeTraits>::value
      >::type> : public ButtonBase<ButtonTraits>
{
public:
    void read()
    {
        _signal = typename ButtonTraits::ReadSignalThree()();
    }

    void update();

protected:
    typename ButtonTraits::SignalThreeType _signal;

    typename ButtonTraits::SignalThreeType calculateZ()
    {
        // _signalY lives in the dependent base class, hence this->
        return static_cast<typename ButtonTraits::SignalThreeType>(
            _signal + this->_signalY);
    }
};

Figure 5.

You are probably wondering what this enable_if mumbo jumbo is all about. It’s quite simple actually. The implementation of enable_if looks like this:

template<bool B, class T = void>
struct enable_if { typedef T type; };

template<class T>
struct enable_if<false, T> { };

Figure 6.

For any given type T, it provides a typedef named type when B is true; when B is false, it provides no typedef at all. std::is_same yields true if the two template types are the same, and false otherwise. This expression:

typename std::enable_if<
    std::is_same<ButtonTraits, ButtonTwoTraits>::value ||
    std::is_same<ButtonTraits, ButtonThreeTraits>::value
>::type

Figure 7.

will name a type if, and only if, ButtonTraits is ButtonTwoTraits or ButtonThreeTraits. With this partial specialization, the compiler will generate two “different” classes for those types, but we only have a single class to maintain. You can imagine how this scales with even more types. In this partial specialization we provide a calculateZ function which does what we want.
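Stripped down to its essentials, the whole mechanism looks like this (a minimal, compilable sketch with hypothetical traits types standing in for the article’s ButtonTwoTraits/ButtonThreeTraits; the real classes carry the read/update machinery shown above):

```cpp
#include <cassert>
#include <type_traits>

// Illustrative traits types; only the signal typedef matters here.
struct ButtonOneTraits   { typedef int SignalThreeType; };
struct ButtonTwoTraits   { typedef int SignalThreeType; };
struct ButtonThreeTraits { typedef int SignalThreeType; };

// Primary template: "default" behavior, signal Z passed through as-is.
template<class Traits, class EnableSignalAddition = void>
struct ExtendedButton
{
    typename Traits::SignalThreeType calculateZ(int z, int /*y*/) { return z; }
};

// One partial specialization covering BOTH addition-capable traits types:
// enable_if only names ::type when the condition holds, so the
// specialization is selected exactly for those two traits.
template<class Traits>
struct ExtendedButton<Traits,
    typename std::enable_if<
        std::is_same<Traits, ButtonTwoTraits>::value ||
        std::is_same<Traits, ButtonThreeTraits>::value
    >::type>
{
    typename Traits::SignalThreeType calculateZ(int z, int y) { return z + y; }
};
```

One class template, one specialization, three distinct behaviors resolved entirely at compile time.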

So, with a single partial specialization and no polymorphic or virtual functions, we have the desired behavior. In a sense, enable_if allows us to group partial specializations into a single implementation. Finally, let’s imagine that for some reason one of the above types needs an “extra function” while the other needs none. Since we have already partially specialized this class to get here, further specialization is no longer an option. So what do we do? A virtual function and a concrete class would solve this, but then we would have the performance overhead plus an extra class to maintain. We don’t like that. How about plain old overloading on types?

template<class ButtonTraits>
class ExtendedButton<ButtonTraits,
      typename std::enable_if<
          std::is_same<ButtonTraits, ButtonTwoTraits>::value ||
          std::is_same<ButtonTraits, ButtonThreeTraits>::value
      >::type> : public ButtonBase<ButtonTraits>
{
public:
    void mySpecialFunc()
    {
        // Dispatch on the traits type: overload resolution picks the
        // matching variant at compile time.
        mySpecialFunc(ButtonTraits());
    }

private:
    void mySpecialFunc(ButtonTwoTraits)
    {
        // Do my special thingy here
    }

    void mySpecialFunc(ButtonThreeTraits)
    {
        // Intentionally does nothing.
    }
};

Figure 8.

Same functionality without performance costs. Chances are the optimizer is smart enough to eliminate mySpecialFunc(ButtonThreeTraits) altogether, since it does nothing at all. Here we simply exploit the fact that we have distinct types, which results in the correct overload being called.

To summarize in a few points:

  • Idioms like enable_if allow us to control code generation at compile time, thus providing us all the tools we need for special implementation based on types.

  • Having a traits type for a class, allows us to provide different overloaded methods based on that type without having to resort to virtual functions for the same effect.

  • Maintaining or extending such a code when a new type is introduced is not really that difficult since we need to change/add code in very few places.

  • If virtual dispatching and the associated runtime costs are a problem for you, as they were for us, then static dispatching is the way to go. Of course, one always has to weigh the pros and cons, e.g. runtime vs. ROM consumption.

I hope you enjoyed reading this article as much as I enjoyed writing it.


A field trip to the Linux Conference

| Florian B.

At E.S.R.Labs we are always interested in improving ourselves. One way to achieve that is by attending conferences, workshops, hackathons, barcamps or happenings like that. Luckily these field trips are welcome and sponsored by the company.

Nico and I decided to visit the (Embedded) Linux Conference in Düsseldorf in October. The preparation time was quite short, as the decision was made one week before the event. Jessi, our team assistant, managed to book flights, hotel rooms and conference tickets for us in no time… basically we had nothing to prepare. Many thanks, Jessi, for that great service!


Here is our personal journal:

Day 0 We arrived at Düsseldorf airport on Sunday morning and picked up a DriveNow car. Quite fascinating that you can unlock these cars with your smartphone. We drove to the hotel, dropped off our luggage and headed out to visit downtown Düsseldorf. Later on we went to the conference’s “first-time meet and greet”. It was a great opportunity to meet new people and have some interesting conversations. Afterwards we went out to try some of Düsseldorf’s well-known “Altbier” – a type of beer I still need to get used to. We would have gone to bed at a reasonable time but, unfortunately, we missed the last U-Bahn…

Day 1 – 3 Like on a regular Monday, we got up with a lack of sleep. Luckily, coffee usually does its job; we found some in the hotel restaurant and started out with a great breakfast. Afterwards we headed to the conference center, where, like everybody else, we first registered and got our LinuxCon badge and shirt.

At 9 am the first keynote started, and the conference was immediately in full swing. We dove into presentations, discussions and all that conference stuff all through Monday and Tuesday (you’ll find a list of our personal highlights and trending topics at the end of this blog entry).

But Wednesday wasn’t a regular conference day, so we had a mission statement: “BE EARLY AND GET A SEAT!”. Why? Linus himself had something to say…

…and people really did want to hear what he had to say. Between 9 and 10 am, about 1500 people tried to enter a room meant for about 1200. But somehow everybody found some kind of space. Lucky we had the right mission statement for the day :).

Our personal highlights of the conference:

Day 1

  • Enhancing Real-Time Capabilities with the PRU
  • 12 Lessons Learnt in Boot Time Reduction
  • Bluetooth Low Energy and Internet of Things
  • Choosing your system C library

Day 2

  • Embedded Android Workshop
  • Demystifying Android’s Security Underpinnings
  • USB and the real world
  • Chrome OS internals

Day 3

  • Fast Boot: Profiling and Analysis Methods and Tools
  • Advanced Linux Server-Side Threats
  • Network Queuing is All Wet

Many slides were published here

Trending topics of the conference were:

Overall, we learned a lot of cool new stuff and got extremely motivated to dig deeper into new technologies. If you haven’t been to a conference yet: try it and be inspired!

See you in Dublin 2015!

Cheers Nico & Flo

C++ STL for Embedded Developers

| John Hinke | Comments 13


C++ embedded programming is very difficult. There are limitations that are not always present in traditional programming environments, such as limited memory, slower processors, and older C++ compilers. Embedded C++ programmers must typically avoid using new and delete to prevent memory fragmentation and to maximize the amount of memory available to their applications. Writing C++ applications that don’t use new and delete is quite a challenge! Unfortunately this means that most of the standard C++ library is not usable. But what if we could use the STL? Wouldn’t it be nice to write something like this:

void processCANFrames(const vector<CANFrame>& frames)
{
    for (vector<CANFrame>::const_iterator frame = frames.cbegin();
         frame != frames.cend(); ++frame)
    {
        // do something interesting with the CANFrame
    }
}

Now you can!

At E.S.R. Labs, we have many years of experience writing high-quality embedded C++ applications, and we have developed a set of best-practice processes and frameworks to support that work. One of the libraries that has been very useful is our Embedded STL (ESTL). Our ESTL looks and feels very similar to the normal C++ STL, but it does not use new or delete and is optimized for embedded development. It also uses only C++98 features, since those are more portable across the various embedded devices we support.

Rewriting the STL to avoid new and delete is a challenge. Nearly all of the container classes in the STL use new or delete in some way. This means we must change the way we think about the containers. In some cases this requires us to impose limitations on the container that are not present in the standard STL.

No new or delete

What can programmers do if they shouldn’t use new or delete? This is the tricky part. The size of a collection, i.e. the maximum number of elements in it, must be defined at development time; it cannot be determined at runtime. While this might seem like an unreasonable limitation for most applications, for embedded systems it is actually quite common and leads to more robust applications.

For example, if you are building an application for a car, it is much better to know during development that you don’t have enough memory than to have the application crash while the car is driving due to a random out-of-memory error caused by memory fragmentation. That wouldn’t be very nice and would probably be quite difficult to debug.

Design Goals

When we designed the ESTL we had the following design goals:

  • Improve the quality of our embedded C++ code.
  • An API that looked as close to the normal STL as possible to reduce the time necessary for new developers to learn our library.
  • Re-use as much of the STL as possible such as the algorithms and iterator concepts.
  • Remove implementation defined behavior.
  • Improve the debugging of embedded applications.

We wanted to provide a library that would reduce or eliminate some common development problems we had experienced such as the previously mentioned memory fragmentation issues. By also providing an API that looks and feels like a standard STL API we can reduce the learning curve for developers who are already familiar with the STL. And most importantly we wanted to eliminate any implementation defined behavior that can be quite common in the STL.

Container Overview

Our ESTL API has a good suite of STL-like containers.

  • Array
  • Sorted array
  • Forward list. An intrusive list.
  • Deque. A double-ended queue.
  • Stack
  • Queue
  • Priority Queue
  • Array map
  • Vector

There are some differences, however. The forward list is an intrusive list, which means that the elements in the list must inherit from a common node base class. This requirement exists because we cannot use new and delete, so the elements inserted into the list must themselves contain the next pointer.

The sorted array class doesn’t exist in the STL but we added it because we found that it was a very useful class to have.

The array_map class is slightly different from a normal map. In the array map we must specify the maximum number of key elements in the map. In our case the map class is implemented as a sorted array of pairs.

Simple Example

Our first example looks at the common vector class. To avoid using new and delete, we must tell the container how much space the vector will need, and we have to do that at compile time, not at runtime. We make the size of the vector part of the template signature. For example:

template <class T, size_t N> class vector;

We can then create vectors of different sizes:

vector<int, 10> aVec;
vector<int, 50> anotherVec;

The keen observer will notice that any function accepting a vector would then need to be templatized on the size_t parameter. But this seems silly, since we would have to change all of our functions to accept a size parameter. We have solved this problem in a rather clever way: we have two vector classes. One is used to declare a vector, while the other is for using a vector. For example:

namespace estl {
  // use this vector as a parameter type in functions
  template<class T> class vector { /* ... */ };

  namespace declare {
    // use this to declare a vector with a fixed capacity
    template<class T, size_t N> class vector : public estl::vector<T> { /* ... */ };
  }
}

class MyClass
{
public:
    // use a vector of int. The size does not matter.
    void myFunction(const estl::vector<int>& vec);

private:
    // declare a vector and specify the size.
    estl::declare::vector<int, 10> actualVec;
};

This way your code can use vectors of any size. Our vectors also support the standard iterator methods, so you can continue to use the standard algorithms. All of our containers support this pattern: the data structures can be used without knowing the size of the containers; only when you declare a container do you need to specify its size. We have placed all of the declaration classes into a declare namespace.
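The pattern above can be sketched in a few lines. This is our simplified illustration of the idea, not the actual ESTL implementation: the base class only holds a pointer to the storage plus the capacity, while the declare class owns the storage as a fixed-size member array, so no heap allocation ever happens.

```cpp
#include <cassert>
#include <cstddef>

namespace estl {

// Size-erased interface: functions take estl::vector<T>&,
// independent of the declared capacity.
template<class T>
class vector
{
public:
    std::size_t size() const     { return _size; }
    std::size_t max_size() const { return _capacity; }
    T& operator[](std::size_t i) { return _data[i]; }

    void push_back(const T& v)
    {
        assert(_size < _capacity);   // fixed capacity: fail loudly when full
        _data[_size++] = v;
    }

protected:
    vector(T* data, std::size_t capacity)
        : _data(data), _size(0), _capacity(capacity) {}

private:
    T* _data;
    std::size_t _size;
    std::size_t _capacity;
};

namespace declare {

// Capacity-carrying class: all memory lives in the member array.
template<class T, std::size_t N>
class vector : public estl::vector<T>
{
public:
    vector() : estl::vector<T>(_storage, N) {}
private:
    T _storage[N];
};

} // namespace declare
} // namespace estl
```

Callers declare `estl::declare::vector<int, 10>` but pass it around as `estl::vector<int>&`, so the capacity never leaks into function signatures.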

API Changes

We had to make a few changes to the normal STL API to make it efficient and obvious. For example, the normal STL vector has methods that change the capacity or size of the vector; those methods make no sense for a fixed-size vector. Another example is the push_back method of the vector class, which has several subtle issues when used with a fixed-size vector. In the normal STL the method looks like this:

void push_back(const value_type& val);

In our embedded STL we have two methods:

void push_back(const value_type& val); // 1
value_type& push_back();               // 2

There are two reasons to have these two methods. The first method (1) is the normal STL method: it makes a copy of the parameter and adds it to the vector. But that might not always work; in embedded C++ programming a lot of objects are uncopyable, and copying objects can lead to shared state or confusing object-ownership problems. This is why we have the second method (2): it returns a reference to the underlying object, which we can then set to the value we want.

As a reminder, the vector’s size is fixed; we cannot increase it. So what should happen if push_back is called on a full vector? In the normal STL, the vector would dynamically grow, which means new memory is allocated, the vector data is copied, and the old data is deleted. These are operations that are not possible in an embedded environment. In our ESTL the methods are implemented like this:

void push_back(const value_type& val)
{
    // if we are full then we need to fail!
    // otherwise copy the item
    data[size++] = val;
}

value_type& push_back()
{
    // if we are full then we need to fail!
    // otherwise return a reference to the underlying item
    return data[size++];
}

We made the decision to assert if the program calls push_back on a full vector. This might not always be the best behavior, but it was the only way to enforce safe C++ code. We felt it would be more dangerous to return a reference to random data that might cause a crash; with the assert, we immediately know that we have done something wrong.


Using the STL in an embedded environment was previously off-limits due to memory-management constraints. With the E.S.R. Labs ESTL we have made it possible for developers to use STL-like data structures in their embedded applications.

Interested? Check it out on GitHub.

A good discussion can be found on Reddit.

C++ Trait Mechanics — Part 1

| Stefanos | Comments 2

In this post, I am going to demonstrate how you can use C++ traits to produce more robust, maintainable and optimized code. Recently I inherited a legacy code base responsible for reading and delegating signals from various hardware buttons. After the signals were read, their values were used to update LEDs and inform other ECUs about the state of the buttons. However, we noticed that for such limited functionality the code consumed quite a lot of ROM and CPU time. Our benchmarks revealed that the code below was executed every 10 ms; it was written in plain C. The read functionality was implemented like this:

void C_Implementation::readButton(unsigned const id)
{
    switch (id)
    {
    case BUTTON_ID_1:
        _buttonOne._signalX = networkBus::getButtonOneSignalX();
        _buttonOne._signalY = networkBus::getButtonOneSignalY();
        _buttonOne._signalZ = networkBus::getButtonOneSignalZ();
        break;
    case BUTTON_ID_2:
        _buttonTwo._signalX = networkBus::getButtonTwoSignalX();
        _buttonTwo._signalY = networkBus::getButtonTwoSignalY();
        _buttonTwo._signalZ = networkBus::getButtonTwoSignalZ();
        break;
    case BUTTON_ID_3:
        _buttonThree._signalX = networkBus::getButtonThreeSignalX();
        _buttonThree._signalY = networkBus::getButtonThreeSignalY();
        _buttonThree._signalZ = networkBus::getButtonThreeSignalZ();
        break;
    case BUTTON_ID_4:
        _buttonFour._signalX = networkBus::getButtonFourSignalX();
        _buttonFour._signalY = networkBus::getButtonFourSignalY();
        break;
    }
}

Figure 1.

Where _buttonOne etc. are of the following types:

struct ButtonTypeOne {
    unsigned char _signalX;
    unsigned short _signalY;
    unsigned char _signalZ;
};

struct ButtonTypeTwo {
    unsigned char _signalX;
    unsigned char _signalY;
    bool _signalZ;
};

struct ButtonTypeThree {
    unsigned short _signalX;
    signed char _signalY;
};
Figure 2.

Here we found our main bottleneck: we had about 15 such functions in the code, all differentiating between the buttons via switch/if/else statements. In addition to the performance problem, the code itself was not really maintainable. Adding or removing a single button required changing all of these functions, which is both tedious and error-prone; it is also a violation of the open-closed principle. Finally, let’s assume for this post’s purpose that the output functionality looked like this:

void C_Implementation::updateButtonLeds(unsigned const id)
{
    switch (id)
    {
    case BUTTON_ID_1:
        std::cout << "Button one output :\n"
            << "Signal X is : " << _buttonOne._signalX << "\n"
            << "Signal Y is : " << _buttonOne._signalY << "\n"
            << "Signal Z is : " << _buttonOne._signalZ << "\n";
        break;
    case BUTTON_ID_2:
        std::cout << "Button two output :\n"
            << "Signal X is : " << _buttonTwo._signalX << "\n"
            << "Signal Y is : " << _buttonTwo._signalY << "\n"
            << "Signal Z is : " << _buttonTwo._signalZ + _buttonTwo._signalY << "\n";
        break;
    case BUTTON_ID_3:
        std::cout << "Button three output :\n"
            << "Signal X is : " << _buttonThree._signalX << "\n"
            << "Signal Y is : " << _buttonThree._signalY << "\n"
            << "Signal Z is : " << _buttonThree._signalZ + _buttonThree._signalY << "\n";
        break;
    case BUTTON_ID_4:
        std::cout << "Button four output :\n"
            << "Signal X output is : " << _buttonFour._signalX << "\n"
            << "Signal Y output is : " << _buttonFour._signalY << "\n";
        break;
    }
}

Figure 3.

Note that in the code in Figure 3, for two of the buttons (BUTTON_ID_2, BUTTON_ID_3) the signal Z is a sum of other signals, while for the other two buttons it is not. This is another problem with the above implementation; we will see an alternative implementation later on. So let’s try to tackle those problems. In order to improve the performance, we can start by removing all these switch and if statements. A common approach is to introduce an abstract base class declaring two functions, readButton and updateButtonLeds. Since we do not want to repeat ourselves, we can give that class a template argument, the button type, and have concrete classes implement the correct functionality.

Considering Figure 2, this approach would look like this:

template<class ButtonType>
class Base
{
public:
    virtual ~Base() {}

    virtual void readButton() = 0;
    virtual void updateButtonLeds() = 0;

protected:
    ButtonType _button;
};

Figure 4.

So that would be our abstract class, where ButtonType is any of the structs presented in Figure 2. A concrete implementation might look like this:

class ConcreteOne : public Base<ButtonTypeOne>
{
public:
    virtual void readButton();
    virtual void updateButtonLeds();
};

void ConcreteOne::readButton()
{
    _button._signalX = networkBus::getButtonOneSignalX();
    _button._signalY = networkBus::getButtonOneSignalY();
    _button._signalZ = networkBus::getButtonOneSignalZ();
}

void ConcreteOne::updateButtonLeds()
{
    std::cout << "Button one output :\n"
              << "Signal X output is : " << _button._signalX << "\n"
              << "Signal Y output is : " << _button._signalY << "\n"
              << "Signal Z output is : " << _button._signalZ << "\n";
}

Figure 5.

Using this approach we were able to remove all switch/if/else statements from our client code, which now looks like this:

void Classic_Polymorphism::Test()
{
    ConcreteOne buttonOne;
    buttonOne.readButton();
    buttonOne.updateButtonLeds();
    // ... likewise for the other concrete button objects
}

Figure 6.

Unfortunately, the problem now is that we need a concrete class for each of the four buttons, because the buttons’ behaviors are similar but not identical. Additionally, we introduced virtual tables, and we still depend on explicit knowledge of the signal types. Adding virtual tables will certainly not help with our performance issues compared to the original C code. So we may want to reconsider our strategy here.

What if we could abstract our button objects in such a way that the clients of these classes would not have to know anything about the signal types, thus removing the casts, while also providing type safety and flexibility for future implementations? What if we were to also remove the virtual functions altogether? Is this possible? By using traits it is. But what are traits?

In short, traits are important because they allow you to make compile-time decisions based on types, much as you would make runtime decisions based on values. Better still, by adding the proverbial “extra level of indirection” that solves many engineering problems, traits let you take the type decisions out of the immediate context where they are made. This makes the resulting code cleaner, more readable and easier to maintain. If you apply traits correctly, you get these advantages without paying the cost in performance, safety, or coupling that other solutions may exact.
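To make the term concrete, here is a minimal, generic trait example (the names are illustrative and not taken from the button code): a trait maps a type to compile-time information, and client code can query that information without knowing the type's specifics.

```cpp
#include <iostream>

// A minimal trait: it maps a type to compile-time information.
// (Generic illustration; these names are not from the button project.)
template<class T>
struct SignalTraits; // primary template left undefined on purpose

template<>
struct SignalTraits<unsigned char>
{
    enum { BITS = 8 };
    static const char* name() { return "8-bit signal"; }
};

template<>
struct SignalTraits<unsigned short>
{
    enum { BITS = 16 };
    static const char* name() { return "16-bit signal"; }
};

// The client decides at compile time, based on the type alone:
template<class Signal>
void describe()
{
    std::cout << SignalTraits<Signal>::name()
              << " (" << int(SignalTraits<Signal>::BITS) << " bits)\n";
}
```

`describe<unsigned char>()` and `describe<unsigned short>()` produce different output, yet the function body never mentions a concrete type: the trait carries the "extra level of indirection".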

So let’s start our next attempt to tackle this problem. We will begin by encapsulating all the networkBus functions mentioned before into function objects, also known as functors.

template<class Signal, Signal(*FUNC)()>
struct SignalWrapper
{
    typedef Signal SignalType;
    inline Signal operator()() const
    {
        return FUNC();
    }
};
Figure 7.

//typedefs for all available functions
typedef SignalWrapper<unsigned char, &networkBus::getButtonOneSignalX> ButtonOneSignalX;
typedef SignalWrapper<unsigned short, &networkBus::getButtonOneSignalY> ButtonOneSignalY;
typedef SignalWrapper<unsigned char, &networkBus::getButtonOneSignalZ> ButtonOneSignalZ;
typedef SignalWrapper<unsigned char, &networkBus::getButtonTwoSignalX> ButtonTwoSignalX;
typedef SignalWrapper<unsigned short, &networkBus::getButtonTwoSignalY> ButtonTwoSignalY;
typedef SignalWrapper<unsigned char, &networkBus::getButtonTwoSignalZ> ButtonTwoSignalZ;
typedef SignalWrapper<unsigned char, &networkBus::getButtonThreeSignalX> ButtonThreeSignalX;
typedef SignalWrapper<unsigned char, &networkBus::getButtonThreeSignalY> ButtonThreeSignalY;
typedef SignalWrapper<bool, &networkBus::getButtonThreeSignalZ> ButtonThreeSignalZ;
typedef SignalWrapper<unsigned short, &networkBus::getButtonFourSignalX> ButtonFourSignalX;
typedef SignalWrapper<signed char, &networkBus::getButtonFourSignalY> ButtonFourSignalY;

Figure 8.
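To see how such a wrapper is used, here is a self-contained sketch with a stubbed networkBus function (the stub and its return value are made up, since the real bus code is not shown in the post):

```cpp
// Stub for the (not shown) networkBus API -- illustrative only.
namespace networkBus
{
    inline unsigned char getButtonOneSignalX() { return 42; }
}

template<class Signal, Signal(*FUNC)()>
struct SignalWrapper
{
    typedef Signal SignalType;
    inline Signal operator()() const { return FUNC(); }
};

typedef SignalWrapper<unsigned char, &networkBus::getButtonOneSignalX> ButtonOneSignalX;

// Usage: default-construct the functor and invoke it. Because FUNC is a
// template parameter, the compiler can inline this down to a direct call.
unsigned char readX()
{
    return ButtonOneSignalX()();
}
```

There is no function-pointer indirection at runtime; the call target is fixed at compile time.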

While the previous step may look purely cosmetic, it reduces repetition and thus errors later on in the code. Our next step is to provide trait classes for the actual buttons. If you remember Figure 2, there were three kinds of buttons. Or were there only two? If we look at these classes again and abstract away the concrete signal types, we realize that we only need two types of button trait classes: one with two signals and one with three. Moreover, we can have a base trait class which provides all the information we need for two signals, and an extension of it for the additional third signal. So we come up with the following set of traits:

template<class SignalOneReader, class SignalTwoReader, int ID>
struct CommonButtonTraits
{
    typedef SignalOneReader ReadSignalOne;
    typedef SignalTwoReader ReadSignalTwo;

    typedef typename SignalOneReader::SignalType SignalOneType;
    typedef typename SignalTwoReader::SignalType SignalTwoType;

    static const int BUTTON_ID = ID;
};

template<class SignalOneReader, class SignalTwoReader, class SignalThreeReader, int ID>
struct ExtendedButtonTraits : public CommonButtonTraits<SignalOneReader, SignalTwoReader, ID>
{
    typedef SignalThreeReader ReadSignalThree;
    typedef typename SignalThreeReader::SignalType SignalThreeType;
};

typedef ExtendedButtonTraits<ButtonOneSignalX, ButtonOneSignalY, ButtonOneSignalZ, 1> ButtonOneTraits;
typedef ExtendedButtonTraits<ButtonTwoSignalX, ButtonTwoSignalY, ButtonTwoSignalZ, 2> ButtonTwoTraits;
typedef ExtendedButtonTraits<ButtonThreeSignalX, ButtonThreeSignalY, ButtonThreeSignalZ, 3> ButtonThreeTraits;
typedef CommonButtonTraits<ButtonFourSignalX, ButtonFourSignalY, 4> ButtonFourTraits;

Figure 9.

Note that we also provided type definitions for all the different buttons. We have now managed to encapsulate all information about the signals into trait classes. Classes which use these traits do not need to know the specifics of a signal type, or which function to call to actually read the signal. In addition, should a signal type change, the clients of the trait classes do not need to change at all, since they contain no hardcoded information about the signal itself; they deduce everything from the template parameters.

Now we are ready for our final step: the button classes. Since we realized that we have two types of traits, we can apply the same split to our actual button classes: a base button class which uses two signals, and an extended button class which adds a third signal. We came up with this for the base class:

template<class ButtonTraits>
class ButtonBase
{
public:
    void read()
    {
        _signalX = typename ButtonTraits::ReadSignalOne()();
        _signalY = typename ButtonTraits::ReadSignalTwo()();
    }

    void update()
    {
        std::cout << "Button " << ButtonTraits::BUTTON_ID << "\n"
            << "Signal X is : " << _signalX << "\n"
            << "Signal Y is : " << _signalY << "\n";
    }

protected:
    typename ButtonTraits::SignalOneType _signalX;
    typename ButtonTraits::SignalTwoType _signalY;
};

Figure 10.

Pretty straightforward. And now on to the extended class:

template<class ButtonTraits, class EnableSignalAddition = void>
class ExtendedButton : public ButtonBase<ButtonTraits>
{
public:
    void read()
    {
        ButtonBase<ButtonTraits>::read();
        _signal = typename ButtonTraits::ReadSignalThree()();
    }

    void update()
    {
        ButtonBase<ButtonTraits>::update();
        std::cout << "Signal Z output is : " << _signal << "\n";
    }

private:
    typename ButtonTraits::SignalThreeType _signal;
};

Figure 11.

As you may recall, our output function for signal Z was different for buttons with id 2 and 3. We will tackle this problem in a future blog entry.

The following key points are worth noting:

  • No virtual methods.
  • No hardcoded signal types: each type is deduced from the respective traits.
  • Update functions are specialized only for the types that need them, avoiding code generation where it is not needed.
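Putting the pieces together, a self-contained sketch of the final design might look like this (the networkBus stubs and their return values are made up; the original post does not show the assembled client code):

```cpp
#include <iostream>

// Stubbed bus functions -- the real networkBus is not shown in the post.
namespace networkBus
{
    inline unsigned char  getButtonOneSignalX() { return 1; }
    inline unsigned short getButtonOneSignalY() { return 2; }
    inline unsigned char  getButtonOneSignalZ() { return 3; }
}

template<class Signal, Signal(*FUNC)()>
struct SignalWrapper
{
    typedef Signal SignalType;
    inline Signal operator()() const { return FUNC(); }
};

typedef SignalWrapper<unsigned char,  &networkBus::getButtonOneSignalX> ButtonOneSignalX;
typedef SignalWrapper<unsigned short, &networkBus::getButtonOneSignalY> ButtonOneSignalY;
typedef SignalWrapper<unsigned char,  &networkBus::getButtonOneSignalZ> ButtonOneSignalZ;

template<class SignalOneReader, class SignalTwoReader, int ID>
struct CommonButtonTraits
{
    typedef SignalOneReader ReadSignalOne;
    typedef SignalTwoReader ReadSignalTwo;
    typedef typename SignalOneReader::SignalType SignalOneType;
    typedef typename SignalTwoReader::SignalType SignalTwoType;
    static const int BUTTON_ID = ID;
};

template<class S1, class S2, class S3, int ID>
struct ExtendedButtonTraits : public CommonButtonTraits<S1, S2, ID>
{
    typedef S3 ReadSignalThree;
    typedef typename S3::SignalType SignalThreeType;
};

typedef ExtendedButtonTraits<ButtonOneSignalX, ButtonOneSignalY, ButtonOneSignalZ, 1> ButtonOneTraits;

template<class ButtonTraits>
class ButtonBase
{
public:
    void read()
    {
        _signalX = typename ButtonTraits::ReadSignalOne()();
        _signalY = typename ButtonTraits::ReadSignalTwo()();
    }
    void update()
    {
        // print numerically; the signals are small integer types
        std::cout << "Button " << int(ButtonTraits::BUTTON_ID) << "\n"
                  << "Signal X is : " << int(_signalX) << "\n"
                  << "Signal Y is : " << int(_signalY) << "\n";
    }
protected:
    typename ButtonTraits::SignalOneType _signalX;
    typename ButtonTraits::SignalTwoType _signalY;
};

template<class ButtonTraits>
class ExtendedButton : public ButtonBase<ButtonTraits>
{
public:
    void read()
    {
        ButtonBase<ButtonTraits>::read();
        _signal = typename ButtonTraits::ReadSignalThree()();
    }
    void update()
    {
        ButtonBase<ButtonTraits>::update();
        std::cout << "Signal Z output is : " << int(_signal) << "\n";
    }
private:
    typename ButtonTraits::SignalThreeType _signal;
};
```

Note how a client simply instantiates `ExtendedButton<ButtonOneTraits>` and calls read() and update() directly: no id parameter, no switch, no vtable.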

Imagine that some signal X changes from bool to unsigned char. The classes in Figure 11 do not have to be modified at all; only the corresponding typedef in Figure 8 has to change, and we are ready to go! In the original implementation, the client code itself would have to change. We also saved CPU time by removing

  • if/else/switch statements
  • vtable lookups

Finally, think about how flexible and at the same time type-safe this design is compared with the original approaches. In the next blog we’ll talk about the enable_if idiom and code generation. So stay tuned.


Micro Patterns

| Nico

When I started working for E.S.R.Labs I was pretty new to this whole embedded C/C++ stuff, so I had, and still have, a lot to learn. Learning something is great: as you progress you understand more and more and grasp the intent and ideas of other programmers. Sometimes you even recognize a great idea in a piece of code which seemed broken in the first place.

Quite often I discovered that a piece of apparently ugly code or a pre-processor construct which seemed inherently broken at first sight was perfectly fine and even clever. Most of the time my first impression was: “oh, that’s damn bad code! That programmer didn’t have much of an idea what good code looks like!”
As I progressed and learned more about the details, the hidden advantages of some constructs were revealed. To be clear, not all of the obfuscated code constructs I discovered turned out to have hidden powers; some really were just bad code! But from others I could learn.

So I thought about a good way to share my knowledge about some of these tricky constructs. Just like any other programmer I tend to be lazy, so I searched for an existing concept for sharing such ideas. Long story short, I decided to use the basic concept of design patterns. Most software developers know the basic design patterns, or at least know how to interpret their descriptions. Therefore I will use a design-pattern-like description, which I’ll call micro patterns.

So I’ll present you the first micro pattern in a hopefully long series.

The idea behind the NAM-Pattern, by the way, can also be used for bitfields.

The NAM Pattern


Prerequisites:

A basic understanding of unions, structs and arrays.


NAM stands for Named Array Members, which gives a basic idea of what this pattern is about. It provides a name for each element of an array (or pointed-to buffer) during debugging, without losing the properties of the array, like looping and indexing in a for loop. To implement the NAM pattern, the array is wrapped in a union together with a struct. The struct contained in the union provides a name for each element of the array (see the example below): the first member name in the struct is mapped to the first array element, the second member name to the second element, and so forth.

When to use:

Ease debugging by providing names for array members.


Pros:

  • All members can be addressed in a for loop over the array – no code duplication for addressing each member.
  • Easier debugging: while debugging you have an associated name for each member of the array.


Cons:

  • Each time the array size is adjusted, the associated struct has to be updated.
  • The size of debug builds will increase because of the additional information that has to be stored for this construct.
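The first con can at least be caught at compile time: a static_assert (C++11; my addition, not part of the original pattern description) verifies that the debug struct and the array stay the same size:

```cpp
#include <stdint.h>

// Named-array-members union, as in the example below; the static_assert
// guards against the struct and the array drifting apart (C++11 only).
union LampDutiesType
{
    struct
    {
        uint8_t duty_left_head_light;
        uint8_t duty_right_head_light;
        uint8_t duty_left_brake_light;
        uint8_t duty_right_brake_light;
    } DbgNameResolutionStruct;
    uint8_t duty[4];
};

static_assert(sizeof(LampDutiesType::DbgNameResolutionStruct) ==
              sizeof(LampDutiesType::duty),
              "NAM struct and array must have the same size");
```

If someone grows `duty` to five entries without adding a fifth name, the build fails instead of the debugger silently showing stale names.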

Tested with:

  • gcc
  • diab
  • clang


union
{
    struct
    {
        uint8_t duty_left_head_light;
        uint8_t duty_right_head_light;
        uint8_t duty_left_brake_light;
        uint8_t duty_right_brake_light;
    } DbgNameResolutionStruct;
    uint8_t duty[4];
} LampDuties;

for (int index = 0; index < 4; ++index)
{
    // assign duties
    LampDuties.duty[index] = index + 10;
}


Breakpoint 2 at 0x100000f25: file main.cpp, line 26.
(gdb) run
Starting program: /Users/NiCoretti/Projects/nam_pattern/a.out 
Reading symbols for shared libraries +.............................. done
Breakpoint 2, main (argc=1, argv=0x7fff5fbff7c8) at main.cpp:26
(gdb) print LampDuties
$1 = {
  DbgNameResolutionStruct = {
    duty_left_head_light = 10 '\n',
    duty_right_head_light = 11 '\v',
    duty_left_brake_light = 12 '\f',
    duty_right_brake_light = 13 '\r'
  },
  duty = "\n\v\f\r"
}

Subverting Subversion or How to Set up a Git-allic Village

| Ralf Holly

We certainly live in privileged times. Gone are the days when we had only closed-source, proprietary software at our disposal. Today, thanks to free and open-source software, we have access to an abundance of high-quality tools that we can even tweak until they fit our personal needs.

Virtual machines give us access to our beloved development environment even on alien host operating systems. No matter if I have to work on a Windows platform, Cygwin makes it bearable and gives me that certain Linux feeling.

Read on


| Alexander | Comments 1

Have you ever worked on large projects with many compilation units? Were you satisfied with the build system?

We recently worked on a project with over 100 developers in three countries, and the build system was a pain. It was really slow, it was based on Eclipse, and it was not possible to build the workspace from the command line. Editing and merging the configuration files was a mess. We did some research and tried some tools available on the market, but all of them had one or more drawbacks, so we decided to develop our own tool, based on cxxproject, which is written in Ruby. Now we are working on the successor of this tool. It’s available under the name bake:

  • It’s an easy to learn C/C++ build system.
  • The syntax of the configuration files is short, human readable and easy to understand.
  • It is fast.
  • It builds on command line and can be smoothly integrated into IDEs. Plugins for Eclipse and Visual Studio are already available.
  • bake is from developers for developers.

Read on

C++ Knowledge from the Experts

| Oliver | Comments 4

For a long time C++ has been one of our main development languages. And it seems most of the automotive industry finally agrees that this is not a bad choice. A couple of years back the situation was quite different and we had to convince a lot of people that C++ code can indeed be as efficient as plain old C. But C++ is also changing quite a lot thanks to the new ISO standard C++11.

Read on

Join the Global Day of Coderetreat @ E.S.R.Labs

| Sebastian

Last year, over 1800 passionate software developers in 94 cities around the world spent a full day practicing the craft of software development using the coderetreat format. This year, on December 8th, it is time for another Global Day of Coderetreat! We are happy to announce that E.S.R. Labs will host a coderetreat here in Munich on December 8th! Feel free to participate if you are passionate about software development and keen to spend a fun day together with other programmers. The coderetreat will be facilitated by Sebastian Benz, an experienced facilitator, who also held last year’s coderetreat here in Munich.

Read on

Android Transporter for the Nexus 7 and the Raspberry Pi

| Daniel | Comments 175

The Android Transporter allows you to share the display contents of your Nexus 7 tablet wirelessly with other screens in real time. Now, the first tech demo of the Android Transporter is out!

The Android Transporter allows you to share display content wirelessly with remote screens in real time. Please be aware that the Transporter is still a technology study and is missing the maturity of a full-featured product. However, we think that the Android Transporter is already exciting enough to let you play around with it. We believe that with the recently released Miracast standard you will get a very similar technology in upcoming Android devices, and we are considering making the Transporter compliant with the Miracast specs. The Android Transporter is a custom ROM and not an app, since we had to make adjustments to various parts of the Android platform to make it happen. Be aware that you use the Android Transporter at your own risk and that you will void your tablet’s warranty by following the setup instructions below. Moreover, you are not allowed to bundle further Google apps like the Google Play Store with our firmware image. Read on for the quick start guide and some Android Transporter internals.

Read on