Frequently Asked Questions
What does Altreonic mean by "Trustworthy forever"?
The economic crisis has made one thing obvious: the future car will be electric. Besides the ecological benefits, it will also usher in a new era in mobility. Electric cars will be cheaper, faster and quieter. They will have all kinds of sensors, cameras and radars on all sides to actively prevent accidents. They will likely also communicate with each other and with roadside micro-basestations, improving traffic throughput, making driving safer and enabling cars to drive autonomously. Electronics and software will have made the drive-by-wire goal a reality. The engineers will have one major design goal: make it reliable and fault tolerant, as safety and liability go hand in hand.
Sitting in the car will be a family. Each member will have a next-generation personal "netputer". It not only communicates, it runs all kinds of applications on its internal 64-core multicore processor. Dynamic QoS for the applications is assured by a next-generation multicore operating system. No more blue screens, as the users depend on the correct functioning of their devices; the "netputers" will have become a safety-critical support system for their lifestyle.
Does this all sound like Star Trek? It shouldn't. The future is happening now and Altreonic is part of it. Based on work done previously at the Open License Society research institute, Altreonic delivers a formalised systems and software engineering methodology supported by tools. OpenCookbook is a web portal for a formalised requirements and specifications capturing process. It supports projects from early concept until the implementation is ready for release to production. OpenVE is a visual modelling and simulation environment to define the application. OpenComRTOS is the system software. It was developed using formal methods and delivers a transparent programming environment for network-centric embedded real-time systems, independent of whether it runs on a multicore chip or on a platform physically distributed over thousands of kilometres. The formal development, unique for this kind of software, gave it unmatched scalability and safety, but also a very small code size (typically between 5 and 10 KiBytes per node). It can even support heterogeneous systems at the click of a mouse, and fault tolerance is now a lot easier to achieve. When the code is running, OpenTracer allows you to visually profile the application and verify that it runs as specified.
What does "Trustworthy" mean?
Today we live in a world in which a book can last 100 years before acid eats away the paper, yet a DVD that can contain 1000 books might only last 10 years. A single bit error and a whole file can be lost. Now we fly and drive by wire. While 50 years ago a car could be repaired with a hammer and a rope, a car can now be stopped instantly because a cosmic particle burns a tiny hole in one of the chips. Or maybe the alternator had a hiccup, or you passed a badly grounded high-power line. Still, we put more and more chips with software everywhere, for the simple reason that these tiny devices allow us to do more with less energy and with more flexibility... when they work.
The context: Architecting Trustworthy Systems
While we are familiar with the fact that we live in a consumer society where our gadget devices last from a few months to at most a few years, electronics are now also increasingly being used to replace and enhance mechanical counterparts, not just in novel throw-away gadgets but also in our surrounding environment. The car we drive is not just a gadget but an essential mobility device. So are trains and airplanes. Everywhere we have networked sensors that feed data to embedded computers to make better use of energy and to enhance our safety and security. Our homes are becoming "smart". Yet what can we do when one of these embedded devices fails? What can we do when the battery is dead or mains power is lost?
There are other reasons as well to have concerns. These embedded devices are increasingly flexible because they run embedded software. Flexible software is complex and developing it is an error-prone process. But when an unanticipated error occurs, the result can be disastrous. Nevertheless, the increasing use of programmable embedded hardware is unstoppable. The more semiconductor technology advances, the more we can do with it at a lower cost. But we are confronted here with a triple challenge:
- The complexity of the software increases exponentially.
- Shrinking chip geometries are rapidly making chips inherently less reliable.
- Their dropping cost and growing flexibility accelerate their deployment everywhere.
Yet, they need energy to function and a single bit level error or fault can lead to the total loss of their functionality. The challenge in particular is that contrary to their mechanical counterparts, these devices fail catastrophically and repair is often not economical. Hence, we can summarise the challenge as follows:
- How can we make embedded devices as reliable as their mechanical predecessors?
- Having the property of graceful degradation/adaptation rather than failing instantly?
- Having the property of remaining functional when no energy is available?
- Having enough robustness margins to last for a long time?
There are other aspects as well. Even if a system is very reliable and very safe, it can fail to meet expectations for other reasons:
- Security: faults can be introduced maliciously. Software viruses and worms are no longer just major headaches for PCs; increasingly our embedded devices become victims as well (e.g. the Stuxnet worm, which targeted industrial control systems). If such intrusions are prevented by design, then the user can trust his system.
- Usability: this is the domain where the human side of things is most prominent. User interaction must be intuitive, fast and avoid confusion and misinterpretation; then the user can trust his system.
- Privacy: embedded devices, ranging from our credit cards to toll stations, increasingly hold our personal data. Personal data that we want to keep private for various reasons. When we know that our data is safeguarded, then we can trust the system.
Trustworthy everywhere might sound like a clever marketing slogan, but at Altreonic we make it happen. The key to it is combining long experience with a formalised approach. This allows us to make systems smaller, more reliable and thus better. If needed, formal techniques are used to mathematically prove the solution right. It is a formalised process that works because the human side has been taken into account. Unified semantics is hence one of the expressions you will often hear at Altreonic. The result is that safety engineering is now within reach of small and medium-sized companies.
What else can you expect from Altreonic? Besides being your partner, unified semantics also means that software and hardware are co-designed as a system. This is reflected in safety standards like IEC 61508, in which traceability and configuration management are keywords.
Why systems engineering?
Engineering is about providing solutions that work in an efficient way. It is also about developing these solutions to deliver maximum utility for a minimum of resources. And it is about looking at the solution as a system. A system means that all composing parts are considered to work harmoniously together. It also means looking at the whole lifecycle of the system. As such, systems engineering is more than a collection of skills and tools. It is holistic thinking in which the way things are done is as important as which things are done. It's about following a formalised process that still respects human creativity. More and more systems are safety-critical or must be dependable. Lives can be at stake. Formalisation is needed to build the right thing in the right way. Even when mathematics is used - as a tool, not as a goal - engineering is first of all teamwork. Engineering is done by humans working together to achieve what hasn't been done before.
Why software engineering?
Software is increasingly an important part of any system. It cannot really be separated from the system it is part of. Nevertheless, there are two main reasons to consider it as a separate domain. First of all, software allows the function of the system to be changed, even while the system is in use. Secondly, the state space of software programs is almost infinite. Software is complex. Hence, correctness of software comes from proving it, not just testing it. Software has no bugs; it can only have errors that were introduced during design and implementation. Software will therefore only work well when the whole system was developed to the best of systems engineering methodologies.
Why designing for trust?
If a system does what it is supposed to do, then a first goal is reached. But given the complexity of today's embedded systems, and given the fact that nothing is really perfect, how can we guarantee that the system will provide its functionality at all times? If it doesn't, people might even get hurt. Therefore it pays to develop for high reliability from the very beginning. When dependability is part of the design, it can actually make the system cheaper. By thinking up front, issues will be detected and corrected at an early stage, when making changes is still mostly a matter of writing them down. When issues must be corrected once the system is in production, however, the cost can be very high.
Why Formal Methods?
Engineering differs from (pure) art in the sense that engineers use logic and mathematics to make creative ideas actually work as intended. Bridges and buildings are calculated so that they will last, while the design can be elegant as well. Hence, formal methods (read: the use of tools and methods based on logic and mathematics) are to be used from the beginning to support the architectural process. Often, the architecture will be more elegant, simpler and better performing. Verification then becomes simple, because the architecture reflects an understanding of the problem domain. The European Commission is tinkering with a directive that would make software developers responsible if the software is proven to be the root cause of a system failure. While the last word hasn't been said on this (software is part of the system), it puts things in the right perspective.
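To make the idea of formal verification concrete, here is a minimal sketch - not one of Altreonic's tools; the model, its state encoding and the invariant are all illustrative - of what distinguishes proving from testing: exhaustively exploring every reachable state of a tiny two-task model and checking a safety property in each one, instead of sampling a handful of runs.

```python
from collections import deque

# Toy model (illustrative only): two tasks share a critical section
# guarded by a "turn" variable (strict alternation).
# State = (pc0, pc1, turn); pc: 0 = idle, 1 = waiting, 2 = in CS.

def step(pc, i, new_pc, turn):
    q = list(pc)
    q[i] = new_pc
    return (q[0], q[1], turn)

def successors(state):
    pc = [state[0], state[1]]
    turn = state[2]
    for i in (0, 1):
        if pc[i] == 0:                    # request entry
            yield step(pc, i, 1, turn)
        elif pc[i] == 1 and turn == i:    # enter when it is our turn
            yield step(pc, i, 2, turn)
        elif pc[i] == 2:                  # leave and hand over the turn
            yield step(pc, i, 0, 1 - i)

def verify(init):
    """Breadth-first search over ALL reachable states; assert the
    mutual-exclusion invariant in every one of them."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        assert not (s[0] == 2 and s[1] == 2), f"both tasks in CS: {s}"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

print(verify((0, 0, 0)))  # prints 12: the entire reachable state space
```

Twelve states are trivial to enumerate, but the same exhaustive principle - scaled up by model checkers and theorem provers - is what turns "we never saw it fail" into "it cannot fail in this model".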
What are the benefits?
Developing projects is often expensive and requires many resources. Research has found that when projects fail, the specifications were often wrong or the requirements incomplete or inconsistent. This is where a formalised approach helps, because it reduces such up-front errors. The benefit is often to be found when the product is almost ready. While less formalised approaches often result in high "debugging" costs that can even run away, formalised approaches cost less by finding the issues up front. The result is lower and predictable project costs, and hence lower maintenance, support and warranty costs. The major benefit is more dependability.
What makes OpenComRTOS different?
OpenComRTOS was developed from the start as part of a formalised systems engineering methodology, and it was itself developed using formal methods. As a result it is not just an RTOS, but a very efficient and very scalable programming approach to software engineering (especially for embedded real-time systems). It allows you to model the application, develop a simulator as a virtual prototype and then use the model for the real implementation. As it supports heterogeneous targets, the application can be deployed on almost any target system. Before OpenComRTOS, concurrent programming was hard. OpenComRTOS makes it natural.
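The flavour of this task-based, message-passing style can be sketched in plain Python. This is emphatically not the OpenComRTOS API - the task names are invented and a standard-library queue merely stands in for an intermediate communication entity between tasks:

```python
import threading
import queue

# A bounded FIFO decouples sender and receiver tasks. Because the tasks
# only ever talk to this intermediate entity, the same application model
# could run on one chip or, with a transport behind it, across nodes.
channel = queue.Queue(maxsize=4)

def sensor_task(samples):
    """Producer: pushes readings, then a stop marker."""
    for s in samples:
        channel.put(("sample", s))   # blocks when the channel is full
    channel.put(("stop", None))

def logger_task(out):
    """Consumer: processes readings until told to stop."""
    while True:
        kind, value = channel.get()
        if kind == "stop":
            break
        out.append(value * 2)        # stand-in for real processing

readings = [1, 2, 3]
log = []
t1 = threading.Thread(target=sensor_task, args=(readings,))
t2 = threading.Thread(target=logger_task, args=(log,))
t1.start(); t2.start()
t1.join(); t2.join()
print(log)  # [2, 4, 6]
```

Neither task knows about the other, only about the channel between them; that decoupling is what makes the concurrent structure feel natural rather than hard.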
What is meant by Unified Semantics?
Engineering is teamwork, which means communication. Hence it is important that everyone speaks the same language, so terms must be defined and agreed upon. The same applies to the tools and components ("IP") used to put the system together. It is not enough that the syntax is the same; the semantics must be the same as well. Two connectors can match physically but will not communicate if they speak different protocols. Hence the need for unified semantics. Avoiding misunderstandings up front saves a lot of effort.
What is meant by Interacting Entities?
Engineering a system is about translating goals and objectives into a concrete system. Requirements and specifications are fulfilled by concrete parts. A modular as well as scalable way to architect a system is by defining entities that fulfil the specifications. A special type of entity is an interaction entity. This may sound abstract, but think about it: almost any system can be modelled by identifying its composing entities and how they interact. Interacting entities are a universal way to describe the architecture of a system. This is also why a multi-tasking programming system is a key layer in implementing them with software and hardware blocks.
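An entities-plus-interactions view can be sketched in a few lines of Python. All names here are invented for illustration; the point is only that the whole system is nothing but named entities and the messages exchanged between them:

```python
# Illustrative sketch: a system modelled purely as entities plus the
# interactions (messages) between them.

class Entity:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler   # msg -> list of (target, payload)
        self.inbox = []

    def step(self, system):
        # Process pending messages; each may trigger new interactions.
        while self.inbox:
            for target, payload in self.handler(self.inbox.pop(0)):
                system.send(target, payload)

class System:
    def __init__(self):
        self.entities = {}

    def add(self, entity):
        self.entities[entity.name] = entity

    def send(self, target, payload):
        self.entities[target].inbox.append(payload)

    def run(self, rounds=3):
        for _ in range(rounds):
            for entity in list(self.entities.values()):
                entity.step(self)

# A sensor entity forwards each reading (with a +1 calibration offset)
# to a controller entity, which records a command.
commands = []
model = System()
model.add(Entity("sensor", lambda r: [("controller", r + 1)]))
model.add(Entity("controller",
                 lambda v: commands.append(("cmd", v)) or []))
model.send("sensor", 10)   # inject a raw reading into the system
model.run()
print(commands)  # [('cmd', 11)]
```

Swapping an entity's handler, or moving an entity to another node, changes nothing for the rest of the system, which is exactly the modularity and scalability the text describes.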
Attachment: Altreonic Profile Sep 09 (PDF, 2.41 MB)