OPENCOSS (Open Platform for EvolutioNary Certification of Safety-critical Systems) is a recently approved R&D project. The project's 17 partners aim at a common certification framework that spans different vertical markets for the railway, avionics and automotive industries, and at establishing a common safety certification infrastructure. The strategy is to focus on a compositional and evolutionary certification approach with the capability to reuse safety arguments, safety evidence, and contextual information about system components, in a way that makes certification more cost-effective, precise, and scalable. OPENCOSS will define a common certification language by unifying the requirements and terminology of different industries and building a common approach to certification activities.
OPENCOSS aims at developing a tool infrastructure for managing certification information and performing safety assurance activities. Within this infrastructure, systematic and auditable processes will be developed to reduce uncertainty and (re)certification costs. To have long-lasting industrial impact, the project will pursue standardisation of the conceptual framework and the tool infrastructure resulting from the project.
More information will be made available at a later stage.
While published by a scientific publisher, this book is not a purely scientific one. Rather, it shows how the state of the art in science can be applied to a real industrial development with great benefit. It documents (incompletely but sufficiently) the journey of the OpenComRTOS project. This project started out with the goal of seeing how we could apply formal methods to embedded software development. And because we had a background in distributed Real-Time Operating Systems, we decided to use a from-scratch RTOS design as the target. Not a trivial one, as it covers concurrency, protocols, local as well as distributed state machines, and boundary conditions of efficiency, hard real-time capability, scalability and other non-functional requirements. An RTOS is, however, a suitable and rewarding target, as it is the key layer between hardware and application software.
Altreonic is now announcing a port of OpenComRTOS to the high-performance C66xx DSPs of Texas Instruments, integrated into the OpenComRTOS Designer environment.
The full OpenComRTOS fits comfortably in L1 cache
A full kernel with all services requires only between 5.1 and 7.7 KBytes of program memory, depending on the compile-time options and services used. This was measured by compiling a minimal application for a C6670 target with program placement in L2 SRAM and comparing the results using a map-file analyser. Nevertheless, this is still a complete priority-based, preemptive-scheduling RTOS with support for distributed priority inheritance. Besides task scheduling, the services provided are: events, semaphores, resources, port hubs, FIFOs, and packet and memory pools, in blocking, non-blocking, blocking-with-timeout and asynchronous semantics. OpenComRTOS transparently supports single-processor as well as large multiprocessor systems.
OpenComRTOS is the most efficient and easy-to-use solution for high performance embedded parallel computing. Porting to the TI DSP has been swift and efficient.
Altreonic is now announcing a port of OpenComRTOS to the high-performance PowerPC processors of Freescale, integrated into the OpenComRTOS Designer environment.
A full kernel with all services requires only between 7.1 and 9.8 KBytes of program memory and less than 6 KBytes of data memory, depending on the compile-time options and services used. This was measured by compiling a minimal application for an e600 target with Altivec support and comparing the results using a map-file analyser. Nevertheless, this is still a complete priority-based, preemptive-scheduling RTOS with support for distributed priority inheritance.
On 5 July 2011, the trade media Elektronik and Design&Elektronik are holding the major conference on ARM system development.

The more the »ARM Cortex« architecture conquers the market, the more important detailed Cortex expertise becomes for developers. The major conference on ARM system development offers the opportunity to become familiar, quickly and efficiently, with the »ARM Cortex« architecture and its ecosystem of devices and software tools. The programme covers all the important aspects and addresses newcomers and those switching architectures as well as experienced Cortex users. Distinguished speakers with extensive experience with ARM devices present ready-to-apply know-how.
Dr. Bernhard Sputh, Altreonic
"Open" is one of those words that is used a lot. It came into being as a reaction to the closed software offered by many software vendors. Since then, Open and Free have become intermingled, although there is no lack of variants of open-source licensing schemes. So why did we create another "Open Licensing" scheme?
DSP Valley has once again participated with a delegation of its members (Altreonic, Ansem, Byte Paradigm, NXP, Target Compilers) on the FIT (Flanders Investment & Trade organization) booth at the ESEC show in Tokyo. Given the dramatic earthquake and tsunami two months earlier, show attendance was somewhat lower than the previous year, but that was barely visible in the constant flow of visitors and did not result in fewer contacts.
Altreonic is engaged in a project thinking about (very) long-life electronic devices. Practically speaking, long life means either that the devices last a lifetime, or that a device is used in environmental conditions subjecting it to accelerated ageing, often due to aggressive stress (vibration, chemical, radiation).
The project aims at researching novel approaches to develop resilient embedded programmable semiconductor devices with a very long lifetime, set at a symbolic goal of 100 years, even if some domains like bio-medical already have requirements spanning a lifetime of 70 to 80 years. A second domain is made up of applications where the devices are subjected to higher than normal stress, resulting in a higher probability of failure. A third domain is focused on developing consumer devices that have a much longer useful life than is the case today.
This requires a holistic systems engineering approach, covering multiple domains such as system architecture, mechanical design, electro-chemical behaviour, software architecture, semiconductor technology and architecture, energy harvesting, and non-functional aspects like long-term reuse and maintainability. Application domains are diverse: infrastructure, transport systems, energy grids, bio-medical, consumer devices, aerospace and many others. The main goal is to make a significant step towards developing resilient, high-reliability devices in a cost-efficient way. We expect benefits that apply equally well to present-day engineering and development of high-reliability systems, especially as the shrinking of semiconductor elements is gradually eroding the robustness margins, so that even at shorter lifetimes reliability becomes an issue.
The project will benefit from analysing typical use cases and applications where this is the case. If you are interested in being part of the project or its user group, please contact us at long.live (@) altreonic.com
A recent article in EE-Times Europe states that computing has hit a power wall. Indeed, chip designers spoiled programmers in the past with an ever increasing amount of compute cycles and memory space to waste. This has led to great new features, which we would all like to keep; however, the way we program these hardware monsters has not really changed. Yes, compilers have become better at optimising code, but everything after that has stayed the same. The linking phase of C/C++ programs is still largely a brute-force operation, including everything the program might need and very often code that will never be executed. This leads to enormously bloated programs that have to be a) stored in non-volatile storage and b) loaded into the RAM of the system that executes them. A simple "Hello World" might need a few Mbytes and links in 10000's of functions.

While this is less of an issue in desktop-type systems, which have ample cheap (D)RAM available and reasonably sized caches, the same is not true for embedded systems, which represent the ever growing bulk of computer-driven systems on the planet. Needing a lot of (D)RAM costs not only money but also energy, because (D)RAM needs to be continuously refreshed and often operates with 100's of wait states compared with the superfast GHz CPUs. Thus this becomes part of the power wall we are currently hitting. And to follow Moore's law, the only way forward is more parallel processing cores on the same die, even if that doesn't increase the access speed to the external (D)RAM. In the end, chips are pin-bound. Performance on such chips is cache-bound, and therefore code size still matters.