Text Box: Code Size Matters
If processors are getting too fast for the memory, how can we avoid the performance bottleneck?
Algorithms are often concurrent by nature, yet they execute on sequential processors. It therefore helps to decompose the software into smaller concurrent units. Fewer instructions mean fewer memory accesses and fewer processing cycles. The result is less code giving more performance.
As a side effect, the application can then be distributed over multiple processing cores. Because each core can run slower than a single processor, there is less mismatch with the memory speed and less power is required.
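A minimal sketch of what such a decomposition can look like, using generic POSIX threads rather than OpenComRTOS code: a monolithic processing loop is split into two small concurrent units connected by a bounded buffer. The names (stage1, stage2, RING) and the trivial computations are illustrative assumptions only.

```c
/*
 * Illustrative sketch, not OpenComRTOS code: a monolithic loop split into
 * two small concurrent units connected by a bounded buffer. Each unit holds
 * fewer instructions than the original loop and can run on a slower core.
 */
#include <pthread.h>
#include <stdio.h>

#define N    16            /* samples processed in this demo       */
#define RING  4            /* small buffer between the two stages  */

typedef int sample_t;

static sample_t ring[RING];
static int head, tail, count;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Stage 1: acquisition/pre-processing, kept deliberately small. */
static void *stage1(void *arg)
{
    (void)arg;
    for (int i = 0; i < N; i++) {
        sample_t s = i * 2;                 /* stand-in for a sensor read */
        pthread_mutex_lock(&lock);
        while (count == RING)
            pthread_cond_wait(&not_full, &lock);
        ring[head] = s;
        head = (head + 1) % RING;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Stage 2: the processing-intensive part, also a small unit of its own. */
static void *stage2(void *arg)
{
    (void)arg;
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        sample_t s = ring[tail];
        tail = (tail + 1) % RING;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("processed %d\n", s + 1);    /* stand-in for the real algorithm */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Each unit now contains only a fraction of the original instructions, so each can run on a slower core while the buffer decouples their speeds.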

StarFish© Very High Speed Processing

Embedded systems often have to process real-time data coming from the environment. The amount of data can be massive, either by its nature or because a large number of channels are sampled. Extracting meaningful information (e.g. object recognition) also requires very complex and processing-intensive algorithms, often necessitating the use of parallel processing hardware.

This is the domain of embedded supercomputing. This domain is often even more constrained by power and size restrictions because the embedded computer is placed in a difficult environment.

OpenComRTOS was designed with such boundary conditions in mind.

Text Box: Parallel Software
If processors are getting too fast for the memory, how can we avoid the performance bottleneck?
Software is essentially the modelling of systems. Most systems are composed of concurrent sub-systems that interact. Hence, concurrent software is more natural than the large sequential programs we find today.
In GoedelWorks the user maps his specifications to separate entities. Mapping them to the concurrent tasks of OpenComRTOS Designer is therefore natural.
The code is easier to maintain and easier to parallelise, hence providing more performance, as the sketch below illustrates.
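A sketch of this mapping under stated assumptions, written in plain C with POSIX threads and a pipe rather than the actual OpenComRTOS Designer API: each specified sub-system becomes its own task, and their interaction becomes an explicit message exchange. The task names sensor_task and control_task are hypothetical.

```c
/*
 * Illustrative sketch only, not the OpenComRTOS Designer API: each specified
 * sub-system is mapped to a task, and their interaction is an explicit
 * message exchange over a channel (here a POSIX pipe).
 */
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

static int channel[2];   /* channel[0]: read end, channel[1]: write end */

/* Sub-system 1 of the specification: produces a measurement. */
static void *sensor_task(void *arg)
{
    (void)arg;
    int measurement = 42;                     /* stand-in for an acquired value */
    if (write(channel[1], &measurement, sizeof measurement) < 0)
        perror("write");
    return NULL;
}

/* Sub-system 2 of the specification: consumes the measurement and reacts. */
static void *control_task(void *arg)
{
    (void)arg;
    int measurement;
    if (read(channel[0], &measurement, sizeof measurement) < 0)
        perror("read");
    else
        printf("control action for measurement %d\n", measurement);
    return NULL;
}

int main(void)
{
    pthread_t sensor, control;
    if (pipe(channel) != 0) {
        perror("pipe");
        return 1;
    }
    pthread_create(&sensor,  NULL, sensor_task,  NULL);
    pthread_create(&control, NULL, control_task, NULL);
    pthread_join(sensor, NULL);
    pthread_join(control, NULL);
    return 0;
}
```

Because the interaction point is explicit in the design, the individual tasks stay small, are easy to maintain, and remain free to be placed on different processing cores.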