Professor emeritus Charles Anthony Richard Hoare has passed away at the age of 92. Known to many as C. A. R. Hoare, or Tony Hoare, he was one of the great minds in the domain of software engineering. Altreonic's history, which has its roots in the late 1980s, is essentially based on Hoare's CSP process algebra. CSP stands for Communicating Sequential Processes. Here is the personal road I followed. Note that I am an engineer, not an academic. The engineering approach is to make something that works in practice while following the sound principles that academics have developed.
My initial exposure to software was while studying as an engineer in the domain of electronics and applied mechanics. We had an old IBM-360 and we learned to program in Algol-68 and Fortran. Later, as PCs became affordable, Pascal was added. Meanwhile, Tony Hoare had developed CSP, which was later adopted as the conceptual framework for a unique processor, the Transputer (ex-INMOS). At the time it was very different from the mainstream Intel processors. The transputer was a RISC-like CPU with unique support for software and hardware concurrency. It natively supported small processes and inter-processor communication. As it had a small register set, process context switching was fast and inter-processor communication had a low latency. For programming it, the unique occam language was developed. Occam was a direct implementation of a subset of CSP, with processes and communication channels.
Just before that time, I had envisioned the concept of a parallel computer (using optical communication). With hindsight, building a parallel computer is easy. What's hard is programming it in an efficient and scalable way. At that time the focus was on data-parallel programming: think of image processing, whereby a large image is split into N chunks and each processor executes the same code on its chunk, providing a theoretical speed-up by a factor N. In reality, "grain size" and "communication latency" determine the unavoidable overhead. My idea was to use the principle of a spreadsheet. References from one cell to another can be seen as micro-messages. This worked well and we had the traditional Mandelbrot and Laplace equation demos.
Around that time, INMOS put its transputer on the market. That was a great opportunity to put the ideas into practice. The development kit had 4 transputers and came with an occam compiler. I bought such an (expensive) kit and that was the start of a long story. First of all, although occam-1 was very simple and even had only 1 datatype (the byte), programming in occam had a steep learning curve before one got working parallel programs. The compiler was unforgiving, but if you got a program to compile, it would often run flawlessly. The other issue was avoiding deadlocks. After a month of trial and error I attended Professor Peter Welch's occam course, and on the second day the mental AHA switch happened. I watched my brain translating the simple exercise specification into "IF-THEN-ELSE" statements and realised that this was wrong. I also realised that this was the consequence of being trained in sequential programming languages, whereby one tries to keep track of the whole state-space. The switch to concurrent thinking is to look for the actions in the specification and then to see how they are related. Actions become local processes and relationships become communication channels. Once that mental switch is made, there is no way back. It is the natural way to analyse problems.
I then learned that this is the essence of Hoare's CSP. I summarise it here:
- No Shared State: Processes in CSP do not share memory; they communicate exclusively through explicit, synchronised channels.
- Synchronous Communication: A process sending a message (channel writer) must wait until another process receives the message (channel reader), providing built-in synchronisation.
- Process Algebra: CSP provides mathematical methods to describe and verify the behaviour of complex systems composed of simpler, smaller processes.
- Formalism: CSP uses mathematical modelling to prevent concurrency errors such as deadlocks, livelocks, and infinite overtaking.
Most importantly, Hoare was one of the pioneers who aimed for Correctness by Construction: the goal of CSP is to enable the design of (concurrent) systems that are guaranteed to be correct through formal mathematical proof rather than through testing alone.
While I consider occam an almost obligatory passage for learning concurrency (and how to control it), occam and the transputer were still rather academic ventures. Occam had rather elementary semantics and the transputer did round-robin scheduling with 2 levels of priority. Real-world (embedded) programming requires flexible programming languages and real-time, priority-based scheduling. Hence, shortly after the C compiler was introduced we developed a small RTOS with priority-based scheduling on the transputer (2001). Moreover, we introduced the concept of "Virtual Single Processor" programming by allowing the RTOS services to be called anywhere in a network. The channels inherited the semantics of the RTOS objects (like semaphores, queues and mailboxes), but network-wide routing was also part of the RTOS kernel. Essentially, the RTOS was a scheduler on top of a packet-switching network. A major change was that the semantics were made "distributed": no more passing of pointers, but passing by value, exactly as CSP mandates. This became the Virtuoso RTOS, which was later ported to other targets like the parallel DSPs of Texas Instruments (C40, C6xx) and Analog Devices (SHARC), but also microcontrollers, PowerPC and some exotic DSPs. On these targets the communication layer was a lot more complex to implement than on the transputer.
In 2001, Wind River Systems acquired the IP but never managed to turn it into a successful product. I blame it on the shared-memory syndrome. In the end it was open-sourced and now lives on (single-processor) with a more POSIX-like API under the name of Zephyr.
After this event, I increasingly started thinking about how one could develop correct-by-design software. Even though the Virtuoso RTOS was subjected to rather demanding Monte-Carlo-type stress tests before release (DSPs in particular have a lot of concurrency in the hardware), sometimes an issue would still pop up years later, for the simple reason that a user programmed his application in an unexpected way. This is one of the reasons why we kept the service names very readable, expressing what they did. One example we encountered was a memcpy whereby the data could be overwritten if the originator proceeded too fast. The memcpy was implemented with DMA. DMA engines operate independently from the CPU and need some time to copy the data. Hence the service became memcpy_W, a blocking call that waited until the DMA had copied the last byte. On a sequential processor this is not an issue because the copying is done by the CPU in a sequential loop. As a result, all services were developed in a blocking (_W), a non-blocking (_NW) and a blocking-with-time-out (_WT) version.
Out of these experiences, we started to look at developing a new Virtuoso from scratch, but this time using formal techniques. This was completely new for us and it was not so easy to figure out if this was the path to follow. Formal techniques were (and still are) very much a research topic, only applied in very specific cases. And being very mathematical in nature, formal techniques are often applied only by PhD-level experts. I remember a lecture by Professor Patrick Cousot in which he showed how abstract interpretation was able to find an overflow in a loop that could happen after 300 hours in formally proven flight software. I was flabbergasted, and this is one of the experiences that convinced me to try the formal path.
But how? And with what tools? The formal community seemed to have two approaches. The first is the abstract interpretation path, whereby axioms are constructed and applied to a piece of software. The engineer then has to develop a proof that the software has no residual errors. The other approach is formal modelling, whereby a formal model is developed and a model checker verifies that the model cannot reach an illegal state. The abstract interpretation road didn't seem to be the one to take. Not only were exceptional skills required, it also assumes the existence of software to be checked. Formal modelling seemed to be the best approach, as our goal was to redevelop the RTOS from scratch. The guiding lines came from our experience: a CSP-derived program model as implemented in the Virtuoso RTOS, distributed semantics, priority-based system-wide scheduling and correctness by design.
Two options remained: Spin and Leslie Lamport's TLA+. Professor Boute was our coach and TLA+ was selected. To make a long story short (the project took 3 years, but was not a full-time activity), while there was a steep learning curve, we ended up doing things a bit differently. TLA+ was used to model and verify the architecture and the semantics, not the source code. That would have been a mistake, as software sources evolve. The result was an RTOS kernel that implemented the concept of "Interacting Entities". The entities were mainly "Tasks" (processes in CSP) and the interactions were the kernel services connecting the Tasks through intermediate so-called "Hubs" (channels in CSP), but with much richer semantics, including asynchronous communication. The hubs and the packet switching also played a crucial role in the new architecture. The benefits were scalability, efficiency and maintainability. Unexpectedly, the code size shrank considerably, by about a factor of 10. The whole kernel now fitted in about 10 KB (this is very much processor dependent). The project was called "OpenComRTOS" and today lives on under the name VirtuosoNext. The current implementation supports fine-grain space partitioning and even allows recovering in microseconds from faults like overflow exceptions. The project was documented in a book (Formal Development of a Network-Centric RTOS, published by Springer).
The project also opened a road towards trustworthy system design. It enhanced the capability to think in abstract terms about systems. In a small electric vehicle, the "Interacting Entities" concept was implemented in the drive-by-wire software of the 4 independent wheels. In the GoedelWorks project we sought to achieve "correct by construction" for systems, using meta-modelling of the project entities but also of the development processes followed. A key element was the dependency graph that allowed keeping track of the impact of changes. Finally, with ARRL (Assured Reliability and Resilience Level) we introduced the concept that the ultimate goal of trustworthy systems engineering is not only to be fault-tolerant, secure or safe, but to survive: to keep functioning as intended whatever happens to it.
There are multiple lessons that can be drawn from this long road. First of all, formal approaches are a great way to rethink known solutions. Why? Because they are a tool that helps the mind to think in a more abstract way. Known solutions are what prevent progress, as they define how people look at things. Formal approaches also help to think out of the box and to question established practices. Of course, formal techniques are hard to use (certainly in the beginning) and there is the scalability limit. The way to overcome that is to move to the meta-levels, abstracting away the details. Formalisation is the key. To illustrate this with a small example: in the RTOS a priority-sorted linked list is used everywhere to implement the waiting semantics. The first TLA+ models (influenced by the existing source code) also modelled the linked list. The result was that the model checker could not handle the state-space explosion (one needs a lot of memory for that). Then it dawned on us that for the semantics of the services the priority didn't matter, and we simply replaced the list with a boolean. Simple but smart. A complex solution is often a problem not well understood. But making it simple can take a serious effort, even if with hindsight it was trivial. With formalisation and meta-modelling it is easier to think about the core and not the details, and solutions are found.
There are many people who were involved in this roadmap. Those were great times, working with them as a team. I never programmed anything myself; I never even mastered the C language. This was an advantage, because it helped to think about behaviour, not about the implementation. VirtuosoNext and its visual application builder live on under an Open Technology License model. The interested reader can browse www.altreonic.com and the internet for papers and presentations.
Eric Verhulst, 16th March 2026
