Composition vs. Inheritance
Ok, so we’ve seen that concurrency and inheritance don’t really mix that well. In fact, they don’t really mix at all. The GoF tell us to prefer composition to inheritance, but I’ll take the more extreme position and ask: “Can inheritance solve any problems that composition cannot?”
I’m suggesting that we probably want to give up inheritance when we make the move to lightweight independent processes. Inheritance (and practically all of its attributes) can be simulated using process composition. Two aspects are worth looking at separately:
- Interface Inheritance
- Type Inheritance
Supposing that we wish to flesh out a concrete implementation of an abstract interface, we have only to code up a process that responds to the correct messages. Unfortunately, if we forgo inheritance we also lose strict compiler checking, and are forced to verify the completeness of the implementation with debuggers or unit tests. We can still implement partial functionality, or build a whole chain of abstract interfaces this way, but the loss of compiler checking has important ramifications and drastically affects program architecture. The generic methodology is to have the subprocess intercept all messages and pass only the ones it can’t deal with on to the parent. Things get more complicated when multiple inheritance enters the picture; Python, with its duck typing and its method resolution order, is an interesting precedent here.
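To make the forwarding idea concrete, here is a minimal Erlang sketch. The module and message names (`circle`, `set_radius`, `area`) are my own inventions for illustration: a process implements the messages it knows about and delegates everything else to its parent process.

```erlang
-module(circle).
-export([start/1]).

%% Spawn a "subclass" process whose unhandled messages fall back to Parent.
start(Parent) ->
    spawn(fun() -> loop(Parent, 1.0) end).

loop(Parent, Radius) ->
    receive
        {set_radius, R} ->
            loop(Parent, R);
        {area, From} ->
            From ! {area, math:pi() * Radius * Radius},
            loop(Parent, Radius);
        Other ->
            %% Not our concern: delegate to the parent, much as a subclass
            %% falls back on inherited behaviour.
            Parent ! Other,
            loop(Parent, Radius)
    end.
```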
Using the Liskov Substitution Principle as our model, we wish to be able to substitute a process (or an entire process network) and have it behave according to the rules for a particular type. Normally this is done through the language’s type system in a rather simplistic manner (by looking the types up in a derivation tree). Type inheritance implies interface inheritance (otherwise we might try to call a method that doesn’t exist, or send a message that can’t be received). Completeness of implementation can be achieved either by forcing the compiler to type-check the messages, or by invoking an arbitrary rule, such as simply discarding messages that are not understood (with or without a warning).
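As a sketch of the “discard what you don’t understand” rule (again with invented names), a process can simply put a catch-all clause at the end of its receive:

```erlang
-module(lenient_stack).
-export([start/0]).

start() ->
    spawn(fun() -> loop([]) end).

loop(Stack) ->
    receive
        {push, X} ->
            loop([X | Stack]);
        {pop, From} ->
            case Stack of
                [Top | Rest] -> From ! {ok, Top}, loop(Rest);
                []           -> From ! empty,    loop(Stack)
            end;
        Unknown ->
            %% The arbitrary rule in action: unknown messages are dropped,
            %% with a warning instead of a compile-time type error.
            io:format("warning: ~p not understood, discarding~n", [Unknown]),
            loop(Stack)
    end.
```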
We see that these are tightly coupled concepts. For generality, we’d like any process that understands all the messages a particular type can receive to be allowed to masquerade as an instance of that type, even though it might not be derived as such in the formal type system. Furthermore, we’d like the compiler to inform us of this possibility during the coding stage. But the most useful thing about a Concurrency Oriented Language is the ability to change the interconnection of processes at runtime, and that weakens the introspective abilities of the compiler: it can no longer report the full list of messages a process accepts, because that is only settled much later, at runtime. In fact, we might have to give up the entire notion of type if a process can be sent as a message to another process (and who wouldn’t want to do that?), because if we can’t type-check our processes, and they can be used as messages, then we can’t type-check our messages either.
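To see why processes-as-messages undermine static checking, consider this hypothetical dispatcher: the pid it receives carries no statically visible “type”, so whether it understands `{push, _}` is only discovered when we send to it.

```erlang
-module(dispatcher).
-export([loop/0]).

%% A process that receives *other processes* as messages and then talks to
%% them. Nothing in {please_use, Pid} tells us, statically, what Pid
%% understands; we only find out at runtime, when we send to it.
loop() ->
    receive
        {please_use, Pid} ->
            Pid ! {push, 42},   % hope Pid speaks the stack protocol
            loop();
        stop ->
            ok
    end.
```

Usage might look like `D = spawn(fun dispatcher:loop/0), D ! {please_use, lenient_stack:start()}.` — and nothing stops us from handing it a pid that speaks a completely different protocol.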
So, ultimately, what we want from the compiler is provable correctness, and what we want from the language is an easy means of code reuse. We can always implement the calling and precedence rules ourselves when conjuring up a network of processes, including delegation for multiple inheritance, but that is a pain and subject to programmer error.
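Here, for instance, is what hand-rolled delegation across multiple parents might look like, assuming (my invention) that every parent answers a synchronous `{call, From, Request}` with either `{ok, Reply}` or `not_understood`:

```erlang
-module(multi_delegate).
-export([dispatch/2]).

%% Try each parent in precedence order until one accepts the request.
%% This is a method-resolution order written out by hand -- and every
%% process network that wants multiple inheritance must repeat it.
dispatch([], _Request) ->
    not_understood;
dispatch([Parent | Rest], Request) ->
    Parent ! {call, self(), Request},
    receive
        {Parent, {ok, Reply}}    -> {ok, Reply};
        {Parent, not_understood} -> dispatch(Rest, Request)
    after 1000 ->
        %% A silent or dead parent counts as "doesn't understand".
        dispatch(Rest, Request)
    end.
```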
I think our best bet is to provide run-time tools that perform this kind of introspection, and Erlang has been quite good in this respect (there’s a small sketch of what I mean just after the questions below). The real questions now become:
- What are the common process structures (as opposed to data structures)?
- Can we reasonably extend the type checking to include directed graphs with loops?
- What are the new design patterns for processes, and how do they differ from object design patterns?
- How do we debug a concurrent system?
- Do we have to implement the inheritance explicitly ourselves every time, or are there linguistic constructs that will help?
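On the runtime-tools point above: Erlang already exposes a fair amount of this through `erlang:process_info/2` and friends. A tiny sketch of the sort of thing I mean (the `describe/1` helper itself is mine):

```erlang
-module(inspect).
-export([describe/1]).

%% Ask the runtime what the compiler can no longer tell us: is the process
%% alive, what is it currently executing, and how far behind is its mailbox?
describe(Pid) ->
    {erlang:is_process_alive(Pid),
     erlang:process_info(Pid, current_function),
     erlang:process_info(Pid, message_queue_len)}.
```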
Next time I’ll turn to a thought I had about type systems that follow arbitrary directed graphs. I mean, not everything in the world is best modeled as a hierarchical tree.