Some thoughts on concurrency in computing. First, an attempt at some definitions:
- Task: a sequential execution context, often called a “process” or a “thread”
- Process: a task that does not implicitly share resources (i.e. memory) with other processes
- Thread: a task that does implicitly share resources with other threads
- Preëmptive multitasking: the scheduler may interrupt a task at any time in order to let another task run
- Coöperative multitasking: a running task must explicitly give the scheduler a chance to let another task run
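To make the last two definitions concrete, here is a minimal sketch using Python's asyncio as a stand-in for a coöperative scheduler (the task names are illustrative, not from any particular system). Under coöperative scheduling, the running task only gives up the CPU at an explicit yield point, so a task that never yields starves the others:

```python
# A minimal sketch of coöperative multitasking using Python's asyncio.
# Each `await` is an explicit yield point: the scheduler can only switch
# tasks when the running task awaits.
import asyncio

async def polite_task(name: str) -> None:
    for i in range(3):
        print(f"{name}: step {i}")
        await asyncio.sleep(0)  # explicitly give the scheduler a chance to run others

async def greedy_task() -> None:
    # A long computation with no await: nothing else can run until it finishes,
    # because coöperative scheduling never interrupts a running task.
    total = sum(range(10_000_000))
    print(f"greedy: done ({total})")

async def main() -> None:
    await asyncio.gather(polite_task("A"), polite_task("B"), greedy_task())

asyncio.run(main())
```

A preëmptive scheduler (the OS scheduling processes, say) would happily interrupt the greedy task; the coöperative one cannot.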
The ideal, IMHO, is to have preëmptively scheduled processes but coöperatively scheduled threads within the processes. This is because preëmption means you have to be (extra) careful about sharing resources, so sharing resources should be explicit. Or—to put it another way—implicitly sharing resources should mean using coöperative scheduling.
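As a rough sketch of that arrangement, assuming Python's multiprocessing and asyncio as stand-ins for OS processes and in-process tasks: the OS preëmptively schedules the worker process, the tasks inside it are coöperatively scheduled, and the only sharing between processes is an explicit queue.

```python
# A sketch of the "preëmptively scheduled processes, coöperatively scheduled
# tasks within them" model. The names here are illustrative.
import asyncio
import multiprocessing as mp

async def handle(job: int) -> int:
    await asyncio.sleep(0)      # coöperative yield point within the process
    return job * job

async def run_jobs(jobs):
    return await asyncio.gather(*(handle(j) for j in jobs))

def worker(inbox, outbox):
    jobs = inbox.get()                       # sharing between processes is explicit
    outbox.put(asyncio.run(run_jobs(jobs)))  # coöperative scheduling inside the process

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox))
    p.start()                    # the OS schedules this process preëmptively
    inbox.put([1, 2, 3])
    print(outbox.get())          # [1, 4, 9]
    p.join()
```

Nothing is shared implicitly between the parent and the worker; everything that crosses the process boundary goes through the queues.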
I think this also means that the system should provide good support for bundling a group of processes together and treating them as a single application. Too often, I think we use threads only because we tend to think of an application as being a single process.
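As an illustration of what that bundling might look like, here is a hypothetical ProcessGroup helper (not a real library) that starts several worker processes together and tears them all down together, so the group behaves like a single application:

```python
# A rough sketch of treating a group of processes as one application.
import multiprocessing as mp
import time

def worker(name: str) -> None:
    while True:
        print(f"{name} doing work")
        time.sleep(1)

class ProcessGroup:
    """Start and stop a bundle of processes as a single unit."""
    def __init__(self, targets):
        self.procs = [mp.Process(target=fn, args=args) for fn, args in targets]

    def start(self) -> None:
        for p in self.procs:
            p.start()

    def stop(self) -> None:
        for p in self.procs:
            p.terminate()   # the whole application goes down as one
        for p in self.procs:
            p.join()

if __name__ == "__main__":
    app = ProcessGroup([(worker, ("net",)), (worker, ("ui",)), (worker, ("db",))])
    app.start()
    time.sleep(3)
    app.stop()
```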
(There are ways to make coöperative multitasking look more like preëmptive multitasking to the programmer.)
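For example, one such trick (sketched below; this is not a standard API) is to hide the explicit yield points inside a helper, so ordinary-looking loops still give the scheduler regular chances to switch tasks:

```python
# Making coöperative scheduling feel more preëmptive by hiding the yields.
import asyncio

async def preemptible(iterable, every: int = 1000):
    """Yield items from `iterable`, ceding control to the event loop
    every `every` items so other tasks can run."""
    for i, item in enumerate(iterable):
        if i % every == 0:
            await asyncio.sleep(0)   # the hidden yield point
        yield item

async def crunch() -> int:
    total = 0
    async for n in preemptible(range(1_000_000)):   # looks like a plain loop
        total += n
    return total

async def heartbeat() -> None:
    for _ in range(5):
        print("still responsive")
        await asyncio.sleep(0.01)

async def main() -> None:
    print(await asyncio.gather(crunch(), heartbeat()))

asyncio.run(main())
```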
Some people will, of course, point to some higher-level abstraction, like that used by Erlang, as the holy grail of concurrency. Erlang's approach is good and widely used, but it doesn't replace the lower-level aspects of concurrency upon which it is built.