The RC 4000 Multiprogramming System (also termed Monitor or RC 4000 depending on reference) is a discontinued operating system developed for the RC 4000 minicomputer in 1969. For clarity, this article mostly uses the term Monitor.
Overview
The RC 4000 Multiprogramming System is historically notable for being the first attempt to break down an operating system into a group of interacting programs communicating via a message passing kernel. RC 4000 was not widely used, but was highly influential, sparking the microkernel concept that dominated operating system research through the 1970s and 1980s.
Monitor was created largely by one programmer, Per Brinch Hansen, who worked at Regnecentralen, where the RC 4000 was being designed. Leif Svalgaard participated in implementing and testing Monitor. Brinch Hansen found that no existing operating system was suited to the new machine, and was tired of having to adapt existing systems. He felt that a better solution was to build an underlying kernel, which he referred to as the ''nucleus'', that could be used to build up an operating system from interacting programs.
Unix, for instance, uses small interacting programs for many tasks, transferring data through a system called ''pipelines'' or ''pipes''. However, a large amount of fundamental code is integrated into the kernel, notably things like file systems and program control. Monitor would relocate such code as well, making almost the entire system a set of interacting programs and reducing the kernel (nucleus) to a communications and support system only.
Monitor used a pipe-like system of shared memory as the basis of its inter-process communication (IPC). Data to be sent from one process to another was copied into an empty memory data buffer and, when the receiving program was ready, copied back out again. The buffer was then returned to the pool. Programs had a very simple application programming interface (API) for passing data, using an asynchronous set of four methods. Client applications sent data with ''send message'' and could optionally block using ''wait answer''. Servers used a mirroring set of calls, ''wait message'' and ''send answer''. Note that messages had an implicit "return path" for every message sent, making the semantics more like a remote procedure call than Mach's completely input/output (I/O) based system.
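To make the shape of this interface concrete, the following C sketch models the four primitives around a fixed pool of message buffers. The function names, the buffer layout, and the fixed message length are illustrative assumptions rather than the original RC 4000 interface, and in the real nucleus the two ''wait'' calls block the calling process instead of merely scanning the pool.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

#define NBUF    8     /* size of the shared buffer pool (illustrative) */
#define MSG_LEN 64    /* fixed message length (illustrative)           */

typedef struct {
    int  in_use;      /* buffer taken from the pool                    */
    int  sender;      /* process that sent the original message        */
    int  receiver;    /* process the buffer is currently queued for    */
    int  answered;    /* 0 = request pending, 1 = answer pending       */
    char data[MSG_LEN];
} buffer_t;

static buffer_t pool[NBUF];

/* send message: copy the data into a free buffer, queue it for the
 * receiver, and hand the buffer index back so the answer can be awaited. */
int send_message(int sender, int receiver, const char *data) {
    for (int i = 0; i < NBUF; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            pool[i].sender = sender;
            pool[i].receiver = receiver;
            pool[i].answered = 0;
            strncpy(pool[i].data, data, MSG_LEN - 1);
            pool[i].data[MSG_LEN - 1] = '\0';
            return i;
        }
    }
    return -1;        /* pool exhausted */
}

/* wait message: in the real nucleus this blocks the server; here it
 * simply scans for a pending request addressed to the caller. */
int wait_message(int receiver, char *out) {
    for (int i = 0; i < NBUF; i++) {
        if (pool[i].in_use && !pool[i].answered && pool[i].receiver == receiver) {
            strcpy(out, pool[i].data);
            return i;
        }
    }
    return -1;        /* nothing pending */
}

/* send answer: reuse the same buffer for the reply and route it back
 * to the original sender, giving the implicit return path. */
void send_answer(int buf, const char *reply) {
    strncpy(pool[buf].data, reply, MSG_LEN - 1);
    pool[buf].data[MSG_LEN - 1] = '\0';
    pool[buf].receiver = pool[buf].sender;
    pool[buf].answered = 1;
}

/* wait answer: copy the reply out and return the buffer to the pool. */
int wait_answer(int buf, char *out) {
    if (!pool[buf].in_use || !pool[buf].answered)
        return -1;    /* a real client would block until the answer arrives */
    strcpy(out, pool[buf].data);
    pool[buf].in_use = 0;
    return 0;
}

int main(void) {
    char text[MSG_LEN];
    int request = send_message(1, 2, "read sector 7");  /* client, process 1 */
    int served  = wait_message(2, text);                /* server, process 2 */
    printf("server received: %s\n", text);
    send_answer(served, "contents of sector 7");
    wait_answer(request, text);                         /* client resumes    */
    printf("client received: %s\n", text);
    return 0;
}
</syntaxhighlight>

Note how the answer travels back in the same buffer that carried the request; this built-in return path is what gives the exchange its remote procedure call flavour.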
Monitor divided the application space in two: ''internal processes'' were the execution of traditional programs, started on request, while ''external processes'' were effectively device drivers. External processes were handled outside of user space by the nucleus, although they could be started and stopped just like any other program. Internal processes were started in the context of the ''parent'' that launched them, so each user could effectively build up their own operating system by starting and stopping programs in their own context.
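As a rough illustration of this split, the C sketch below records each process's kind and, for internal processes, the parent in whose context it runs. The structure and field names are assumptions made for the example, not the original data structures.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Hypothetical process descriptor distinguishing the two kinds of
 * process described above; layout and names are assumptions. */
typedef enum { INTERNAL, EXTERNAL } process_kind;

typedef struct process {
    const char           *name;
    process_kind          kind;    /* EXTERNAL: a device driver run by the nucleus */
    const struct process *parent;  /* INTERNAL: runs in the context of its parent  */
} process;

int main(void) {
    process disk  = { "disk driver", EXTERNAL, NULL };    /* handled by the nucleus   */
    process shell = { "user shell",  INTERNAL, NULL };    /* a top-level user process */
    process job   = { "batch job",   INTERNAL, &shell };  /* started by the shell     */

    /* Walking the parent chain shows the private hierarchy a user builds
     * by starting and stopping programs in their own context. */
    for (const process *p = &job; p != NULL; p = p->parent)
        printf("%s (%s)\n", p->name, p->kind == INTERNAL ? "internal" : "external");

    (void)disk;   /* external processes sit outside any parent chain */
    return 0;
}
</syntaxhighlight>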
Scheduling was left entirely to the programs, if required at all (in the 1960s, computer multitasking was a feature of debatable value). One user could start a session in a pre-emptive multitasking environment, while another might start in a single-user mode to run batch processing at higher speed.
Real-time scheduling could be supported by sending messages to a timer process that would only return at the appropriate time.
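For example, a delay can be expressed entirely in terms of the messaging primitives. The fragment below builds on the illustrative sketch above (and so shares its assumed names); the timer process and its message format are further assumptions. A client asks the timer process for a wake-up and then blocks, while the timer loop answers each request only after the requested interval; in the real system the blocking ''wait answer'' is what suspends the client until then.

<syntaxhighlight lang="c">
/* Client side: ask the timer process for a wake-up in 'ms' milliseconds,
 * then block on the answer. Uses the illustrative primitives above. */
void sleep_ms(int self, int timer_process, int ms) {
    char request[MSG_LEN], ignored[MSG_LEN];
    snprintf(request, sizeof request, "wake me in %d ms", ms);
    int buf = send_message(self, timer_process, request);
    wait_answer(buf, ignored);   /* returns only once the timer has answered */
}

/* Timer side, run as its own process: take each pending request and
 * answer it only after the requested interval has elapsed. */
void timer_loop(int self) {
    char request[MSG_LEN];
    for (;;) {
        int buf = wait_message(self, request);   /* blocks in the real nucleus */
        if (buf < 0)
            continue;
        int ms = 0;
        sscanf(request, "wake me in %d ms", &ms);
        /* ...sleep for ms milliseconds here, then send the answer... */
        send_answer(buf, "time is up");
    }
}
</syntaxhighlight>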
These two areas have seen the vast majority of development since Monitor's release, driving newer designs to use hardware to support messaging, and supporting threads within applications to reduce launch times. For instance, Mach required a memory management unit to improve messaging by using the copy-on-write protocol and mapping (instead of copying) data from process to process. Mach also used threading extensively, allowing the external programs, or ''servers'' in more modern terms, to easily start up new handlers for incoming requests. Still, Mach IPC was too slow to make the microkernel approach practically useful. This only changed when Jochen Liedtke's L4 microkernel demonstrated IPC overheads reduced by an order of magnitude.
See also
* THE multiprogramming system
* Timeline of operating systems