IBM Future Systems Project
The Future Systems project (FS) was a research and development project undertaken by IBM in the early 1970s to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware. The new systems were intended to replace the System/370 in the market some time in the late 1970s.

There were two key components to FS. The first was the use of a single-level store, which allows data stored on secondary storage like disk drives to be referred to within a program as if it were data stored in main memory; variables in the code could point to objects in storage, and these would invisibly be loaded into memory, eliminating the need to write code for file handling. The second was to include instructions corresponding to the statements in high-level programming languages, allowing the system to run programs directly without the need for a compiler to convert from the language to machine code. One could, for instance, write a program in a text editor and the machine would be able to run it directly.

Combining the two concepts in a single system in a single step proved to be an impossible task. This concern was raised from the start by the engineers, but it was ignored by management and project leaders for many reasons. Officially started in the fall of 1971, the project was moribund by 1974 and was formally cancelled in February 1975. The single-level store was implemented in the System/38 in 1978 and carried forward to later systems in that lineup, but the concept of a machine that directly runs high-level languages has never appeared in an IBM product.


History


370

The System/360 was announced in April 1964. Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future. One significant change was the introduction of useful integrated circuits (ICs), which would allow the many individual components of the 360 to be replaced with a smaller number of ICs. This would allow a more powerful machine to be built for the same price as existing models.

By the mid-1960s, the 360 had become a massive best-seller. This influenced the design of the new machines, as it led to demands that they have complete backward compatibility with the 360 series. When the machines were announced in 1970, now known as the System/370, they were essentially 360s using small-scale ICs for logic, much larger amounts of internal memory, and other relatively minor changes. A few new instructions were added and others cleaned up, but the system was largely identical from the programmer's point of view.

The recession of 1969–1970 led to slowing sales in the 1970–71 period and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier. For the first time in decades, IBM's growth stalled. While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make it more attractive, others felt nothing short of a complete reimagining of the system would work in the long term.


Replacing the 370

Two months before the announcement of the 370s, the company once again started considering changes in the market and how these would influence future designs. In 1965, Gordon Moore predicted that integrated circuits would see exponential growth in the number of circuits they supported, an observation today known as Moore's law. IBM's Jerrier A. Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was going to zero faster than it could be measured. An internal Corporate Technology Committee (CTC) study concluded that a 30-fold reduction in the price of memory would take place in the next five years, and another 30-fold reduction in the five years after that. If IBM was going to maintain its sales figures, it would have to sell 30 times as much memory in five years, and 900 times as much five years after that. Similarly, the cost of hard disk storage was expected to fall ten-fold over the next ten years. To maintain the company's traditional 15% year-over-year growth, by 1980 IBM would have to be selling 40 times as much disk space and 3,600 times as much memory.

In terms of the computer itself, if one followed the progression from the 360 to the 370 and on to some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost. There was no way IBM could sell such a machine at its current pricing; if it tried, another company would introduce far less expensive systems. It could instead produce much more powerful machines at the same price points, but customers were already underutilizing their existing systems. To provide a reasonable argument to buy a new high-end machine, IBM had to come up with reasons for customers to need this extra power.

Another strategic issue was that while the cost of computing was steadily going down, the costs of programming and operations, being made up of personnel costs, were steadily going up. The part of the customer's IT budget available to hardware vendors would therefore shrink significantly in the coming years, and with it the base for IBM's revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, at the same time reduce the total cost of IT to its customers and capture a larger portion of that cost.
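The CTC figures quoted above follow from simple compounding. As a back-of-the-envelope check (a minimal sketch in Python; only the 15% growth rate and the price-drop factors are taken from the study, the rounding is ours):

    # Rough check of the CTC study's projections.
    growth = 1.15 ** 10            # 15% year-over-year growth over a decade, about 4.05x revenue
    memory_price_drop = 30 * 30    # 30-fold per five years, twice over = 900-fold per decade
    disk_price_drop = 10           # ten-fold over the decade

    print(round(memory_price_drop * growth))  # about 3640: the report's "3,600 times as much memory"
    print(round(disk_price_drop * growth))    # about 40: the report's "40 times as much disk space"

The 900-fold figure by itself is the memory volume needed just to hold revenue flat; multiplying by the compound growth target yields the 3,600-fold figure.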


AFS

In 1969, Bob O. Evans, president of the IBM System Development Division, which developed the company's largest mainframes, asked Erich Bloch of the IBM Poughkeepsie Lab to consider how the company might use these much cheaper components to build machines that would still retain the company's profits. Bloch, in turn, asked Carl Conti to outline such systems. Having seen the term "future systems" already in use, Evans referred to the group as Advanced Future Systems (AFS). The group met roughly biweekly.

Among the many developments initially studied under AFS, one concept stood out. At the time, the first systems with virtual memory (VM) were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store. In this concept, all data in the system is treated as if it is in main memory; if the data is physically located on secondary storage, the VM system automatically loads it into memory when a program calls for it. Instead of writing code to read and write data in files, the programmer simply told the operating system they would be using certain data, which then appeared as objects in the program's memory and could be manipulated like any other variable. The VM system would ensure that the data was synchronized with storage when needed.

This was seen as a particularly useful concept at the time, as the emergence of bubble memory suggested that future systems would not have separate core memory and disk drives; instead, everything would be stored in a large amount of bubble memory. Physically, systems would be single-level stores, so the idea of having another layer of "files" representing separate storage made no sense. Having pointers into a single large memory would not only mean one could simply refer to any data as if it were local, but would also eliminate the need for separate application programming interfaces (APIs) for the same data depending on whether it was loaded or not.


HLS

Evans also asked John McPherson at IBM's Armonk headquarters to chair another group to consider how IBM would offer these new designs across its many divisions. A group of twelve participants spread across three divisions produced the "Higher Level System Report" (HLS), delivered on 25 February 1970. A key component of HLS was the idea that programming was more expensive than hardware. If a system could greatly reduce the cost of development, then that system could be sold for more money, as the overall cost of operation would still be lower than the competition's.

The basic concept of the System/360 series was that a single instruction set architecture (ISA) would be defined that offered every possible instruction the assembly language programmer might desire. Whereas previous systems might be dedicated to scientific programming or currency calculations and had instructions for that sort of data, the 360 offered instructions for both of these and practically every other task. Individual machines were then designed that targeted particular workloads, ran those instructions directly in hardware, and implemented the others in microcode. This meant any machine in the 360 family could run programs from any other, just faster or slower depending on the task. This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run.

Although the 360's instruction set was large, those instructions were still low-level, representing single operations that the central processing unit (CPU) would perform, like "add two numbers" or "compare this number to zero". Programming languages and their links to the operating system allowed users to type in programs using high-level concepts like "open file" or "add these arrays". Compilers would convert these higher-level abstractions into a series of machine code instructions.

For HLS, the instructions would instead represent those higher-level tasks directly; that is, there would be instructions in the machine code for "open file". If a program called this instruction, there was no need to convert it into lower-level code; the machine would carry it out internally in microcode or even a direct hardware implementation. This worked hand-in-hand with the single-level store. To implement HLS, every piece of data in the system was paired with a ''descriptor'', a record that contained the type of the data, its location in memory, and its precision and size. As descriptors could point to arrays and record structures as well, this allowed the machine language to process these as atomic objects.

By representing these much higher-level objects directly in the system, user programs would be much smaller and simpler. For instance, to add two arrays of numbers held in files in traditional languages, one would generally open the two files, read one item from each, add them, and then store the value to a third file. In the HLS approach, one would simply open the files and call add; the underlying operating system would map them into memory, create descriptors showing them both to be arrays, and the add instruction would see that they were arrays and add all the values together. Assigning the result to a newly created array would have the effect of writing it back to storage. A program that might take a page or so of code was now reduced to a few lines. Moreover, as this was the natural language of the machine, the command shell was itself programmable in the same way; there would be no need to "write a program" for a simple task like this, as it could be entered as a command. The report concluded:
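The descriptor idea can be made concrete with a small sketch. The following is a toy illustration in Python, not HLS itself: the Descriptor record and the hls_add routine are invented stand-ins for the descriptor mechanism and the machine-level add instruction described above.

    from dataclasses import dataclass

    @dataclass
    class Descriptor:
        kind: str       # "scalar" or "array": what the data is
        dtype: str      # element type, e.g. "float"
        value: object   # the data itself; in FS this would be a storage address

    def hls_add(x: Descriptor, y: Descriptor) -> Descriptor:
        # One generic "add": the instruction inspects the descriptors and
        # handles whole arrays as atomic objects, with no user-written loop.
        if x.kind == "array" and y.kind == "array":
            total = [a + b for a, b in zip(x.value, y.value)]
            return Descriptor("array", x.dtype, total)
        return Descriptor("scalar", x.dtype, x.value + y.value)

    a = Descriptor("array", "float", [1.0, 2.0, 3.0])
    b = Descriptor("array", "float", [10.0, 20.0, 30.0])
    print(hls_add(a, b).value)   # [11.0, 22.0, 33.0]

In the traditional approach the programmer would write the open-read-add-write loop by hand; under HLS, binding the result to a new name would also, via the single-level store, write it back to storage.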


Compatible concerns

Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services. Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at prices significantly lower than IBM's, thus shrinking the possible base for recovering the cost of software and services. IBM responded by refusing to service machines with these third-party add-ons, which led almost immediately to sweeping anti-trust investigations and many subsequent legal remedies. In 1969, the company was forced to end its bundling arrangements and announced it would sell software products separately.

Gene Amdahl saw an opportunity to sell compatible machines without software; the customer could purchase a machine from Amdahl and the operating system and other software from IBM. If IBM refused to sell it to them, it would be breaching its legal obligations. In early 1970, Amdahl quit IBM and announced his intention to introduce System/370-compatible machines that would be faster than IBM's high-end offerings but cost less to purchase and operate.

At first, IBM was unconcerned: it made most of its money on software and support, and that money would still flow to IBM. But to be sure, in early 1971 an internal IBM task force, Project Counterpoint, was formed to study the concept. It concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish. These events created a desire within the company to find some solution that would once again force customers to purchase everything from IBM, but in a way that would not violate antitrust laws. If IBM followed the suggestions of the HLS report, other vendors would have to copy the microcode implementing the huge number of instructions; as this was software, doing so would expose them to copyright infringement claims. At this point, the AFS/HLS concepts gained new currency within the company.


Future Systems

In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers which would take advantage of IBM's technological advantages in order to render obsolete all previous computers: not only the compatible offerings but also IBM's own products. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating, and maintaining application software. The major objectives of the FS project were consequently stated as follows:
* make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies
* greatly diminish the costs and efforts involved in application development and operations
* provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software, and services)
It was hoped that a new architecture making heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and its customers.


Technology


Data access

One design principle of FS was a "single-level store" which extended the idea of virtual memory (VM) to cover persistent data. In traditional designs, programs allocate memory to hold values that represent data. This data normally disappears if the machine is turned off or the user logs out. To have the data available in the future, additional code is needed to write it to permanent storage like a hard drive, and then read it back later. To ease these common operations, a number of database engines emerged in the 1960s that allowed programs to hand data to the engine, which would then save it and retrieve it again on demand.

Another emerging technology at the time was the concept of virtual memory. In early systems, the amount of memory available for a program to allocate was limited by the amount of main memory in the system, which might vary based on such factors as the program being moved from one machine to another, or other programs allocating memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, much more than the physical memory in the machine. If a program asks to allocate memory that is not physically available, a block of main memory is written out to disk and that space is used for the new allocation. If the program requests data from that offloaded ("paged" or "spooled") memory area, it is invisibly loaded back into main memory.

A single-level store is essentially an expansion of virtual memory to all memory, internal or external. VM systems invisibly write memory to a disk, which is the same task performed by a file system, so there is no reason the VM cannot serve as the file system. Instead of programs allocating memory from "main memory" which is then perhaps sent to some other backing store by the VM, ''all'' memory is immediately allocated by the VM. This means there is no need to save and load data; simply allocating it in memory has that effect, as the VM system writes it out. When the user logs back in, that data, and the programs that were running it, as they are also in the same unified memory, are immediately available in the same state they were in before. The entire concept of loading and saving is removed; programs, and entire systems, pick up where they were even after a machine restart.

This concept had been explored in the Multics system but proved to be very slow there. That was a side-effect of the available hardware, in which the main memory was implemented in core with a far slower backing store in the form of a hard drive or drum. With the introduction of new forms of non-volatile memory, most notably bubble memory, which worked at speeds similar to core but had storage density similar to a hard disk, it appeared a single-level store would no longer have any performance downside.

Future Systems planned on making the single-level store the key concept of its new operating systems. Instead of a separate database engine that programmers would call, there would simply be calls in the system's application programming interface (API) to retrieve memory. And those API calls would be based on particular hardware or microcode implementations, available only on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it.
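Modern operating systems expose a limited form of this idea through memory-mapped files, which makes for a useful analogy. A minimal sketch using Python's standard mmap module (the file name and record layout are invented for illustration): once the mapping is established, persistence happens through ordinary memory reads and writes rather than explicit file I/O.

    import mmap
    import os

    PATH = "counter.dat"   # hypothetical backing file standing in for the store
    SIZE = 4096

    # Create the backing file on first use.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * SIZE)

    with open(PATH, "r+b") as f:
        mem = mmap.mmap(f.fileno(), SIZE)               # map the file into the address space
        counter = int.from_bytes(mem[0:8], "little")    # read it like memory
        mem[0:8] = (counter + 1).to_bytes(8, "little")  # a plain memory write...
        mem.flush()                                     # ...made durable by the paging machinery
        mem.close()

Each run of the script finds the counter where the previous run left it. In FS the mapping was to be implicit and universal: every allocation would live in the single store, so even the explicit mapping and flush steps would disappear from the programmer's view.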


Processor

Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC).

Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, database software, and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, microcode, and conventional software. More than one layer of microcode and code was contemplated, sometimes referred to as picocode or millicode. Depending on the people one was talking to, the very notion of a "machine" therefore ranged from those functions which were implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects).

The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC). Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first RISC machine. In the long run, the resulting IBM 801 RISC architecture, which eventually evolved into IBM's POWER, PowerPC, and Power architectures, proved to be vastly cheaper to implement and capable of achieving much higher clock rates.
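The layering can be pictured as a single architectural instruction fanning out into many primitive steps. A toy sketch (all names invented, and far simpler than anything FS specified) of one CISC-style opcode implemented by a "microcode" routine built from RISC-style primitives:

    # Toy two-level machine: one architectural instruction, many primitives.
    def micro_load(mem, addr):            # primitive: fetch one word
        return mem[addr]

    def micro_store(mem, addr, value):    # primitive: store one word
        mem[addr] = value

    def micro_add(a, b):                  # primitive: one ALU operation
        return a + b

    def exec_add_arrays(mem, src1, src2, dst, n):
        """A single FS-style opcode: add two n-element arrays.
        The caller issues one instruction; this loop is the microcode."""
        for i in range(n):
            a = micro_load(mem, src1 + i)
            b = micro_load(mem, src2 + i)
            micro_store(mem, dst + i, micro_add(a, b))

    mem = list(range(16))                 # toy flat memory
    exec_add_arrays(mem, src1=0, src2=4, dst=8, n=4)
    print(mem[8:12])                      # [4, 6, 8, 10]

A RISC design exposes only the three micro-operations and leaves the loop to the compiler; FS proposed to bury whole routines of this kind, up to a complete linkage editor, below the instruction set.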


Development


Project start

The FS project was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. In the course of time, several other research projects in various IBM locations merged into the FS project or became associated with it.


Project management

During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of need-to-know by the project office. Documents were tracked and could be recalled at any time. In his memo (see External links below), John F. Sowa noted: ''The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved.'' As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. Some teams were even working on FS without knowing it. This explains why, when asked to define FS, most people give a very partial answer, limited to the intersection of FS with their own field of competence.


Planned product lines

Four implementations of the FS architecture were planned: the top-of-the-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the next model down was being designed in Endicott, NY, which had responsibility for the mid-range computers; the model below that was being designed in Böblingen, Germany; and the smallest model was being designed in Hursley, UK. A continuous range of performance could be offered by varying the number of processors in a system at each of the four implementation levels. In early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory, halfway between the Armonk/White Plains headquarters and Poughkeepsie.


Project end

The FS project was terminated in 1975. The reasons given for terminating the project depend on the person asked, each of whom puts forward the issues related to the domain with which they were familiar. In reality, the success of the project depended on a large number of breakthroughs in all areas, from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero.

One symptom was the poor performance of the largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC versus CISC designs. The complexity of the instruction set was another obstacle; it was considered "incomprehensible" by IBM's own engineers, and there were strong indications that the system-wide single-level store could not be backed up in part, foreshadowing the IBM AS/400's partitioning of the System/38's single-level store. Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than the System/370 emulator on the same machine.

The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted because there was no reasonable application migration path for 360-architecture customers. To leave maximum freedom to design a truly revolutionary system, ease of application migration had not been one of the primary design goals for the FS project; it was instead to be addressed by software migration aids that took the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL and assembly language applications to FS was in many cases likely to be greater than the cost of acquiring a new system.


Results

Although the FS project as a whole was terminated, a simplified version of the architecture for the smallest of the planned machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming but was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. In both machines, the high-level instruction set generated by compilers is not interpreted but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set. In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's IBM 801 RISC developments. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system.

Besides the System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated into the following parts of IBM's product line:
* the IBM 3081 mainframe computer, which was essentially the top-of-the-line machine designed in Poughkeepsie, with the FS microcode removed and the System/370 emulator microcode used instead
* the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM
* the IBM 3850 automatic magnetic tape library
* the IBM 8100 mid-range computer, which was based on a CPU called the ''Universal Controller'' that had been intended for FS input/output processing
* network enhancements concerning VTAM and NCP




External links

* An internal memo by John F. Sowa, outlining the technical and organizational problems of the FS project in late 1974.
* Overview of IBM Future Systems