The 1960s: The Era of Timesharing and Multiprogramming
The systems of the 1960s were also batch-processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. They contained many peripheral devices such as card readers, card punches, printers, tape drives, and disk drives, and any one job rarely used all of a computer's resources effectively. Operating system designers observed that when one job was waiting for an input/output operation to complete before it could continue using the processor, some other job could be using the idle processor; similarly, while one job was using the processor, other jobs could be using the various I/O devices. They realized that running a mixture of diverse jobs appeared to be the best way to optimize computer utilization. The technique by which they did so is called multiprogramming, in which several jobs simultaneously compete for system resources: a job that must wait for I/O yields the CPU to another job that is ready to compute, so input/output and computation can proceed simultaneously. This greatly increased CPU utilization and system throughput.
To take maximum advantage of multiprogramming, several jobs must reside in the computer's main storage at once; then, when one job requests input/output, the CPU may be switched to another immediately and continue calculating without delay. As a result, multiprogramming required more storage than a single-job system. The operating systems of the 1960s, while capable of multiprogramming, were limited by memory capacity. This led to various multiprogramming designs, such as variable-partition multiprogramming, that helped utilize storage much more efficiently (Smith, 1980).
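The utilization argument can be made concrete with a small simulation. The sketch below is illustrative only, with hypothetical burst lengths and a simple first-come choice of job; it is not any historical system's actual scheduler. It runs three jobs that alternate short CPU bursts with longer I/O waits, once back to back and once multiprogrammed, and reports how busy the CPU was:

```python
def run(jobs, multiprogrammed):
    """jobs: a list of burst lists; each burst is a ('cpu'|'io', ms) pair,
    and each job is assumed to begin with a CPU burst."""
    if not multiprogrammed:
        # Uniprogramming: jobs run one after another and the CPU
        # sits idle during every I/O burst.
        elapsed = sum(d for job in jobs for _, d in job)
        busy = sum(d for job in jobs for kind, d in job if kind == 'cpu')
        return elapsed, busy

    pending = [{'bursts': list(job), 'ready_at': 0} for job in jobs]
    clock = busy = finish = 0
    while pending:
        runnable = [j for j in pending if j['ready_at'] <= clock]
        if not runnable:
            # Every resident job is waiting on a device: the CPU idles.
            clock = min(j['ready_at'] for j in pending)
            continue
        job = runnable[0]
        _, d = job['bursts'].pop(0)           # head burst is a CPU burst
        clock += d                            # run it to its logical
        busy += d                             # stopping point (the I/O)
        if job['bursts'] and job['bursts'][0][0] == 'io':
            _, io = job['bursts'].pop(0)      # the I/O proceeds on a device
            job['ready_at'] = clock + io      # while the CPU serves others
        else:
            job['ready_at'] = clock
        if not job['bursts']:
            finish = max(finish, job['ready_at'])
            pending.remove(job)
    return max(clock, finish), busy


# Three hypothetical jobs, each alternating 2 ms of computing with 6 ms of I/O.
jobs = [[('cpu', 2), ('io', 6)] * 3 for _ in range(3)]
for mode in (False, True):
    elapsed, busy = run(jobs, mode)
    print(f"multiprogrammed={mode}: {elapsed} ms elapsed, "
          f"CPU busy {100 * busy // elapsed}% of the time")
```

With these sample numbers, the back-to-back run keeps the CPU busy only 25% of the time, while the multiprogrammed run overlaps computation with I/O and finishes in well under half the elapsed time.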
In the late 1950s and 1960s, under the batch-processing mode, users were not normally present in the computing facility when their jobs were run. Jobs were generally submitted on punched cards and magnetic tapes, and would remain in the input tables for hours or even days until they could be loaded into the computer for execution. The slightest error in a program, even a missing period or comma, would “dump” the job, at which point the user would correct the error, resubmit the job, and once again wait hours or days before the next execution of the job could be attempted. Software development in such an environment was a particularly slow process (Weizer, 1981).
University environments provided a fertile ground for dealing with such limitations. Student programs tended not to be uniform from week to week, or from one student to another, and it was important that students receive clear messages about the kinds of errors they had made. In 1959-1960, a system called MAD (Michigan Algorithm Decoder) was developed at the University of Michigan. MAD was based on ALGOL but, unlike ALGOL, it took care of the details of running a job in ways that few other languages could.
MAD offered fast compilation, essential in a teaching environment, and it had good diagnostics to help students find and correct errors. These qualities made the system attractive not only to student programmers but also to various researchers on the University of Michigan campus (Rosin, 1969).
While groups such as the one at the University of Michigan worked to provide better diagnostics and error-correcting mechanisms, other groups tried to develop systems that would allow greater access to computing systems and reduce the waiting time for jobs to execute. One of the major developments in this direction was the timesharing system, which enabled many users to share computer resources simultaneously. In the timesharing mode, the computer spends a fixed amount of time on one program before proceeding to another. Each user is allocated a tiny slice of time (say, two milliseconds); the computer performs whatever operations it can for that user in the allocated time and then moves on to the next user. What made such a concept possible was the gap between human and machine speeds: the interval between a user's keystrokes, a few milliseconds at least, was enough time for a computer to fetch and execute dozens, perhaps hundreds, of simple instructions. The few seconds a user might pause to ponder the next command to type was time enough for a computer, even in those days, to let another user's job execute, while giving each user the illusion that the complete machine (including its I/O devices) and its software were at his or her disposal. Although this concept seems similar to multiprogramming, in multiprogramming the computer works on one program until it reaches a logical stopping point, such as an input/output event, whereas in a timesharing system every job is allocated a specific small time slice (Laudon & Laudon, 1997).
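The time-slice idea can be illustrated with a toy round-robin scheduler. The following is a minimal sketch with made-up user names and CPU demands, not the actual CTSS algorithm (which used a more elaborate multilevel scheme): each job runs for at most one fixed quantum before the CPU moves on to the next user in line.

```python
from collections import deque

QUANTUM = 2  # ms: the "tiny slice of time" each user receives in turn

def round_robin(jobs):
    """jobs: {user: total CPU ms needed}. Prints the interleaving."""
    queue = deque(jobs.items())
    clock = 0
    while queue:
        user, remaining = queue.popleft()
        slice_ = min(QUANTUM, remaining)     # preempt after one quantum
        print(f"t={clock:>2} ms: {user} runs {slice_} ms")
        clock += slice_
        remaining -= slice_
        if remaining:                        # unfinished: rejoin the line
            queue.append((user, remaining))

# Three hypothetical users with a few milliseconds of work each.
round_robin({'alice': 5, 'bob': 3, 'carol': 4})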
MIT’s Department of Electrical Engineering was one of the pioneers of the timesharing system, under the guidance of John McCarthy, Robert Fano, and Fernando Corbato. Since 1957 it had been running an IBM 704 in batch-processing mode, but the teaching of programming and the development of software were very difficult given turnaround times, the interval between the submission of a job and the return of its results, of hours and even days. This motivated the group to develop a system that would reduce turnaround time substantially, and in November 1961 MIT implemented the first timesharing system, called CTSS (Compatible Time-Sharing System). The demonstration version allowed just three users to share the computer at a time. It reduced turnaround time to minutes and later to seconds, and it demonstrated the value of interactive computing, as timesharing was also called (Crisman, 1964).
Timesharing systems helped facilitate the software development process significantly. With turnaround time reduced to minutes, a person writing a new program no longer had to wait hours or days to correct errors. With timesharing, a programmer could enter a program, compile it, receive a list of syntax errors, correct them immediately, and repeat this cycle until the program was free of syntax errors, thereby reducing development time significantly (Crisman, 1964).
Within a year of MIT's successful demonstration, several other universities, research organizations, and manufacturers, noting the advantages of timesharing, had begun to develop their own systems. Many of these systems later evolved into the next generation of operating systems. MIT, for example, developed the Multics operating system as the successor to CTSS; Multics, although not successful itself, gave rise to perhaps the most versatile operating system still in use today, the UNIX system. In 1964, IBM developed CP/CMS, a timesharing system for its new System/360 mainframe, at its Cambridge Scientific Center; it eventually became the VM operating system, a major operating system for IBM's System/360 and System/370 computers (Weizer, 1981).
Source: Prof. Tim Bergin