welcome

.....welcome to the world of operating systems.....

Thursday, 26 April 2012

History of Windows


It’s the 1970s. At work, we rely on typewriters. If we need to copy a document, we likely use a mimeograph or carbon paper. Few have heard of microcomputers, but two young computer enthusiasts, Bill Gates and Paul Allen, see that personal computing is a path to the future.


In 1975, Gates and Allen form a partnership called Microsoft. Like most start-ups, Microsoft begins small, but has a huge vision—a computer on every desktop and in every home. During the next years, Microsoft begins to change the ways we work.
Getting started: Microsoft co-founders Paul Allen (left) and Bill Gates
In June 1980, Gates and Allen hire Gates’ former Harvard classmate Steve Ballmer to help run the company. The next month, IBM approaches Microsoft about a project code-named "Chess." In response, Microsoft focuses on a new operating system—the software that manages, or runs, the computer hardware and also serves to bridge the gap between the computer hardware and programs, such as a word processor. It’s the foundation on which computer programs can run. They name their new operating system "MS‑DOS."


When the IBM PC running MS‑DOS ships in 1981, it introduces a whole new language to the general public. Typing “C:” and various cryptic commands gradually becomes part of daily work. People discover the backslash (\) key.
MS‑DOS is effective, but also proves difficult to understand for many people. There has to be a better way to build an operating system.


Geek trivia: MS‑DOS stands for Microsoft Disk Operating System.



1982–1985: Introducing Windows 1.0

The Windows 1.0 desktop


Microsoft works on the first version of a new operating system. Interface Manager is the code name and is considered as the final name, but Windows prevails because it best describes the boxes or computing “windows” that are fundamental to the new system. Windows is announced in 1983, but it takes a while to develop. Skeptics call it “vaporware.”



On November 20, 1985, two years after the initial announcement, Microsoft ships Windows 1.0. Now, rather than typing MS‑DOS commands, you just move a mouse to point and click your way through screens, or “windows.” Bill Gates says, “It is unique software designed for the serious PC user…"


There are drop-down menus, scroll bars, icons, and dialog boxes that make programs easier to learn and use. You're able to switch among several programs without having to quit and restart each one. Windows 1.0 ships with several programs, including MS‑DOS file management, Paint, Windows Writer, Notepad, Calculator, and a calendar, card file, and clock to help you manage day-to-day activities. There’s even a game—Reversi.


Geek trivia: Remember floppy disks and kilobytes? Windows 1.0 requires a minimum of 256 kilobytes (KB), two double-sided floppy disk drives, and a graphics adapter card. A hard disk and 512 KB of memory are recommended for running multiple programs or when using DOS 3.0 or higher.


1987–1992: Windows 2.0–2.11—More windows, more speed

On December 9, 1987, Microsoft releases Windows 2.0 with desktop icons and expanded memory. With improved graphics support, you can now overlap windows, control the screen layout, and use keyboard shortcuts to speed up your work. Some software developers write their first Windows-based programs for this release.
Windows 2.0
Windows 2.0 is designed for the Intel 286 processor. When the Intel 386 processor is released, Windows/386 soon follows to take advantage of its extended memory capabilities. Subsequent Windows releases continue to improve the speed, reliability, and usability of the PC.
In 1988, Microsoft becomes the world’s largest PC software company based on sales. Computers are starting to become a part of daily life for some office workers.
Geek trivia: Control Panel makes its first appearance in Windows 2.0.

1990–1994: Windows 3.0 and Windows NT—Getting the graphics

On May 22, 1990, Microsoft announces Windows 3.0, followed shortly by Windows 3.1 in 1992. Taken together, they sell 10 million copies in their first 2 years, making this the most widely used Windows operating system yet. The scale of this success causes Microsoft to revise earlier plans. Virtual Memory improves visual graphics. In 1990 Windows starts to look like the versions to come.

Windows now has significantly better performance, advanced graphics with 16 colors, and improved icons. A new wave of 386 PCs helps drive the popularity of Windows 3.0. With full support for the Intel 386 processor, programs run noticeably faster. Program Manager, File Manager, and Print Manager arrive in Windows 3.0.

Bill Gates shows the newly-released Windows 3.0
Windows software is installed with floppy discs bought in large boxes with heavy instruction manuals.

The popularity of Windows 3.0 grows with the release of a new Windows software development kit (SDK), which helps software developers focus more on writing programs and less on writing device drivers.

Windows is increasingly used at work and home and now includes 
games like Solitaire, Hearts, and Minesweeper. An advertisement: “Now you can use the incredible power of Windows 3.0 to goof off.”

Windows for Workgroups 3.11 adds peer-to-peer workgroup and domain networking support and, for the first time, PCs become an integral part of the emerging client/server computing evolution.

Windows NT

When Windows NT releases on July 27, 1993, Microsoft meets an important milestone: the completion of a project begun in the late 1980s to build an advanced new operating system from scratch. "Windows NT represents nothing less than a fundamental change in the way that companies can address their business computing requirements," Bill Gates says at its release.
Unlike Windows 3.1, however, Windows NT 3.1 is a 32-bit operating system, which makes it a strategic business platform that supports high-end engineering and scientific programs.
Geek trivia: The group that develops Windows NT was originally called the "Portable Systems" team.



1995–2001: Windows 95—the PC comes of age (and don't forget the Internet)

The Windows 95 desktop
On August 24, 1995, Microsoft releases Windows 95, selling a record-setting 7 million copies in the first five weeks. It’s the most publicized launch Microsoft has ever taken on. Television commercials feature the Rolling Stones singing "Start Me Up" over images of the new Start button. The press release simply begins: “It’s here.”

This is the era of fax/modems, e‑mail, the new online world, and dazzling multimedia games and educational software. Windows 95 has built-in Internet support, dial-up networking, and new Plug and Play capabilities that make it easy to install hardware and software. The 32-bit operating system also offers enhanced multimedia capabilities, more powerful features for mobile computing, and integrated networking.

At the time of the Windows 95 release, the previous Windows and MS‑DOS operating systems are running on about 80 percent of the world’s PCs. Windows 95 is the upgrade to these operating systems. To run Windows 95, you need a PC with a 386DX or higher processor (486 recommended) and at least 4 MB of RAM (8 MB of RAM recommended). Upgrade versions are available for both floppy disk and CD-ROM formats. It’s available in 12 languages.

Windows 95 features the first appearance of the Start menu, taskbar, and minimize, maximize, and close buttons on each window.

Catching the Internet wave

In the early 1990s, tech insiders are talking about the Internet—a network of networks that has the power to connect computers all over the world. In 1995, Bill Gates delivers a memo titled “The Internet Tidal Wave,” and declares the Internet as “the most important development since the advent of the PC.”
In the summer of 1995, the first version of Internet Explorer is released. The browser joins those already vying for space on the World Wide Web.
Geek trivia: In 1996, Microsoft releases Flight Simulator for Windows 95—the first time in its 14-year history that it’s available for Windows.

1998–2000: Windows 98, Windows 2000, Windows Me

Windows 98

The Windows 98 desktop
Released on June 25, 1998, Windows 98 is the first version of Windows designed specifically for consumers. PCs are common at work and home, and Internet cafes where you can get online are popping up. Windows 98 is described as an operating system that “Works Better, Plays Better.”
With Windows 98, you can find information more easily on your PC as well as the Internet. Other improvements include the ability to open and close programs more quickly, and support for reading DVD discs and universal serial bus (USB) devices. Another first appearance is the Quick Launch bar, which lets you run programs without having to browse the Start menu or look for them on the desktop.
Geek trivia: Windows 98 is the last version based on MS‑DOS.
Windows 98

Windows Me

The Windows Me media experience
Designed for home computer use, Windows Me offers numerous music, video, and home networking enhancements and reliability improvements compared to previous versions.
First appearances: System Restore, a feature that can roll back your PC software configuration to a date or time before a problem occurred. Windows Movie Maker provides users with the tools to digitally edit, save, and share home videos. And with Microsoft Windows Media Player 7 technologies, you can find, organize, and play digital media.
Geek trivia: Technically speaking, Windows Me was the last Microsoft operating system to be based on the Windows 95 code base. Microsoft announced that all future operating system products would be based on the Windows NT and Windows 2000 kernel.



Windows 2000 Professional

Windows 2000 Professional
More than just the upgrade to Windows NT Workstation 4.0, Windows 2000 Professional is designed to replace Windows 95, Windows 98, and Windows NT Workstation 4.0 on all business desktops and laptops. Built on top of the proven Windows NT Workstation 4.0 code base, Windows 2000 adds major improvements in reliability, ease of use, Internet compatibility, and support for mobile computing.
Among other improvements, Windows 2000 Professional simplifies hardware installation by adding support for a wide variety of new Plug and Play hardware, including advanced networking and wireless products, USB devices, IEEE 1394 devices, and infrared devices.
Geek trivia: The nightly stress test performed on Windows 2000 during development is the equivalent of three months of run time on up to 1,500 computers.

2001–2005: Windows XP—Stable, usable, and fast

The Windows XP Home Edition desktop
On October 25, 2001, Windows XP is released with a redesigned look and feel that's centered on usability and a unified Help and Support services center. It’s available in 25 languages. From the mid-1970s until the release of Windows XP, about 1 billion PCs have been shipped worldwide.
For Microsoft, Windows XP will become one of its best-selling products in the coming years. It’s both fast and stable. Navigating the Start menu, taskbar, and Control Panel are more intuitive. Awareness of computer viruses and hackers increases, but fears are to a certain extent calmed by the online delivery of security updates. Consumers begin to understand warnings about suspicious attachments and viruses. There’s more emphasis on Help and Support.
Ship it: Windows XP Professional rolls to retail stores
Windows XP Home Edition offers a clean, simplified visual design that makes frequently used features more accessible. Designed for home use, Windows XP offers such enhancements as the Network Setup Wizard, Windows Media Player, Windows Movie Maker, and enhanced digital photo capabilities.
Windows XP Professional brings the solid foundation of Windows 2000 to the PC desktop, enhancing reliability, security, and performance. With a fresh visual design, Windows XP Professional includes features for business and advanced home computing, including remote desktop support, an encrypting file system, and system restore and advanced networking features. Key enhancements for mobile users include wireless 802.1x networking support, Windows Messenger, and Remote Assistance.
Windows XP has several editions during these years:
  • Windows XP 64-bit Edition (2001) is the first Microsoft operating system for 64-bit processors designed for working with large amounts of memory and projects such as movie special effects, 3D animations, engineering, and scientific programs.
  • Windows XP Media Center Edition (2002) is made for home computing and entertainment. You can browse the Internet, watch live television, enjoy digital music and video collections, and watch DVDs.
  • Windows XP Tablet PC Edition (2002) realizes the vision of pen-based computing. Tablet PCs include a digital pen for handwriting recognition and you can use the mouse or keyboard, too.
Geek trivia: Windows XP is compiled from 45 million lines of code.

2006–2008: Windows Vista—Smart on security

The Windows Vista desktop
Windows Vista is released in 2006 with the strongest security system yet. User Account Control helps prevent potentially harmful software from making changes to your computer. In Windows Vista Ultimate, BitLocker Drive Encryption provides better data protection for your computer, as laptop sales and security needs increase. Windows Vista also features enhancements to Windows Media Player as more and more people come to see their PCs as central locations for digital media. Here you can watch television, view and send photographs, and edit videos.
Windows Vista Ultimate
Design plays a big role in Windows Vista, and features such as the taskbar and the borders around windows get a brand new look. Search gets new emphasis and helps people find files on their PCs faster. Windows Vista introduces new editions that each have a different mix of features. It's available in 35 languages. The redesigned Start button makes its first appearance in Windows Vista.
Geek trivia: More than 1.5 million devices are compatible with Windows Vista at launch.

2009–Today: Windows 7 and counting...

The Windows 7 desktop
By the late 2000s, the wireless world has arrived. When Windows 7 is released in October 2009, laptops are outselling desktop PCs and it’s common to get online at public wireless hotspots like coffee shops. Wireless networks can be created at the office or at home.
Windows 7 includes many features, such as new ways to work with windows—Snap, Peek, and Shake. Windows Touch makes its debut, enabling you to use your fingers to browse the web, flip through photos, and open files and folders. You can stream music, videos, and photos from your PC to a stereo or TV.
By the fall of 2010, Windows 7 is selling seven copies a second—the fastest-selling operating system in history.
Improvements to the Windows 7 taskbar include live thumbnail previews
Geek trivia: Windows 7 is evaluated by 8 million beta testers worldwide before it's released.

What's next?

Many laptops no longer have a slot for DVDs and some have solid state drives rather than conventional hard disks. Most everything is streamed, saved on flash drives, or saved in the "Cloud"—an online space for sharing files and storage. Windows Live—free programs and services for photos, movies, instant messaging, e‑mail, and social networking—is seamlessly integrated with Windows so that you can keep in touch from your PC, phone, or the web, extending Windows to the Cloud.
Meanwhile, work is underway for the next version of Windows.





Saturday, 21 April 2012

Implementation of LRU

There are several ways to implement the LRU algorithm, but two are fairly well known: the counter and the stack.
Counter
This approach uses a counter, or logical clock. Each page has a value that is initially set to 0. Every time the page is accessed, its clock value increases by one; the more often the page is accessed, the greater its counter value, and vice versa. Doing this requires an extra write to memory: besides the pages that are loaded, memory also holds the counter of each page. The page that is replaced is the one with the smallest clock value, that is, the page that has been used least recently. The drawback of this approach is that the counter requires additional hardware support.
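To make the counter idea concrete, here is a minimal sketch in Python (not from the original post; the class name, the page-table layout, and the reference string are invented for illustration). Every access stamps the page with the current value of a logical clock, and the victim is the page with the smallest stamp.

# Counter (logical clock) LRU, illustrative sketch only.
class CounterLRU:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.clock = 0      # logical clock, advanced on every access
        self.stamps = {}    # page -> clock value at its last access (the per-page counter)

    def access(self, page):
        """Access a page; return True if the access caused a page fault."""
        self.clock += 1     # the "extra write to memory" the text mentions
        fault = page not in self.stamps
        if fault and len(self.stamps) >= self.num_frames:
            # Victim: the page with the smallest counter, i.e. the least recently used one.
            victim = min(self.stamps, key=self.stamps.get)
            del self.stamps[victim]
        self.stamps[page] = self.clock
        return fault

lru = CounterLRU(num_frames=3)
print(sum(lru.access(p) for p in [7, 0, 1, 2, 0, 3, 0, 4]))  # 6 page faults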
Stack
This approach uses a stack that records which pages are in memory. Every time a page is accessed, it is placed on top of the stack. When a page has to be replaced, the page at the bottom of the stack is the one replaced, so there is no need to search through the pages for a victim on every access. Compared with the counter implementation, implementing LRU with a stack is more expensive, because the contents of the stack must be updated on every page access, whereas with the counter only the counter of the accessed page changes; the counters of the other pages are left untouched.
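A comparable sketch of the stack variant (again hypothetical Python): the accessed page is moved to the top of the stack on every reference, so the bottom of the stack is always the victim, and the cost of shuffling the stack on every access is visible in the code.

# Stack LRU, illustrative sketch: top of the stack = most recently used page.
def lru_stack_faults(references, num_frames):
    stack = []     # stack[-1] is the most recently used page, stack[0] the least
    faults = 0
    for page in references:
        if page in stack:
            stack.remove(page)      # updating the stack on every access is the extra cost
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.pop(0)        # replace the page at the bottom of the stack
        stack.append(page)          # the accessed page goes on top
    return faults

print(lru_stack_faults([7, 0, 1, 2, 0, 3, 0, 4], num_frames=3))  # 6 page faults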

Image. LRU algorithm with Stack
Image. LRU algorithm
Other algorithms
There are actually many page-replacement algorithms besides the three main algorithms discussed earlier (main does not mean most often used). The following are two other algorithms that are also quite popular and easy to implement.
The first is the second-chance algorithm. The second-chance algorithm is an enhanced version of the FIFO algorithm. It uses an additional reference bit whose value is 0 or 1. Where FIFO uses a simple queue, second chance uses a circular queue. A page that has just been loaded or has just been used has its reference bit set to 1. A page whose reference bit is 1 is not replaced immediately, even when it is at the bottom of the queue (unlike FIFO).


The second-chance algorithm works through the following steps:

  • When a page fault occurs and there is no free frame, a sweep (a search for a victim) is made for a page whose reference bit is 0, starting from the bottom of the queue (as in FIFO).
  • Every page that is not swapped out (because its reference bit is 1) has its reference bit cleared to 0 as the sweep passes it.
  • If a page whose reference bit is 0 is found, that page is swapped out.
  • If the end of the queue is reached without finding a page whose reference bit is 0, the sweep starts again from the beginning.
The picture below illustrates the second-chance algorithm, with the FIFO algorithm as a comparison; a small code sketch follows.
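The steps above can be sketched roughly as follows (hypothetical Python; the function name and the circular-queue representation are chosen for the example, and frames are scanned with a moving "hand" as in the clock formulation).

# Second-chance page replacement, illustrative sketch: FIFO plus a reference bit.
def second_chance_faults(references, num_frames):
    frames = []      # resident pages, treated as a circular queue
    ref_bit = {}     # page -> reference bit (0 or 1)
    hand = 0         # current sweep position in the circular queue
    faults = 0
    for page in references:
        if page in frames:
            ref_bit[page] = 1                # give the page a second chance
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)              # free frame available, no victim needed
        else:
            # Sweep: pages with bit 1 are spared but their bit is cleared to 0.
            while ref_bit[frames[hand]] == 1:
                ref_bit[frames[hand]] = 0
                hand = (hand + 1) % num_frames
            del ref_bit[frames[hand]]        # page with bit 0 found: swap it out
            frames[hand] = page
            hand = (hand + 1) % num_frames
        ref_bit[page] = 1                    # a newly loaded page starts with bit 1
    return faults

print(second_chance_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))  # 9 faults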



The second algorithm is the random algorithm. Like FIFO, the random algorithm is a fairly simple one: the victim page is simply selected at random. The algorithm has a relatively low cost, because it does not require a stack, queue, or counter. Compared with FIFO, the average page-fault rate of the random algorithm is lower in many cases. Compared with LRU, the random algorithm is superior for looping memory references, because the random algorithm itself requires no looping.
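A sketch of the random policy (hypothetical Python) shows how little bookkeeping it needs: no counter, queue, or stack, just a random choice of victim frame. The fixed seed is only there to make the example repeatable.

import random

# Random page replacement, illustrative sketch: the victim frame is chosen at random.
def random_faults(references, num_frames, seed=0):
    rng = random.Random(seed)
    frames = []
    faults = 0
    for page in references:
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            frames[rng.randrange(num_frames)] = page   # evict a randomly chosen frame
    return faults

print(random_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))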

Image. Random algorithm


source : Kemal Nasir & Renggo Pribadi @CSUI

Algorithms in operating systems

FIFO algorithm
This is the simplest algorithm. Its principle is that of a queue (with no priorities): the page that came in first goes out first as well. The resident pages are kept in a first-in, first-out structure; if no frame is free when a page fault occurs, the victim is the page at the bottom of the structure, that is, the page that has been in memory the longest.
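A minimal sketch of the FIFO policy in Python (not from the original post; the deque-based representation is just one convenient way to model the queue). The trace printed at each reference shows the oldest page leaving first.

from collections import deque

# FIFO page replacement, illustrative sketch: the page that entered first leaves first.
def fifo_trace(references, num_frames):
    frames = deque()       # left end = page that has been in memory the longest
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()             # victim: the oldest resident page
            frames.append(page)
        print(f"ref {page}: frames = {list(frames)}")
    return faults

print("faults:", fifo_trace([7, 0, 1, 2, 0, 3, 0, 4], num_frames=3))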

At first the algorithm was considered adequate for handling page replacement, until in the 1970s Belady discovered an oddity in it that has since been known as Belady's anomaly. Belady's anomaly is a condition in which the page-fault rate increases as the number of frames increases, as can be seen in the example below.

When the number of frames is increased from 3 to 4, the number of page faults also increases (from 14 page faults to 15). This usually happens when a page that has just been swapped out is needed again right away. For that reason, other algorithms that handle page replacement better were sought, as discussed below.
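The anomaly can be reproduced with a short experiment (hypothetical Python). The reference string below is the classic example usually used to show Belady's anomaly; it is not necessarily the one behind the 14-versus-15 figures mentioned above, which come from the missing illustration.

from collections import deque

def fifo_faults(references, num_frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in references:
        if page in resident:
            continue
        faults += 1
        if len(queue) == num_frames:
            resident.remove(queue.popleft())
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady reference string
print("3 frames:", fifo_faults(refs, 3))      # 9 page faults
print("4 frames:", fifo_faults(refs, 4))      # 10 page faults: more frames, more faults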

Optimal algorithm
As its name implies, this is the most optimal algorithm. Its principle is to replace the page that will not be used again for the longest time, so that page replacement becomes more efficient (fewer page faults occur) and the algorithm is free of Belady's anomaly. It has the lowest page-fault rate of all the algorithms in every case. The optimal algorithm is nevertheless impractical, because it is very difficult to implement: the system cannot know in advance which pages will be used next.
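Although it cannot be realized online, the optimal policy can be simulated when the entire reference string is known in advance. A hypothetical Python sketch (the function name and reference string are invented for the example):

# Optimal (OPT) page replacement, illustrative sketch:
# evict the resident page whose next use lies farthest in the future.
def optimal_faults(references, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        def next_use(p):
            # Position of the next reference to p, or infinity if p is never used again.
            try:
                return references.index(p, i + 1)
            except ValueError:
                return float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, num_frames=3))   # 7 page faults with 3 frames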

LRU (Least Recently Used) algorithm
Because the optimal algorithm is so difficult to implement, another algorithm was devised whose performance comes close to the optimal algorithm at a slightly greater cost. This algorithm replaces the pages that have gone unused the longest, on the assumption that a page that has not been used for a long time is probably no longer needed, while a page that was just loaded will most likely be used again soon.
Like the optimal algorithm, the LRU algorithm does not suffer from Belady's anomaly. The algorithm uses a linked list to record which pages have gone unused the longest, and this linked list is what makes the cost grow, because the list has to be updated on every page access. The page at the front of the linked list is the most recently used page; the longer a page goes unused, the further toward the back it moves, and the page in the last position is the one unused the longest and the one ready to be swapped out.

source : Kemal Nasir & Renggo Pribadi @CSUI

Friday, 13 April 2012

History of Operating Systems..part.3

The 1960s: The Era of Timesharing and Multiprogramming:
The systems of the 1960s were also batch processing systems, but they were able to take better advantage of the computer resources by running several jobs at once. They contained many peripheral devices such as card readers, card punches, printers, tape drives and disk drives. Any one job rarely utilized all of a computer’s resources effectively. It was observed by operating system designers that when one job was waiting for an input-output operation to complete before the job could continue using the processor, some other job could use the idle processor. Similarly, when one job was using the processor, other jobs could be using the various I/O devices. The operating system designers realized that running a mixture of diverse jobs appeared to be the best way to optimize computer utilization. The process by which they do so is called multiprogramming, in which several users simultaneously compete for system resources. The job currently waiting for I/O will yield the CPU to another job ready to do calculations if another job is waiting. Thus, both input/output and CPU processes can occur simultaneously. This greatly increased CPU utilization and system throughput. To take maximum advantage of multiprogramming, it is necessary for several jobs to reside in the computer’s main storage at once. Thus, when one job requests input/output, the CPU may be immediately switched to another, and may do calculations without delay. As a result, multiprogramming required more storage than a single-job system. The operating systems of the 1960s, while being capable of doing multiprogramming, were limited by the memory capacity. This led to various designs of multiprogramming, such as variable-partition multiprogramming, that helped to utilize the storage capacity much more efficiently (Smith, 1980).

In the late 1950s and 1960s, under the batch processing mode, users were not normally present in the computing facility when their jobs were run. Jobs were generally submitted on punched cards and magnetic tapes. The jobs would remain in the input tables for hours or even days until they could be loaded into the computer for execution. The slightest error in a program, even a missing period or comma, would “dump” the job, at which point the user would correct the error, resubmit the job, and once again wait hours or days before the next execution of the job could be attempted. Software development in such an environment was a particularly slow process (Weizer, 1981).

University environments provided a fertile ground for dealing with such limitations. Student programs tended not to be uniform from week to week, or from one student to another, and it was important that students received clear messages about what kinds of errors they made. In 1959-1960, a system called MAD (Michigan Algorithmic Decoder) was developed at the University of Michigan. MAD was based on ALGOL, but unlike ALGOL, it took care of the details of running a job in ways that few other languages could do.
MAD offered fast compilation, essential for a teaching environment, and it had good diagnostics to help students find and correct errors. These qualities made the system attractive not only to the student programmer but also to various researchers on the University of Michigan campus (Rosin, 1969). While there were attempts to provide more diagnostics and error-correcting mechanisms by groups such as those at the University of Michigan, another group tried to develop systems that would allow greater access to the computing systems and reduce the waiting time for jobs to execute. One of the major developments in this direction was the timesharing system, which enabled many users to share computer resources simultaneously. In the timesharing mode, the computer spends a fixed amount of time on one program before proceeding to another. Each user is allocated a tiny slice of time (say, two milliseconds). The computer performs whatever operations it can for that user in the allocated time and then uses the next allocated time for the other users. What made such a concept possible was the gap, of at least a few milliseconds, between a user’s keystrokes, during which a computer could fetch and execute dozens, perhaps hundreds, of simple instructions. The few seconds a user might pause to ponder the next command to type in was time enough for a computer, even in those days, to let another user’s job execute, while giving the illusion to each user that the complete machine (including I/O devices) and its software were at his or her disposal. Although this concept seems similar to multiprogramming, in multiprogramming the computer works on one program until it reaches a logical stopping point, such as an input/output event, while in a timesharing system, every job is allocated a specific small time period (Laudon & Laudon, 1997).

MIT’s Department of Electrical Engineering was one of the pioneers of the timesharing system under the guidance of John McCarthy, Robert Fano and Fernando Corbato. Since 1957, it had been running an IBM 704 computer in a batch-processing mode. However, teaching programming and developing software were very difficult given the long turnaround time, the time between the submission of a job and the return of results, of hours and even days. This motivated them to develop a system that would reduce the turnaround time substantially. This led MIT to implement the first timesharing system in November 1961, called CTSS – Compatible Time-Sharing System. The demonstration version allowed just three users to share the computer at a particular time. It reduced the turnaround time to minutes and later to seconds. It demonstrated the value of interactive computing, as the timesharing system was also called (Crisman, 1964).


Timesharing systems helped facilitate the software development process significantly. With turnaround time reduced to minutes, a person writing a new program no longer had to wait hours or days to correct errors. With timesharing, a programmer could enter a program, compile it, receive a list of syntax errors, correct them immediately, and repeat this cycle until the program was free of syntax errors, thereby reducing development time significantly (Crisman, 1964). Within a year of MIT’s successful demonstration, several other universities, research organizations and manufacturers, noting the advantages of timesharing systems, had begun to develop their own. Many of these systems further evolved into the next generation of operating systems. For example, MIT developed the Multics operating system as the successor of CTSS. Multics, although not successful itself, gave rise to perhaps the most versatile operating system existing even today – the UNIX system. In 1964, IBM also developed the CP/CMS system at its Cambridge Scientific Center, a timesharing system for its new System/360 mainframe, which eventually became the major operating system – the VM operating system – for its System/360 and System/370 computers (Weizer, 1981).


source : Prof. Tim Bergin

History of Operating Systems..part.2

Early History: The 1940s and the 1950s:
In the 1940s, the earliest electronic digital systems had no operating systems. Computers of this time were so primitive compared to those of today that programs were often entered into the computer one bit at a time on rows of mechanical switches. Eventually, machine languages (consisting of strings of the binary digits 0 and 1) were introduced that sped up the programming process (Stern, 1981). The systems of the 1950s generally ran only one job at a time, allowing only a single person at a time to use the machine. All of the machine’s resources were at the user’s disposal. Billing for the use of the computer was straightforward - because the user had the entire machine, the user was charged for all of the resources whether or not the job used these resources. In fact, usual billing mechanisms were based upon wall clock time. A user was given the machine for some time interval and was charged a flat rate.


Originally, each user wrote all of the code necessary to implement a particular application, including the highly detailed machine level input/output instructions. Very quickly, the input/output coding needed to implement basic functions was consolidated into an input/output control system (IOCS). Users wishing to perform input/output operations no longer had to code the instructions directly. Instead, they used IOCS routines to do the real work. This greatly simplified and sped up the coding process. The implementation of the input/output control system may have been the beginning of today’s concept of operating system. Under this system, the user has complete control over all of main storage, and as a result this system has been known as the single-user contiguous storage allocation system. Storage is divided into a portion holding the input/output control system (IOCS) routines, a portion holding the user’s program, and an unused portion (Milenkovic, 1987).


Early single-user real storage systems were dedicated to one job for more than the job’s execution time. Jobs generally required considerable setup time during which the operating system loaded, tapes and disk packs were mounted, appropriate forms were placed in the printer, time cards were “punched in,” etc. When jobs completed, they required considerable “teardown” time as tapes and disk packs were removed, time cards were “punched out,” etc. During job setup and job teardown, the computer sat idle. Users soon realized that they could cut down the amount of time wasted between the jobs if they could automate the job-to-job transition. The first major such system, considered by many to be the first operating system, was designed by the General Motors Research Laboratories for their IBM 701 mainframe beginning in early 1956 (Grosch, 1977). Its success helped establish batch computing – the grouping of jobs into a single deck of cards, separated by control cards that instructed computers about the various specifications of the job. The programming language that the control cards used was called job control language (JCL). These job control cards set up the job by telling the computer whether the cards following it contain data or programs, what programming language is used, the approximate execution time, etc. When the current job terminated, the job stream reader automatically read in the control language statements for the next job and performed appropriate housekeeping chores to facilitate the transition to the next job. Batch processing greatly improved the use of computer systems and helped demonstrate the real value of operating systems by managing resources intensely. This type of processing, called single-stream batch processing, became the state of the art in the early 1960s (Orchard-Hays, 1961).

source : Prof. Tim Bergin

History of Operating Systems..part.1

Operating systems are the software that makes the hardware usable. Hardware provides “raw computing power.” The operating system makes the computing power conveniently available to users, by managing the hardware carefully to achieve good performance. Operating systems can also be considered managers of resources. An operating system determines which computer resources will be utilized for solving which problem and the order in which they will be used. In general, an operating system has three principal types of functions.

  •  Allocation: assignment of system resources such as input/output devices, software, the central processing unit, etc.
  •  Scheduling: This function coordinates resources and jobs and follows certain given priority. 
  • Monitoring: This function monitors and keeps track of the activities in the computer system. It maintains logs of job operation and notifies end-users or computer operators of any abnormal terminations or error conditions. This function also includes security monitoring features, such as flagging any unauthorized attempt to access the system, and ensures that all the security safeguards are in place (Laudon and Laudon, 1997).

Throughout the history of computers, the operating system has continually evolved as the needs of the users and the capabilities of the computer systems have changed. As Weizer (1981) has noted, operating systems have evolved since the 1940s through a number of distinct generations, which roughly correspond to the decades. Although this observation was made in 1981, it is still roughly valid after two decades. In this paper, we shall also follow a similar approach and discuss the history of operating systems roughly along the decades.


source : Prof. Tim Bergin

Wednesday, 11 April 2012

INTRODUCTION TO OPERATING SYSTEMS

An operating system is a collection of programs that acts as an intermediary or link between users and software on one side and the computer hardware on the other, so that the computer system is easy to use and the computer's resources can be used efficiently.


Why Study Operating Systems

  • It is a fundamental subject in the education of computer science and informatics.
  • Engineers and computer/informatics scientists must understand the operating system, because the operating system is to the computer as the spirit is to a human being.
Components of Computer Systems
  1. Hardware: CPU, memory, I/O devices.
  2. Operating system: controls and manages the use of computer resources.
  3. Application programs: compilers, database systems, business programs, games, etc.
  4. Users: humans, machines, and other computers.
Operating System
  • Controls the hardware, handles the allocation of resources, and protects applications from accessing the hardware directly.
  • The kernel is the "heart" of the operating system, the part that must always be running so that the operating system stays alive.
Objectives and tasks of the operating system
  1. Manage all the resources of the computer system.
  2. Provide a set of services to users, making it easier and more comfortable for them to use or utilize the resources of the computer system.
Manager of All Computer System Resources
  • Physical resources: keyboard, mouse, joystick, disk drives, CD-ROM, hard disk, printer, monitor, modem, Ethernet card, multimedia equipment, etc.
  • Abstract resources (data and programs).
Given the number of resources managed by the operating system:
  • It is difficult for programmers to write programs that work with these resources directly.
  • To make programming simpler and allow programs to use every kind of resource, an operating system is required.
Main Components of Operating Systems
  1. Process management
  2. Memory management
  3. Input/output management
  4. File management
  5. Protection system
  6. Networking
  7. Command-interpreter system
Services provided by the operating system
  1. Program execution
  2. I/O operations
  3. File-system manipulation
  4. Communications
  5. Error detection
  6. Resource allocation
  7. Accounting
  8. Protection
The Position of the Operating System in a Computer System
  1. For ordinary users: the computer system is seen as a set of applications for solving their problems. Ordinary users only need to know the command language used to invoke or run the application programs they use.
  2. For programmers: the system is seen through the programming languages and interfaces used to build application programs.
  3. For operating-system designers: the operating system is in charge of making the computer hardware appear attractive, easy, and convenient for the programmer.
Basic Structure of Operating Systems
  1. Monolithic system
    The operating system is a set of procedures, each of which can be called by other procedures when necessary.
  2. Layered system
  3. Systems with virtual machines
  4. Process collection system
  5. Object-oriented systems
  6. Client server system

Source : Sistem Operasi, Ir. Bambang Hariyanto.