Problem Solving and Programming Logic

Computer Programs: The Power of Logic

A single program addresses a particular problem. When you write a program, you are solving a problem.

To solve a problem, you must use your powers of logic to develop an algorithm, or procedure, for solving it.

An algorithm is a finite set of step-by-step instructions that converts the input into the desired output; that is, it solves the problem.
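As a minimal sketch of the idea (a hypothetical example written in Python; the problem and the function name are illustrative, not taken from the text), the algorithm below converts an input, a list of exam scores, into the desired output, their average, in a finite number of steps:

    # A minimal algorithm sketch: convert the input (a list of scores)
    # into the desired output (their average).
    def average(scores):
        total = 0                       # step 1: start a running total at zero
        for score in scores:            # step 2: add each score to the total
            total = total + score
        return total / len(scores)      # step 3: divide the total by the count

    print(average([70, 85, 90]))        # prints 81.66...

Any such procedure that always terminates and produces the correct output qualifies as an algorithm, whatever language it is eventually coded in.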

Structured Program Design: Divide and Conquer

Before structured design techniques took hold, programmers given a task were left on their own to create a solution any way they could.

Three major problems arose from this free-form method:
1. Long development time,
2. High maintenance cost, and
3. Low-quality software.

Structured programming stresses the systematic design and management of the program development process.

Consider a common programming problem: the printing of weekly payroll checks for hourly and commission employees. A structure chart can be used to break this programming problem into a hierarchy of tasks. The most effective programs are designed to be written in modules, or independent tasks.

By using the principles of structured programming, it is much easier to address a complex programming problem as a set of small, manageable modules than as one big task.

In structured programming, the logic of the program is addressed hierarchically in logical modules.

By dividing the program into modules, the structured approach to programming reduces the complexity of the programming task.
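A rough sketch of that decomposition for the payroll problem described above, written in Python with hypothetical module names and illustrative pay rules (the structure, not the arithmetic, is the point):

    # Hypothetical module decomposition of the weekly payroll problem.
    # Each function is one module; main() is the main logic module.

    def compute_hourly_pay(hours, rate):
        # One task: gross pay for an hourly employee (illustrative rule).
        return hours * rate

    def compute_commission_pay(sales, commission_rate):
        # One task: gross pay for a commission employee (illustrative rule).
        return sales * commission_rate

    def print_check(name, amount):
        # One task: produce the weekly check.
        print(f"Pay to {name}: ${amount:.2f}")

    def main():
        # The main logic module controls the execution of the other modules.
        print_check("Hourly employee", compute_hourly_pay(40, 15.00))
        print_check("Commission employee", compute_commission_pay(8000, 0.05))

    main()

Each module can be written and tested on its own, and the main module simply coordinates them, which is what makes the overall task easier to manage.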

Some programs are so complex that, if tackled as a single task, they might never be completed at all.

Goals of structured programming:

Decrease program development time by increasing programmer productivity and reducing the time needed to test and debug a program.

Decrease program maintenance costs by reducing errors and making program code easier to understand.

Improve the quality of software by providing programs with fewer errors.

Structured programming accomplishes these goals by incorporating these concepts:

  1. Top-down design and use of modules.
  2. Use of limited control structures (sequence, selection, and repetition), as shown in the sketch after this list.
  3. Management control.
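The limited control structures in item 2 can be illustrated with a short, generic Python sketch (a hypothetical example, not a program from the text; the overtime rule is made up for illustration):

    # Sequence: statements execute one after another.
    hours = 45
    rate = 12.50

    # Selection: choose between alternative paths.
    if hours > 40:
        pay = 40 * rate + (hours - 40) * rate * 1.5   # illustrative overtime rule
    else:
        pay = hours * rate

    # Repetition: repeat a block while a condition holds.
    week = 1
    while week <= 4:
        print(f"Week {week} pay: {pay:.2f}")
        week = week + 1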

Top-down design starts with the major functions involved in a problem and divides them into subfunctions until the problem has been divided as much as possible.

Each unit is small enough to be programmed by an individual programmer in the required time frame.

This forces an examination of all aspects of a problem on one level before considering the next level.

A programmer is left with small groups, or modules, of processing instructions, which are easy to understand and code.

A program consists of a main logic module that controls the execution of the other modules in the program.

Working from the top down avoids solutions that deal with only part of a problem.

A program that uses a main logic module to control smaller modules is easier to read, test, and maintain.

In structured programming, modules ensure these qualities by:

- having only one entrance and one exit

- performing only one program function

- returning control to the module from which it was received
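As a small illustration, a module written to these rules might look like the following Python sketch (the function name and the tax rule are hypothetical):

    def compute_net_pay(gross, tax_rate):
        # One entrance (the call) and one program function (net pay).
        net = gross - gross * tax_rate
        return net                        # one exit: control returns to the caller

    # Control comes back to the calling module right here.
    print(compute_net_pay(500.00, 0.20))  # prints 400.0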

Programming In Perspective

What Is Computer Programming?

Computer programming involves writing instructions and giving them to the computer so it can complete a task.

A computer program, or software, is a set of instructions written in a computer language, intended to be executed by a computer to perform a useful task.

Application packages such as word processors, spreadsheets, and database management systems are computer programs.

A programmer is an individual who translates the tasks that you want a computer to accomplish into a form the computer understands.

What Are The Qualities of a Well-Designed Program?

- Correct and accurate
- Easy to understand
- Easy to maintain and update
- Efficient
- Reliable
- Flexible

A Brief Overview of Mac OS


Mac OS is Apple Computer's operating system for Apple Macintosh computers. Mac OS was the first commercially successful operating system which used a graphical user interface. The Macintosh team included Bill Atkinson and Jef Raskin. There are a variety of views on how the Macintosh was developed, and where the underlying ideas originated. While the connection between the Macintosh and the Alto project at Xerox PARC has been established in the historical record, the earlier contributions of Ivan Sutherland's Sketchpad and Doug Engelbart's On-Line System are no less significant. See History of the GUI, and Apple v. Microsoft.

The Mac OS can be divided into two families of operating systems:
- An older and now unsupported "classic" Mac OS (the system that shipped with the first Mac in 1984 and its descendants, culminating with Mac OS 9).
- The newer Mac OS X (pronounced "oh-es-ten"), which incorporates elements of BSD Unix, OPENSTEP, and Mac OS 9. Its low-level UNIX-based foundation, Darwin, is open source.

Classic Mac OS

The "classic" Mac OS is characterized by its total lack of a command line; it is a 100% graphical operating system. Heralded for its ease of use, it is also criticized for its almost total lack of memory management, cooperative multitasking, and susceptibility to extension conflicts. "Extensions" are program modules that extend the operating system, providing additional functionality (such as a networking) or support for a particular device. Some extensions are prone not to work properly together, or only when loaded in a particular order. Troubleshooting Mac OS extensions can be a time-consuming process. The MacOS also introduced a new type of filesystem, which contained two different "forks" for a file. It was innovative at the time for separating out parameters into the resource fork, and raw data in the "data fork". However, it became quite a challenge to interoperate with other operating systems which did not recognize such a system.

The term "Mac OS" was not officially used until 1996 with the release of Mac OS 7.6 - prior to that the Macintosh operating system software was simply known as "The System", or by its version number, e.g. System 6 or System 7. Another common term was "the Toolbox". Apple deliberately played down the existence of the operating system in the early years of the Mac to help make the machine appear more user-friendly and to distance it from other systems such as MS-DOS, which were portrayed as arcane and technically challenging. With Mac, you turned it on, it just worked.

By the late 1990s, it was clear the useful life of this 1980s-era technology was coming to an end, with other more stable multitasking operating systems being developed.

Mac OS X

Mac OS X remedied this situation, bringing Unix-style memory management and preemptive multitasking. Improved memory management allowed more programs to run at once and virtually eliminated the possibility of one program crashing another. It is also the first Mac OS to include a command line, although it is never seen unless a separate "terminal" program is launched. However, since these new features put higher demands on system resources, Mac OS X is only officially supported on G3 and newer processors. (It runs poorly on many early G3 machines). Mac OS X has a compatibility layer for running older Mac applications, but compatibility is not 100%.

Mac OS Technologies

QuickDraw: the imaging model which first provided mass-market WYSIWYG.

Finder: the interface for browsing the filesystem and launching applications.

MultiFinder: the first version to support simultaneously running multiple apps.

Chooser: tool for accessing network resources (e.g., enabling AppleTalk).

ColorSync: technology for ensuring appropriate color matching.

Mac OS memory management: how the Mac managed RAM and virtual memory before the switch to UNIX.

PowerPC emulation of Motorola 68000: how the Mac handled the architectural transition from CISC to RISC (see Mac 68K emulator).

Desk Accessories: small "helper" apps that could be run concurrently with any other app, prior to the advent of MultiFinder or System 7.


New Virus in April

An extraordinary behind-the-scenes struggle is taking place between computer security groups around the world and the brazen author of a malicious software program called Conficker.

The program grabbed global attention when it began spreading late last year and quickly infected millions of computers with software code that is intended to lash together the infected machines it controls into a powerful computer known as a botnet.

Since then, the program's author has repeatedly updated its software in a cat-and-mouse game being fought with an informal international alliance of computer security firms and a network governance group known as the Internet Corporation for Assigned Names and Numbers. Members refer to the alliance as the Conficker Cabal.

The existence of the botnet has brought together some of the world's best computer security experts to prevent potential damage. The spread of the malicious software is on a scale that matches the worst of past viruses and worms, like the I Love You virus. Last month, Microsoft announced a $250,000 reward for information leading to the capture of the Conficker author.

Botnets are used to send the vast majority of e-mail spam messages. Spam in turn is the basis for shady commercial promotions including schemes that frequently involve directing unwary users to Web sites that can plant malicious software, or malware, on computers.

Botnets can also be used to distribute other kinds of malware and generate attacks that can take commercial or government Web sites off-line.

One of the largest botnets tracked last year consisted of 1.5 million infected computers that were being used to automate the breaking of "captchas," the squiggly letter tests that are used to force applicants for Web services to prove they are human.

The inability of the world's best computer security technologists to gain the upper hand against anonymous but determined cybercriminals is viewed by a growing number of those involved in the fight as evidence of a fundamental security weakness in the global network.

"I walked up to a three-star general on Wednesday and asked him if he could help me deal with a million-node botnet," said Rick Wesson, a computer security researcher involved in combating Conficker. "I didn't get an answer."

An examination of the program reveals that the zombie computers are programmed to try to contact a control system for instructions on April 1. There has been a range of speculation about the nature of the threat posed by the botnet, from a wake-up call to a devastating attack.

Researchers who have been painstakingly disassembling the Conficker code have not been able to determine where the author, or authors, is located, or whether the program is being maintained by one person or a group of hackers. The growing suspicion is that Conficker will ultimately be a computing-for-hire scheme. Researchers expect it will imitate the hottest fad in the computer industry, called cloud computing, in which companies like Amazon, Microsoft and Sun Microsystems sell computing as a service over the Internet.

Earlier botnets were devised so they could be split up and rented via black market schemes that are common in the Internet underground, according to security researchers.

The Conficker program is built so that after it takes up residence on infected computers, it can be programmed remotely by software to serve as a vast system for distributing spam or other malware.

Several people who have analyzed various versions of the program said Conficker's authors were obviously monitoring the efforts to restrict the malicious program and had repeatedly demonstrated that their skills were at the leading edge of computer technology.

For example, the Conficker worm already had been through several versions when the alliance of computer security experts seized control of 250 Internet domain names the system was planning to use to forward instructions to millions of infected computers.

Shortly thereafter, in the first week of March, the fourth known version of the program, Conficker C, expanded the number of the sites it could use to 50,000. That step made it virtually impossible to stop the Conficker authors from communicating with their botnet.

"It's worth noting that these are folks who are taking this seriously and not making many mistakes," said Jose Nazario, a member of the international security group and a researcher at Arbor Networks, a company in Lexington, Mass., that provides tools for monitoring the performance of networks. "They're going for broke."

Several members of the Conficker Cabal said that law enforcement officials had been slow to respond to the group's efforts, but that a number of law enforcement agencies were now in "listen" mode.

"We're aware of it," said Paul Bresson, an F.B.I. spokesman, "and we're working with security companies to address the problem."

A report scheduled to be released Thursday by SRI International, a nonprofit research institute in Menlo Park, Calif., says that Conficker C constitutes a major rewrite of the software. Not only does it make it far more difficult to block communication with the program, but it gives the program added powers to disable many commercial antivirus programs as well as Microsoft's security update features.

"Perhaps the most obvious frightening aspect of Conficker C is its clear potential to do harm," said Phillip Porras, a research director at SRI International and one of the authors of the report. "Perhaps in the best case, Conficker may be used as a sustained and profitable platform for massive Internet fraud and theft."

"In the worst case," Mr. Porras said, "Conficker could be turned into a powerful offensive weapon for performing concerted information warfare attacks that could disrupt not just countries, but the Internet itself."

The researchers, noting that the Conficker authors were using the most advanced computer security techniques, said the original version of the program contained a recent security feature developed by an M.I.T. computer scientist, Ron Rivest, that had been made public only weeks before. And when a revision was issued by Dr. Rivest's group to correct a flaw, the Conficker authors revised their program to add the correction.

Although there have been clues that the Conficker authors may be located in Eastern Europe, evidence has not been conclusive. Security researchers, however, said this week that they were impressed by the authors' productivity.

Source: The New York Times

Solaris History

Solaris, the Unix-based operating system developed by Sun Microsystems, displays that company's ability to be innovative and flexible. Solaris, one could argue, is perpetually ahead of the curve in the computer world. Sun continually adapts to the changing computer environment, trying to anticipate where the computer world is going and what will be needed next, and develops new versions of Solaris to take that into account.


Solaris was born in 1987 out of an alliance between AT&T and Sun Microsystems to combine the leading Unix versions (BSD, XENIX, and System V) into one operating system. Four years later, in 1991, Sun replaced its existing Unix operating system (SunOS 4) with one based on SVR4. This new OS, Solaris 2, contained many new advances, including the OpenWindows graphical user interface, NIS+, and Open Network Computing (ONC) functionality, and was specially tuned for symmetric multiprocessing.


This kicked off Solaris' history of constant innovation, with new versions of Solaris being released almost annually over the next fifteen years. Sun was constantly striving to stay ahead of the curve, while at the same time adapting Solaris to the existing, constantly evolving wider computing world. The catalogue of innovations in the Solaris OS is too long to list here, but a few milestones are worth mentioning. Solaris 2.5.1 in 1996 added CDE, the NFSv3 file system and NFS/TCP, expanded user and group IDs to 32 bits, and included support for the Macintosh PowerPC platform. Solaris 2.6 in 1997 introduced the WebNFS file system, Kerberos 5 security encryption, and large file support to increase Solaris' Internet performance.


Solaris 2.7 in 1998 (renamed simply Solaris 7) included many new advances, such as native support for file system metadata logging (UFS logging). It was also the first 64-bit release, which dramatically increased its performance, capacity, and scalability. Solaris 8 in 2000 took it a step further and was the first OS to combine data-centre and dot-com requirements, offering support for IPv6 and IPsec, Multipath I/O, and IPMP. Solaris 9 in 2002 saw the writing on the wall of the server market, dropped OpenWindows in favour of Linux compatibility, and added a Resource Manager, the Solaris Volume Manager, extended file attributes, and the iPlanet Directory Server.


Solaris 10, the current version, was released to the public in 2005 free of charge and with a host of new developments. The latest advances in the computing world are constantly being incorporated into new versions of Solaris 10, released every few months. To mention just a few: Solaris offers ever more compatibility with Linux and IBM systems, has introduced the Java Desktop System based on GNOME, and has added Dynamic Tracing (DTrace), NFSv4, and, in 2006, the ZFS file system.


Also in 2006, Sun set up the OpenSolaris Project. Within the first year, the OpenSolaris community had grown to 14,000 members with 29 user groups globally, working on 31 active projects. As well as displaying a deep commitment to open-source ideals, the project provides Sun with thousands of developers essentially working for free.


The development of the Solaris OS demonstrates Sun Microsystems' ability to be on the cutting edge of the computing world without losing touch with the current computing environment. Sun regularly releases new versions of Solaris incorporating the latest developments in computer technology, while also adding more cross-platform compatibility and incorporating the advances of other systems. The OpenSolaris project is the ultimate display of these twin strengths: Sun has tapped into the creative energy of developers across the world and receives instant feedback about what its audience wants and needs. If all software companies took a lesson from Sun, imagine how exciting and responsive the industry could be.

Linux History

To understand the popularity of Linux, we need to travel back in time. In the early days, computers were as big as a house, sometimes even a stadium, so size and portability were serious problems. Worse still, every computer had a different operating system. Software was always customized to serve a specific purpose, and software written for one system did not run on another; being able to work with one system did not automatically mean you could work with another. It was difficult for both users and system administrators, and those computers were also quite expensive. Technologically the world was not yet advanced enough to shrink the machines, so people had to live with the size for another decade. In the late 1960s, a team of developers at Bell Labs started working on a solution to the software problem, to address these compatibility issues. They developed a new operating system that was simple and elegant, was written in the C programming language instead of assembly language and, most importantly, was able to recycle code. The Bell Labs developers named this project "UNIX".

Unix was developed around a small piece of code called the kernel. The kernel is the only piece of code that needs to be adapted for every specific system, and it forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher-level programming language, C, which was developed especially for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware. This naturally affected the cost of the Unix operating system; vendors would sell the software for as much as ten times its original cost. The source code of Unix, once taught in universities courtesy of Bell Labs, was not published publicly, so developers tried to find an efficient solution to this problem.

A solution seemed to appear in the form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a US-born Dutch professor who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market.

As an operating system, MINIX was not a superb one. But it had the advantage that the source code was available. Anyone who happened to get the book 'Operating Systems: Design and Implementation' by Tanenbaum could get hold of the 12,000 lines of code, written in C and assembly language. For the first time, an aspiring programmer or hacker could read the source code of an operating system, something the software vendors had until then guarded vigorously. A superb author, Tanenbaum captivated the brightest minds of computer science with his elaborate, lively discussion of the art of creating a working operating system. Computer science students all over the world pored over the book, reading through the code to understand the very system that ran their computers.

And one of them was Linus Torvalds, a second-year Computer Science student at the University of Helsinki and a self-taught hacker. MINIX was good, but it was still simply an operating system for students, designed as a teaching tool rather than an industrial-strength one. At that time, programmers worldwide were greatly inspired by the GNU project of Richard Stallman, a software movement to provide free and quality software. Stallman had started his remarkable career in the famous Artificial Intelligence Laboratory at MIT and, during the mid and late seventies, created the Emacs editor.

In the early eighties, commercial software companies lured away many of the brilliant programmers of the AI lab and negotiated stringent nondisclosure agreements to protect their secrets. But Stallman had a different vision. His idea was that, unlike other products, software should be free from restrictions against copying or modification in order to make better and more efficient computer programs. With his famous 1983 manifesto declaring the beginnings of the GNU project, he started a movement to create and distribute software that conveyed his philosophy. (Incidentally, the name GNU is a recursive acronym which actually stands for "GNU's Not Unix".) But to achieve this dream of ultimately creating a free operating system, he needed to create the tools first. So, beginning in 1984, Stallman started writing the GNU C Compiler (GCC), an amazing feat for an individual programmer. With his formidable technical skills, he alone outclassed entire groups of programmers from commercial software vendors in creating GCC, considered one of the most efficient and robust compilers ever created.

Linus himself didn't believe that his creation was going to be big enough to change computing forever. Linux version 0.01 was released by mid-September 1991 and was put on the net. Enthusiasm gathered around this new kid on the block, and code was downloaded, tested, tweaked, and returned to Linus. Version 0.02 came on October 5th.

Further Development

During Linux's development, Linus faced difficulties such as clashes of opinion with other people, for example Tanenbaum, the renowned teacher who wrote MINIX. Tanenbaum wrote to Linus:

"I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design." Linus later admitted that it was the lowest point in his development of Linux. Tanenbaum was certainly a famous professor, and anything he said certainly mattered. But he was wrong about Linux, for Linus was one stubborn guy who never liked defeat. (Tanenbaum also famously remarked that "Linux is obsolete.") Very soon thousands of people formed a community and joined the camp. Powered by programs from the GNU project, Linux was ready for the actual showdown. It was licensed under the GNU General Public License, thus ensuring that the source code would be free for all to copy, study and change. Students and computer programmers grabbed it.

Everyone tried out and edited the source code, and this opened the door for commercial vendors to enter the market. They compiled various software and distributed it together with the operating system in forms people were familiar with. Distributions such as Red Hat and Debian gained a strong response from the outside world. With new graphical interface systems like KDE and GNOME, Linux became popular. The best thing about Linux today is its powerful commands.

Rise of the Desktop Linux

What is the biggest complaint about Linux? Its text mode. Many people are scared off by a command-based interface they do not understand, but once anyone starts learning the commands, the operating system becomes an increasingly interesting subject to explore. Today, very friendly GUIs are also available, adding to its flexibility. Anyone can install Linux without prior experience; everything is well explained at installation time. Most distributions are also available in Live CD format, which users can simply put in their CD drives and boot without installing anything to the hard drive, making Linux accessible to newcomers. The most important point about Linux is that it is open source, so computer users on a low budget can obtain and learn Linux for free.

Linux's Logo - Penguin

The logo of Linux is a penguin, known in the technology world as Tux. Tux, as the penguin is lovingly called, symbolizes the carefree attitude of the whole movement. This cute logo has an interesting history. As Linus tells it, initially no logo had been selected for Linux. Then, while on vacation in the southern hemisphere, Linus encountered a penguin not unlike the current logo. When he tried to pat it, the penguin bit his hand. This amusing incident led to the selection of a penguin as the logo of Linux some time later.

Unix History

Since it began to escape from AT&T's Bell Laboratories in the early 1970's, the success of the UNIX operating system has led to many different versions: recipients of the (at that time free) UNIX system code all began developing their own different versions in their own, different, ways for use and sale. Universities, research institutes, government bodies and computer companies all began using the powerful UNIX system to develop many of the technologies which today are part of a UNIX system.


Computer aided design, manufacturing control systems, laboratory simulations, even the Internet itself, all began life with and because of UNIX systems. Today, without UNIX systems, the Internet would come to a screeching halt. Most telephone calls could not be made, electronic commerce would grind to a halt and there would have never been "Jurassic Park"!


By the late 1970's, a ripple effect had come into play. By now the under- and post-graduate students whose lab work had pioneered these new applications of technology were attaining management and decision-making positions inside the computer system suppliers and among its customers. And they wanted to continue using UNIX systems.


Soon all the large vendors, and many smaller ones, were marketing their own, diverging, versions of the UNIX system optimized for their own computer architectures and boasting many different strengths and features. Customers found that, although UNIX systems were available everywhere, they seldom were able to interwork or co-exist without significant investment of time and effort to make them work effectively. The trade mark UNIX was ubiquitous, but it was applied to a multitude of different, incompatible products.


In the early 1980's, the market for UNIX systems had grown enough to be noticed by industry analysts and researchers. Now the question was no longer "What is a UNIX system?" but "Is a UNIX system suitable for business and commerce?"


Throughout the early and mid-1980's, the debate about the strengths and weaknesses of UNIX systems raged, often fuelled by the utterances of the vendors themselves who sought to protect their profitable proprietary system sales by talking UNIX systems down. And, in an effort to further differentiate their competing UNIX system products, they kept developing and adding features of their own.


In 1984, another factor brought added attention to UNIX systems. A group of vendors concerned about the continuing encroachment into their markets and control of system interfaces by the larger companies, developed the concept of "open systems."


Open systems were those that would meet agreed specifications or standards. This resulted in the formation of X/Open Company Ltd whose remit was, and today in the guise of The Open Group remains, to define a comprehensive open systems environment. Open systems, they declared, would save on costs, attract a wider portfolio of applications and competition on equal terms. X/Open chose the UNIX system as the platform for the basis of open systems.


Although UNIX was still owned by AT&T, the company did little commercially with it until the mid-1980's. Then the spotlight of X/Open showed clearly that a single, standard version of the UNIX system would be in the wider interests of the industry and its customers. The question now was, "which version?".


In a move intended to unify the market in 1987, AT&T announced a pact with Sun Microsystems, the leading proponent of the Berkeley derived strain of UNIX. However, the rest of the industry viewed the development with considerable concern. Believing that their own markets were under threat they clubbed together to develop their own "new" open systems operating system. Their new organization was called the Open Software Foundation (OSF). In response to this, the AT&T/Sun faction formed UNIX International.


The ensuing "UNIX wars" divided the system vendors between these two camps clustered around the two dominant UNIX system technologies: AT&T's System V and the OSF system called OSF/1. In the meantime, X/Open Company held the center ground. It continued the process of standardizing the APIs necessary for an open operating system specification.


In addition, it looked at areas of the system beyond the operating system level where a standard approach would add value for supplier and customer alike, developing or adopting specifications for languages, database connectivity, networking and mainframe interworking. The results of this work were published in successive X/Open Portability Guides.


XPG 4 was released in October 1992. During this time, X/Open had put in place a brand program based on vendor guarantees and supported by testing. Since the publication of XPG4, X/Open has continued to broaden the scope of open systems specifications in line with market requirements. As the benefits of the X/Open brand became known and understood, many large organizations began using X/Open as the basis for system design and procurement. By 1993, over $7 billion had been spent on X/Open branded systems. By the start of 1997 that figure has risen to over $23 billion. To date, procurements referencing the Single UNIX Specification amount to over $5.2 billion.


In early 1993, AT&T sold its UNIX System Laboratories to Novell, which was looking for a heavyweight operating system to link to its NetWare product range. At the same time, the company recognized that vesting control of the definition (specification) and trademark with a vendor-neutral organization would further facilitate the value of UNIX as a foundation of open systems. So the constituent parts of the UNIX system, previously owned by a single entity, are now quite separate.


In 1995 SCO bought the UNIX Systems business from Novell, and UNIX system source code and technology continues to be developed by SCO.


In 1995 X/Open introduced the UNIX 95 brand for computer systems guaranteed to meet the Single UNIX Specification. The Single UNIX Specification brand program has now achieved critical mass: vendors whose products have met the demanding criteria now account for the majority of UNIX systems by value.


For over ten years, since the inception of X/Open, UNIX had been closely linked with open systems. X/Open, now part of The Open Group, continues to develop and evolve the Single UNIX Specification and associated brand program on behalf of the IT community. The freeing of the specification of the interfaces from the technology is allowing many systems to support the UNIX philosophy of small, often simple tools that can be combined in many ways to perform often complex tasks. The stability of the core interfaces preserves existing investment, and is allowing development of a rich set of software tools. The Open Source movement is building on this stable foundation and is creating a resurgence of enthusiasm for the UNIX philosophy. In many ways Open Source can be seen as the true delivery of Open Systems that will ensure it continues to go from strength to strength.

OS/2

A family of multitasking operating systems for x86 machines from IBM. OS/2 Warp is the client version, and Warp Server is the server version. With add-ons, DOS and Windows applications can also be run under OS/2 (see Odin). The server version includes advanced features such as the journaling file system (JFS) used in IBM's AIX operating system. Like Windows, OS/2 provides a graphical user interface and a command line interface. See OS/2 Warp, Warp Server and eComStation.

Although highly regarded as a robust operating system, OS/2 never became widely used. However, it has survived in the banking industry, especially in Europe, and many ATM machines in the U.S. have continued to run OS/2 due to its stability.

Features

OS/2 includes Adobe Type Manager for rendering Type 1 fonts on screen and providing PostScript output on non-PostScript printers. OS/2's dual boot feature allows booting up into OS/2 or DOS.
The OS/2 Workplace Shell graphical user interface is similar to Windows and the Macintosh. It was originally known as Presentation Manager (PM); after Version 2.0, PM referred to the programming interface (API), not the GUI itself.

Evolution

The first versions of OS/2 were single-user operating systems written for 286s and jointly developed by IBM and Microsoft. Starting with Version 2.0, versions were written for 32-bit 386s and up and were solely the product of IBM. The following summarizes some of that evolution:

OS/2 16-bit Version 1.x

The first versions (1.0, 1.1, etc.) were written for the 16-bit 286. DOS compatibility was limited to about 500K. Version 1.3 (OS/2 Lite) required 2MB RAM instead of 4MB and included Adobe Type Manager. IBM's Extended Edition version included Communications Manager and Database Manager.

OS/2 32-bit Version 2.x - IBM

Introduced in April 1992, this 32-bit version for 386s from IBM multitasked DOS, Windows and OS/2 applications. Data could be shared between applications using the clipboard and between Windows and PM apps using the DDE protocol. Version 2.x provided each application with a 512MB virtual address space that allowed large tasks to be easily managed.
Version 2.1 supported Windows' Enhanced Mode and applications could take full advantage of Windows 3.1. It also provided support for more video standards and CD-ROM drives.
Communications and database management for OS/2 were provided by Communications Manager/2 (CM/2) and Database Manager/2 (DB2/2). CM/2 replaced Communications Manager, which was part of OS/2 2.0's Extended Services option.

OS/2 32-bit Version 3 - IBM

In late 1994, IBM introduced Version 3 of OS/2, renaming it OS/2 Warp. The first version ran in only 4MB of memory and included a variety of applications, including Internet access.

Windows NT - Microsoft

Originally to be named OS/2 Version 3.0, this 32-bit version from Microsoft was renamed "Windows NT" and introduced in 1993. See Windows NT.
 