Why Objects?

by Jecel Mattos de Assumpcao Jr.
published in Byte Brasil, September 1993, Vol 4 #9, pages 69-71
translated into English and made available here with permission from Byte Brasil
An impressive number of magazine covers, articles and books have been dedicated to Object Oriented Programming (OOP) in the last few years. So, is there anything left to say about it? The answer, unfortunately, is a resounding YES, as most programmers still have only a rather vague idea of what it all means.

Almost all papers about OOP talk about how it works - classes, subclasses, polymorphism, etc. These things are fundamental, but they don't help programmers, system administrators and users in general understand WHY OOP works.

Information technology, as its name indicates, works with information. Traditionally, information is divided into data ( that indicate system state ) and programs ( information about how to manipulate data - how to transform one state into another ). There is an interesting side effect: programs manipulate information, but programs are information; so we can create programs that manage programs - the software development systems. The use of programs to write programs allows programming techniques to move beyond abstract philosophy and become the basis for programming languages, operating systems, user interfaces and much more.
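The "programs are information" idea can be sketched in a few lines of a modern language like Python ( an anachronism for a 1993 article, used here purely as an illustration ): one program builds another program from a plain string of text and then runs it.

```python
# One program creating and running another: the source code of the
# second program is just data ( a string ) to the first.
source = "def double(x):\n    return 2 * x\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)

# The generated function now exists and can be called like any other.
result = namespace["double"](21)
print(result)  # 42
```

Compilers, debuggers and code generators are all elaborations of this same trick: treating programs as data to be manipulated.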

OOP insists that a program is intrinsically dependent on the data it manipulates and should be bundled with it in a package called an "object". Of the advanced programming techniques ( Logic Programming, Functional Programming and OOP ), objects are by far the closest to traditional Structured Programming and, especially, Modular Programming. This familiarity is a major attraction of the technique, but it is also its main problem. How many think they have understood OOP because they have replaced RECORD with OBJECT in Pascal, or now use "//" to start comments in C++? The initial enthusiasm, fueled by exaggerated promises ( a little like: OOP will end hunger in the world! ), soon gives way to a huge letdown - is this all there is to it? Anyone who thinks that it is just a simple name switch ( methods instead of subroutines and classes instead of types ) hasn't understood what OOP is all about yet. Maybe the three ideas explained here can help convince those "vaccinated" against OOP to give it a second chance.


One of OOP's great advantages, not always appreciated, is the ability to give everything a name. Whether it is a symbolic name, a memory address, screen coordinates or a table index, there is some way to identify an object and everything it represents. All information related to a certain concept ( the letter "F", for example, or the color blue ) can be found indirectly starting with this name. A traditional system might have, in theory, the same information ( only spread out in several parts or even over a few programs ), but because it lacks a single name for it, its automatic handling is impossible.

We can find an analogy in the area of computer illustration: there are painting programs and the ( so called ) object oriented drawing applications. The former work with colored pixels, while the latter work with geometric figures. The images that can be produced with products from the two categories are the same, and even their user interfaces can look alike to a novice user. To make a red circle on the screen, the paint program simply turns on the right pixels, while the drawing program will store the position, color and size in memory and then turn on the pixels necessary to show it on the monitor. The result is visually the same, but if we decide to move the circle, resize it or make it transparent, we will have a lot of work ahead of us with the paint application, as there is no way to refer to the collection of dots by a name that the computer "knows". The drawing program, on the other hand, will highlight the circle's border when we point to it, indicating that it has understood the visual name "this object under the cursor". So any of the above mentioned operations is simple and fast. Of course, paint programs have their good points too, or they would no longer be in use ( just try to make a circle that looks like it was bitten ), but there our analogy no longer holds.
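The drawing-program side of the analogy can be sketched as an object ( in Python, with purely illustrative names ): the circle remembers its own position, size and color, so moving it is just a matter of asking it to update its own state.

```python
class Circle:
    """A figure that "knows" its own position, size and color."""
    def __init__(self, x, y, radius, color):
        self.x, self.y = x, y
        self.radius = radius
        self.color = color

    def move(self, dx, dy):
        # Moving is trivial: update the state, and the screen can be
        # redrawn from it. A paint program has no such state to update,
        # only anonymous pixels.
        self.x += dx
        self.y += dy

red_circle = Circle(10, 10, 5, "red")
red_circle.move(3, -2)
print(red_circle.x, red_circle.y)  # 13 8
```

The name `red_circle` is exactly the handle the paint program lacks: everything about the figure can be reached through it.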

Certain ancient peoples believed that whoever found out a person's name gained magical powers over them. In the world of computers, this is an undeniable truth. OOP makes possible commands like "duplicate object 9807", "how many text objects exist in the system?", "show the algorithm used to sort dates" and many others that have no equal in traditional programming.
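A hedged sketch of such commands, assuming a simple registry of instances ( the class and method names here are invented for the illustration ): because every object has an identity the system knows, a question like "how many text objects exist?" becomes a one-liner.

```python
class TextObject:
    _all = []  # the registry: the system "knows" every instance

    def __init__(self, contents):
        self.contents = contents
        TextObject._all.append(self)

    def duplicate(self):
        # "duplicate object 9807": any object can be copied via its name
        return TextObject(self.contents)

a = TextObject("hello")
b = TextObject("world")
c = a.duplicate()

print(len(TextObject._all))  # 3 text objects exist in the system
```

Environments like Smalltalk maintain such registries for every class, which is what makes system-wide queries like these possible.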


The movie TRON, from Disney studios, showed an adventure "inside the computer" where the hero met programs in the form of people. This anthropomorphism was certainly not inspired by the computers then available, but it is strangely appropriate for the world of objects. Simula, the first object oriented programming language, was an extension of Algol for programming, as its name says, simulations. Any system is more easily simulated by a program if it is structured in a similar way to the real system, composed of objects that correspond to the physical objects. The software objects work by sending messages to other objects, and so they look more like intelligent beings working together than like salaries, checks and other inanimate simulation objects ( and every program can be seen as a simulation and use its techniques - another reason why OOP works so well in practice ).

For thousands of years, complex human organizations have been developed and made to work. Though there are other models for large systems, this one has the advantage of familiarity. Traditional programming is very much like writing a paper like this one, but OOP is more similar to creating a play. The result is much better if the author can really place himself in each character's shoes, making each one come alive. When we create new objects, they become our helpers: after they "learn their role", they go about their work independently while we worry about another part of the system.

The analogy between object oriented programming and human organizations raises the following question: is there no way to avoid bureaucracy? OOP has a reputation for being inefficient in its use of computational resources, and the greater cost of message passing ( in certain implementations ) compared to traditional subroutine calls is only part of the problem. A programmer who studies a system ( written in Smalltalk, for example ) for the first time is struck by how an object handles a message by sending new messages to other objects that, in their turn, send still more messages. The frustration is understandable - when does anything really happen? In Smalltalk's case this process ends when one of the dozens of primitive ( basic ) functions is called. And what about this huge volume of messages back and forth? Isn't it all just useless overhead?

In a big company, the president decides to buy some other firm's stock to implement a strategy that he is brewing. He explains his decision to the financial director, and she phones an agent and then writes a check for the correct amount. She gives this check to a secretary, who fills out a deposit slip and gives both to the office-boy. He takes them to the bank and hands them to the teller who, pressing some keys on her terminal, starts a complex process that doesn't really matter here. A careful examination of this whole process shows that the teller's actions are what "really make things happen", but it would hardly be reasonable to expect the system to be more efficient if the company's president went personally to the bank and did everything himself ( a very big problem with our analogy is that humans work in parallel, while a processor in a computer can only animate one object at a time. In the future, of course, computers may have many processors and eliminate this difference, but that is another story... ). Our company president might not know exactly what happens when he wants to buy some stock, just that the result is the expected one. In the same way, the bank teller doesn't have to understand her terminal's internal details, nor how the deposit she is making will help a company to grow. A city could be defined as a collection of bricks, but who can fully understand something so complex at such a detailed level? Objects are used to break problems up into blocks of a scale we can assimilate. We can understand each block individually by imagining ourselves in its place. We can understand how the parts fit together to build the whole.
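The chain of delegation in this story can be sketched directly as objects sending messages ( all names here are invented for the illustration ); only the last object in the chain performs the primitive operation that actually changes any state.

```python
class Teller:
    def deposit(self, account, amount):
        account["balance"] += amount  # the "primitive": state finally changes

class Secretary:
    def __init__(self, teller):
        self.teller = teller

    def send_deposit(self, account, amount):
        self.teller.deposit(account, amount)  # forwards a simpler message

class Director:
    def __init__(self, secretary):
        self.secretary = secretary

    def invest(self, account, amount):
        # The director doesn't know how deposits work, only whom to ask.
        self.secretary.send_deposit(account, amount)

account = {"balance": 100}
Director(Secretary(Teller())).invest(account, 50)
print(account["balance"])  # 150
```

The intermediate messages are not waste: each level hides details the level above should not have to know about.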

A C program reflects the structure of the hardware on which it will run. An object oriented program, on the other hand, is closer to the programmer, though this comes at a certain cost ( which is being drastically reduced by new technologies being developed and which is becoming less important as the performance of the hardware grows ). With our ambitions in terms of software systems growing daily, we shall soon find out that a program that is a little slow is better than one that doesn't exist.

Pre-fab Software

Some people like to point with pride to a system and say: "This has three million lines of code!". This sounds more like a nightmare to me. If it were 300 thousand lines, I would breathe easier. If I were told that, of those, only 3 thousand had to be written and the rest was borrowed from other projects, I would happily answer that that was my kind of program. To really be the way I like it, most of those 3,000 lines would be comments!

A hardware designer starts his work by studying catalogs full of ready made chips; the programmer, by staring at a cursor blinking on an empty screen. Is the current relation between the state of the art in software and that in hardware any surprise? The programmer is like a painter who has to mix his own paints, or a plumber who insists on making his own parts from raw iron and copper. Lines of code that don't need to be written are lines of code that don't need to be thought out, typed, tested, rewritten, etc...

But haven't we been using software libraries since the dawn of computing? This is better than nothing, but very limited. The use of names, already mentioned, as well as techniques like polymorphism, inheritance and others described in so many texts, enables OOP to allow software reuse on a completely new scale. Brad Cox, Objective C's creator, described the idea of "software integrated circuits" - an excellent analogy. Software reuse isn't just a technical problem, but also an economic, organizational and even cultural one. It is very frustrating for someone who has always done everything himself to put up with components that embody other people's ideas and styles and that never fit perfectly in the current project. Anyone who uses microprocessors, memories or even pre-made drains has to put up with the same limitations, but is already used to it. Only with ready-made blocks bought and reused will we be able to speak of a software "industry" instead of the current cottage production.
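A minimal sketch of how inheritance and polymorphism enable this kind of reuse ( in Python, with illustrative names ): a new component is built from an existing one in two lines, and client code uses both without ever asking which is which.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # Reuse through inheritance: a Square is defined in two lines,
    # borrowing everything else from Rectangle.
    def __init__(self, side):
        super().__init__(side, side)

def total_area(shapes):
    # Polymorphism: the same "area" message is sent to every shape,
    # whatever its class; each object answers in its own way.
    return sum(shape.area() for shape in shapes)

print(total_area([Rectangle(2, 3), Square(4)]))  # 22
```

Adding a new kind of shape later requires no change at all to `total_area` - which is exactly what makes such components composable, like chips in a catalog.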

Object oriented programming is a vast subject. The aspects presented here ( names, anthropomorphism and software reuse ) are only a few of the reasons why this technique is so promising for the future of computers.


please send comments to jecel@lsi.usp.br (Jecel Mattos de Assumpcao Jr).