Tuesday, December 16, 2008

Default gateway

A gateway is a node (a router) on a computer network that serves as an access point to another network.
A default gateway is the node on the computer network that is used when the destination IP address does not match any other route in the routing table.
In homes, the gateway is usually the ISP-provided device that connects the user to the Internet, such as a DSL or cable modem.
In enterprises, however, the gateway is the node that routes traffic from a workstation to another network segment. The default gateway is commonly the node connecting the internal networks to the outside network (the Internet). In such a situation, the gateway node can also act as a proxy server and a firewall. The gateway is typically associated with both a router, which uses headers and forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the packet into and out of the gateway.
In other words, it is both the entry point and the exit point of a network.
Usage
A default gateway is used by a host when an IP packet's destination address belongs to someplace outside the local subnet (and therefore cannot be delivered directly on the local link). The default gateway address is usually an interface belonging to the LAN's border router.
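To make the rule concrete, here is a minimal Python sketch of the decision a host makes for each outgoing packet; the subnet, gateway, and destination addresses are made-up examples:

    import ipaddress

    local_subnet = ipaddress.ip_network("192.168.1.0/24")    # example LAN
    default_gateway = ipaddress.ip_address("192.168.1.1")    # example border router interface

    def next_hop(destination):
        """Return the address the host sends the packet to first."""
        dest = ipaddress.ip_address(destination)
        if dest in local_subnet:
            return dest               # on the local subnet: deliver directly
        return default_gateway        # everything else: hand it to the default gateway

    print(next_hop("192.168.1.42"))   # 192.168.1.42 (delivered directly)
    print(next_hop("8.8.8.8"))        # 192.168.1.1  (forwarded via the default gateway)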

Applications affected by NAT

Some Application Layer protocols (such as FTP and SIP) send explicit network addresses within their application data. FTP in active mode, for example, uses separate connections for control traffic (commands) and for data traffic (file contents). When requesting a file transfer, the host making the request identifies the corresponding data connection by its network layer and transport layer addresses. If the host making the request lies behind a simple NAT firewall, the translation of the IP address and/or TCP port number makes the information received by the server invalid. The Session Initiation Protocol (SIP), which controls Voice over IP (VoIP) communications, suffers the same problem. SIP may use multiple ports to set up a connection and to transmit the voice stream via RTP. IP addresses and port numbers are encoded in the payload data and must be known prior to the traversal of NATs. Without special techniques, such as STUN, NAT behavior is unpredictable and communications may fail.
Application Layer Gateway (ALG) software or hardware may correct these problems. An ALG software module running on a NAT firewall device updates any payload data made invalid by address translation. ALGs obviously need to understand the higher-layer protocol that they need to fix, and so each protocol with this problem requires a separate ALG.
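As an illustration, here is a rough Python sketch of the kind of payload rewrite an FTP ALG performs on an active-mode PORT command; the private and public addresses and the port mapping are invented for the example, and a real ALG must also adjust TCP sequence numbers if the rewritten payload changes length:

    def rewrite_port_command(line, public_ip, public_port):
        """Rewrite an FTP 'PORT h1,h2,h3,h4,p1,p2' command to carry the NAT's public mapping."""
        if not line.upper().startswith("PORT "):
            return line
        h1, h2, h3, h4 = public_ip.split(".")
        p1, p2 = divmod(public_port, 256)
        return "PORT %s,%s,%s,%s,%d,%d" % (h1, h2, h3, h4, p1, p2)

    # The client behind the NAT announced its private address 192.168.1.10, port 50069:
    original = "PORT 192,168,1,10,195,149"                 # 195*256 + 149 = 50069
    # The ALG substitutes the public address and the port the NAT actually allocated:
    print(rewrite_port_command(original, "203.0.113.5", 60001))
    # -> PORT 203,0,113,5,234,97                           (234*256 + 97 = 60001)
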
Another possible solution is to use NAT traversal techniques based on protocols such as STUN or ICE, or proprietary approaches in a session border controller. NAT traversal is possible in both TCP- and UDP-based applications, but the UDP-based technique is simpler, more widely understood, and more compatible with legacy NATs. In either case, the high-level protocol must be designed with NAT traversal in mind, and traversal does not work reliably across symmetric NATs or other poorly behaved legacy NATs.
Other possibilities are UPnP (Universal Plug and Play) or NAT-PMP (used by Apple's Bonjour), but these require the cooperation of the NAT device.
Most traditional client-server protocols (FTP being the main exception), however, do not send layer 3 contact information and therefore do not require any special treatment by NATs. In fact, avoiding NAT complications is practically a requirement when designing new higher-layer protocols today.
NATs can also cause problems where IPsec encryption is applied and in cases where multiple devices such as SIP phones are located behind a NAT. Phones which encrypt their signaling with IPsec encapsulate the port information within the IPsec packet, meaning that NA(P)T devices cannot access and translate the port. In these cases the NA(P)T devices revert to simple NAT operation, so all traffic returning to the NAT is mapped onto one client, causing the service to fail. There are a couple of solutions to this problem: one is to use TLS, which runs on top of TCP and therefore leaves the port numbers visible to the NAT; the other is to encapsulate the IPsec traffic within UDP, the latter being the solution chosen by TISPAN to achieve secure NAT traversal.
NAT port mapping also interacts with the DNS protocol vulnerability announced by Dan Kaminsky on July 8, 2008. To avoid DNS server cache poisoning, it is highly desirable not to translate the UDP source port numbers of outgoing DNS requests from a DNS server behind a firewall that implements NAT. The recommended workaround for the DNS vulnerability is to make all caching DNS servers use randomized UDP source ports. If the NAT function de-randomizes the UDP source ports, the DNS server is made vulnerable again.
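To see what "de-randomizes" means in practice, here is a toy Python sketch contrasting a resolver that picks random UDP source ports with a NAT that allocates external ports sequentially; the port ranges and numbers are arbitrary and purely illustrative:

    import random

    rng = random.SystemRandom()

    def resolver_source_port():
        """The resolver picks an unpredictable ephemeral port for each query."""
        return rng.randint(1024, 65535)

    class SequentialNat:
        """A NAT that de-randomizes ports by handing them out in order."""
        def __init__(self, start=30000):
            self.next_port = start
        def translate(self, source_port):
            mapped = self.next_port
            self.next_port += 1
            return mapped

    nat = SequentialNat()
    for _ in range(3):
        inside = resolver_source_port()
        outside = nat.translate(inside)
        print("inside port %5d -> outside port %d" % (inside, outside))
    # The outside ports come out as 30000, 30001, 30002... and are easy to guess,
    # which re-opens the cache-poisoning attack that randomization was meant to stop.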

NAT and TCP/UDP

"Pure NAT", operating on IP alone, may or may not correctly parse protocols that are totally concerned with IP information, such as ICMP, depending on whether the payload is interpreted by a host on the "inside" or "outside" of translation. As soon as the protocol stack is climbed, even with such basic protocols such TCP and UDP, the protocols will break unless NAT takes action beyond the network layer.
IP has a checksum in each packet header, which provides error detection only for the header. IP datagrams may become fragmented and it is necessary for a NAT to reassemble these fragments to allow correct recalculation of higher level checksums and correct tracking of which packets belong to which connection.
The major transport layer protocols, TCP and UDP, have a checksum that covers all the data they carry, as well as the TCP/UDP header, plus a "pseudo-header" that contains the source and destination IP addresses of the packet carrying the TCP/UDP header. For an originating NAT to pass TCP or UDP successfully, it must recompute the TCP/UDP header checksum based on the translated IP addresses, not the original ones, and put that checksum into the TCP/UDP header of the first packet of the fragmented set of packets. The receiving NAT must recompute the IP checksum on every packet it passes to the destination host, and also recognize and recompute the TCP/UDP checksum using the retranslated addresses and pseudo-header. This is not a completely solved problem. One solution is for the receiving NAT to reassemble the entire segment and then recompute a checksum calculated across all packets.
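The following Python sketch shows why the translator has to get involved: the transport checksum is computed over a pseudo-header that includes the source and destination IP addresses, so rewriting either address changes the correct checksum value. The addresses and the placeholder segment are arbitrary examples, not a full TCP implementation:

    import socket
    import struct

    def internet_checksum(data):
        """RFC 1071 one's-complement sum over 16-bit words."""
        if len(data) % 2:
            data += b"\x00"                              # pad to an even length
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:                               # fold the carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def tcp_checksum(src_ip, dst_ip, tcp_segment):
        """Checksum over the IPv4 pseudo-header plus the TCP header and data."""
        pseudo_header = struct.pack("!4s4sBBH",
                                    socket.inet_aton(src_ip),
                                    socket.inet_aton(dst_ip),
                                    0, socket.IPPROTO_TCP, len(tcp_segment))
        return internet_checksum(pseudo_header + tcp_segment)

    segment = b"\x00" * 20 + b"hello"                    # placeholder TCP header + payload
    print(hex(tcp_checksum("192.168.1.10", "198.51.100.7", segment)))  # before translation
    print(hex(tcp_checksum("203.0.113.5", "198.51.100.7", segment)))   # after SNAT: different value
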
It may be wise for the originating host to perform Path MTU Discovery (RFC 1191) to determine what packet size will reach the destination without fragmentation, and then set the "don't fragment" bit in the appropriate packets. There is no totally general solution to this problem, which is why one of the goals of IPv6 is to avoid NAT.
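On Linux, for example, an application can request exactly this behavior on a socket, so the kernel sets the "don't fragment" bit and tracks the path MTU for it. The sketch below is Linux-specific: the IP_MTU_DISCOVER option and the numeric fallback values (10 and 2) come from the Linux headers, and the destination address is just a documentation example:

    import socket

    # Linux socket options; fall back to the Linux header values if the
    # constants are not exposed by this Python build.
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Set the DF bit on outgoing packets and let the kernel perform Path MTU
    # Discovery, so oversized datagrams fail locally instead of being fragmented.
    sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    sock.connect(("198.51.100.7", 9))    # example destination (UDP discard port)
    sock.send(b"x" * 1200)               # fits within a typical 1500-byte Ethernet MTU
    sock.close()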

Basic NAT and PAT

There are two levels of network address translation.
• Basic NAT. This involves IP address translation only, not port mapping.
• PAT (Port Address Translation). Also called simply "NAT" or "Network Address Port Translation" (NAPT). This involves the translation of both IP addresses and port numbers.
All internet packets have a source IP address and a destination IP address. Both or either of the source and destination addresses may be translated.
Some internet packets do not have port numbers. For example, ICMP packets have no port numbers. However, the vast bulk of internet traffic is TCP and UDP packets, which do have port numbers. Packets which do have port numbers have both a source port number and a destination port number. Both or either of the source and destination ports may be translated.
NAT which involves translation of the source IP address and/or source port is called source NAT or SNAT. This re-writes the IP address and/or port number of the computer which originated the packet.
NAT which involves translation of the destination IP address and/or destination port number is called destination NAT or DNAT. This re-writes the IP address and/or port number corresponding to the destination computer.
SNAT and DNAT may be applied simultaneously to internet packets.
NOTE: What is called 'PAT' here is referred to by Cisco as NAT 'overloading', as described in this Howstuffworks article provided to Howstuffworks by Cisco: http://computer.howstuffworks.com/nat3.htm
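A tiny Python sketch of the two rewrites, treating a packet as a (source IP, source port, destination IP, destination port) tuple; every address and port here is a made-up example:

    def snat(packet, new_src_ip, new_src_port):
        """Source NAT: rewrite the sender's address and/or port."""
        src_ip, src_port, dst_ip, dst_port = packet
        return (new_src_ip, new_src_port, dst_ip, dst_port)

    def dnat(packet, new_dst_ip, new_dst_port):
        """Destination NAT: rewrite the receiver's address and/or port."""
        src_ip, src_port, dst_ip, dst_port = packet
        return (src_ip, src_port, new_dst_ip, new_dst_port)

    outbound = ("192.168.1.10", 50069, "198.51.100.7", 80)   # from a LAN host
    print(snat(outbound, "203.0.113.5", 60001))              # as it leaves the PAT router

    inbound = ("198.51.100.7", 80, "203.0.113.5", 60001)     # the server's reply
    print(dnat(inbound, "192.168.1.10", 50069))              # forwarded back to the LAN host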

Network address translation

In computer networking, network address translation (NAT) is the process of modifying network address information in datagram packet headers while in transit across a traffic routing device for the purpose of remapping a given address space into another.
Most often today, NAT is used in conjunction with network masquerading (or IP masquerading) which is a technique that hides an entire address space, usually consisting of private network addresses (RFC 1918), behind a single IP address in another, often public address space. This mechanism is implemented in a routing device that uses stateful translation tables to map the "hidden" addresses into a single address and then rewrites the outgoing Internet Protocol (IP) packets on exit so that they appear to originate from the router. In the reverse communications path, responses are mapped back to the originating IP address using the rules ("state") stored in the translation tables. The translation table rules established in this fashion are flushed after a short period without new traffic refreshing their state.
As described, the method only allows transit traffic through the router when it originates in the masqueraded network, since this is what establishes the translation tables. However, most NAT devices today allow the network administrator to configure translation table entries for permanent use. This feature is often referred to as "static NAT" or port forwarding, and it allows traffic originating in the 'outside' network to reach designated hosts in the masqueraded network.
Because of the popularity of this technique (see below), the term NAT has become virtually synonymous with the method of IP masquerading.
Network address translation has serious consequences (see Drawbacks and Benefits below) for the quality of Internet connectivity and requires careful attention to the details of its implementation. As a result, many methods have been devised to alleviate the issues encountered. See the article on NAT traversal.

Overview

In the mid-1990s NAT became a popular tool for alleviating the IPv4 address shortage. NAT has proven particularly popular in countries that (for historical reasons) had less address space allocated per capita than, for example, the United States. It has become a standard, indispensable feature in routers for home and small-office Internet connections.
Most systems using NAT do so in order to enable multiple hosts on a private network to access the Internet using a single public IP address (see gateway). However, NAT breaks the originally envisioned model of end-to-end IP connectivity across the Internet, introduces complications in communication between hosts, and has performance impacts.
NAT obscures an internal network's structure: all traffic appears to outside parties as if it originates from the gateway machine.
It has been argued that wide adoption of IPv6 would make NAT unnecessary, since NAT is essentially a method of coping with the shortage of IPv4 address space.
Network address translation involves re-writing the source and/or destination IP addresses and usually also the TCP/UDP port numbers of IP packets as they pass through the NAT. Checksums (both IP and TCP/UDP) must also be rewritten to take account of the changes.
In a typical configuration, a local network uses one of the designated "private" IP address subnets defined in RFC 1918 (192.168.x.x, 172.16.x.x through 172.31.x.x, and 10.x.x.x, or in CIDR notation 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8), and a router on that network has a private address (such as 192.168.0.1) in that address space. The router is also connected to the Internet with either a single "public" address (known as "overloaded" NAT) or multiple "public" addresses assigned by an ISP. As traffic passes from the local network to the Internet, the source address in each packet is translated on the fly from a private address to the public address(es). The router tracks basic data about each active connection (particularly the destination address and port). When a reply returns to the router, it uses the connection tracking data it stored during the outbound phase to determine where on the internal network to forward the reply: the TCP or UDP client port numbers are used to demultiplex the returning packets in the case of overloaded NAT, or the IP address and port number when multiple public addresses are available. To a system on the Internet, the router itself appears to be the source/destination for this traffic.
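A simplified Python sketch of the overloaded-NAT bookkeeping described above; the addresses, the port-allocation scheme, and the class itself are invented for illustration, and a real implementation would also handle timeouts, checksum updates, and port exhaustion:

    class OverloadedNat:
        """Maps many private (IP, port) pairs onto one public IP, one external port each."""

        def __init__(self, public_ip, first_port=60000):
            self.public_ip = public_ip
            self.next_port = first_port
            self.out_map = {}   # (private_ip, private_port) -> public_port
            self.in_map = {}    # public_port -> (private_ip, private_port)

        def outbound(self, private_ip, private_port, dst_ip, dst_port):
            """Translate an outgoing packet and record the mapping."""
            key = (private_ip, private_port)
            if key not in self.out_map:
                self.out_map[key] = self.next_port
                self.in_map[self.next_port] = key
                self.next_port += 1
            # The packet leaves carrying the router's address as its source.
            return (self.public_ip, self.out_map[key], dst_ip, dst_port)

        def inbound(self, src_ip, src_port, public_port):
            """Use the stored mapping to pick the right internal host for a reply."""
            private_ip, private_port = self.in_map[public_port]
            return (src_ip, src_port, private_ip, private_port)

    nat = OverloadedNat("203.0.113.5")
    print(nat.outbound("192.168.0.2", 50000, "198.51.100.7", 80))   # -> port 60000
    print(nat.outbound("192.168.0.3", 50000, "198.51.100.7", 80))   # -> port 60001
    print(nat.inbound("198.51.100.7", 80, 60001))                   # reply demuxed to 192.168.0.3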

Monday, December 15, 2008

TCP/IP Overview and History Part 2

Modern TCP/IP Development and the Creation of TCP/IP Architecture
Testing and development of TCP continued for several years. In March 1977, version 2 of TCP was documented. In August 1977, a significant turning point came in TCP/IP’s development. Jon Postel, one of the most important pioneers of the Internet and TCP/IP, published a set of comments on the state of TCP. In that document (known as Internet Experiment Note number 2, or IEN 2), he provided what I consider superb evidence that reference models and layers aren't just for textbooks, and really are important to understand:
We are screwing up in our design of internet protocols by violating the principle of layering. Specifically we are trying to use TCP to do two things: serve as a host level end to end protocol, and to serve as an internet packaging and routing protocol. These two things should be provided in a layered and modular way. I suggest that a new distinct internetwork protocol is needed, and that TCP be used strictly as a host level end to end protocol.

-- Jon Postel, IEN 2, 1977
What Postel was essentially saying was that the version of TCP created in the mid-1970s was trying to do too much. Specifically, it was encompassing both layer three and layer four activities (in terms of OSI Reference Model layer numbers). His vision was prophetic, because we now know that having TCP handle all of these activities would have indeed led to problems down the road.
Postel's observation led to the creation of the TCP/IP architecture, and the splitting of TCP into TCP at the transport layer and IP at the network layer; thus the name “TCP/IP”. (As an aside, it's interesting, given this history, that sometimes the entire TCP/IP suite is called just “IP”, even though TCP came first.) The process of dividing TCP into two portions began in version 3 of TCP, written in 1978. The first formal standards for the versions of IP and TCP used in modern networks (version 4) were created in 1980. This is why the first “real” version of IP is version 4 and not version 1. TCP/IP quickly became the standard protocol set for running the ARPAnet. In the 1980s, more and more machines and networks were connected to the evolving ARPAnet using TCP/IP protocols, and the TCP/IP Internet was born.

Important Factors in the Success of TCP/IP
TCP/IP was at one time just “one of many” different sets of protocols that could be used to provide network-layer and transport-layer functionality. Today there are still other options for internetworking protocol suites, but TCP/IP is the universally-accepted world-wide standard. Its growth in popularity has been due to a number of important factors. Some of these are historical, such as the fact that it is tied to the Internet as described above, while others are related to the characteristics of the protocol suite itself. Chief among these are the following:
• Integrated Addressing System: TCP/IP includes within it (as part of the Internet Protocol, primarily) a system for identifying and addressing devices on both small and large networks. The addressing system is designed to allow devices to be addressed regardless of the lower-level details of how each constituent network is constructed. Over time, the mechanisms for addressing in TCP/IP have improved, to meet the needs of growing networks, especially the Internet. The addressing system also includes a centralized administration capability for the Internet, to ensure that each device has a unique address.
• Design For Routing: Unlike some network-layer protocols, TCP/IP is specifically designed to facilitate the routing of information over a network of arbitrary complexity. In fact, TCP/IP is conceptually concerned more with the connection of networks, than with the connection of devices. TCP/IP routers enable data to be delivered between devices on different networks by moving it one step at a time from one network to the next. A number of support protocols are also included in TCP/IP to allow routers to exchange critical information and manage the efficient flow of information from one network to another.
• Underlying Network Independence: TCP/IP operates primarily at layers three and above, and includes provisions to allow it to function on almost any lower-layer technology, including LANs, wireless LANs and WANs of various sorts. This flexibility means that one can mix and match a variety of different underlying networks and connect them all using TCP/IP.
• Scalability: One of the most amazing characteristics of TCP/IP is how scalable its protocols have proven to be. Over the decades it has proven its mettle as the Internet has grown from a small network with just a few machines to a huge internetwork with millions of hosts. While some changes have been required periodically to support this growth, these changes have taken place as part of the TCP/IP development process, and the core of TCP/IP is basically the same as it was 25 years ago.
• Open Standards and Development Process: The TCP/IP standards are not proprietary, but open standards freely available to the public. Furthermore, the process used to develop TCP/IP standards is also completely open. TCP/IP standards and protocols are developed and modified using the unique, democratic “RFC” process, with all interested parties invited to participate. This ensures that anyone with an interest in the TCP/IP protocols is given a chance to provide input into their development, and also ensures the world-wide acceptance of the protocol suite.
• Universality: Everyone uses TCP/IP because everyone uses it!
This last point is, perhaps ironically, arguably the most important. Not only is TCP/IP the “underlying language of the Internet”, it is also used in most private networks today. Even former “competitors” to TCP/IP such as NetWare now use TCP/IP to carry traffic. The Internet continues to grow, and so do the capabilities and functions of TCP/IP. Preparation for the future continues, with the move to the new IP version 6 protocol in its early stages. It is likely that TCP/IP will remain a big part of internetworking for the foreseeable future.

TCP/IP Overview and History Part 1

The best place to start looking at TCP/IP is probably the name itself. TCP/IP in fact consists of dozens of different protocols, but only a few are the “main” protocols that define the core operation of the suite. Of these key protocols, two are usually considered the most important. The Internet Protocol (IP) is the primary OSI network layer (layer three) protocol that provides addressing, datagram routing and other functions in an internetwork. The Transmission Control Protocol (TCP) is the primary transport layer (layer four) protocol, and is responsible for connection establishment and management and reliable data transport between software processes on devices.
Due to the importance of these two protocols, their abbreviations have come to represent the entire suite: “TCP/IP”. (In a moment we'll look at exactly how that name came about.) IP and TCP are important because many of TCP/IP's most critical functions are implemented at layers three and four. However, there is much more to TCP/IP than just TCP and IP. The protocol suite as a whole requires the work of many different protocols and technologies to make a functional network that can properly provide users with the applications they need.
TCP/IP uses its own four-layer architecture that corresponds roughly to the OSI Reference Model and provides a framework for the various protocols that comprise the suite. It also includes numerous high-level applications, some of which are well-known by Internet users who may not realize they are part of TCP/IP, such as HTTP (which runs the World Wide Web) and FTP. In the topics on TCP/IP architecture and protocols I provide an overview of most of the important TCP/IP protocols and how they fit together.
Early TCP/IP History
As I said earlier, the Internet is a primary reason why TCP/IP is what it is today. In fact, the Internet and TCP/IP are so closely related in their history that it is difficult to discuss one without also talking about the other. They were developed together, with TCP/IP providing the mechanism for implementing the Internet. TCP/IP has over the years continued to evolve to meet the needs of the Internet and also smaller, private networks that use the technology. I will provide a brief summary of the history of TCP/IP here; of course, whole books have been written on TCP/IP and Internet history, and this is a technical Guide and not a history book, so remember that this is just a quick look for sake of interest.
The TCP/IP protocols were initially developed as part of the research network developed by the United States Defense Advanced Research Projects Agency (DARPA or ARPA). Initially, this fledgling network, called the ARPAnet, was designed to use a number of protocols that had been adapted from existing technologies. However, they all had flaws or limitations, either in concept or in practical matters such as capacity, when used on the ARPAnet. The developers of the new network recognized that trying to use these existing protocols might eventually lead to problems as the ARPAnet scaled to a larger size and was adapted for newer uses and applications.
In 1973, development of a full-fledged system of internetworking protocols for the ARPAnet began. What many people don't realize is that in early versions of this technology, there was only one core protocol: TCP. And in fact, these letters didn't even stand for what they do today; they were for the Transmission Control Program. The first version of this predecessor of modern TCP was written in 1973, then revised and formally documented in RFC 675, Specification of Internet Transmission Control Program, December 1974.

WWW - World Wide Web

Definition: The term WWW refers to the World Wide Web or simply the Web. The World Wide Web consists of all the public Web sites connected to the Internet worldwide, including the client devices (such as computers and cell phones) that access Web content. The WWW is just one of many applications of the Internet and computer networks.
The World Wide Web is based on these technologies:
• HTML - Hypertext Markup Language
• HTTP - Hypertext Transfer Protocol
• Web servers and Web browsers
Researcher Tim Berners-Lee led the development of the original World Wide Web in the late 1980s and early 1990s. He helped build prototypes of the above Web technologies and coined the term WWW. Web sites and Web browsing exploded in popularity during the mid-1990s.
Also Known As: World Wide Web, The Web

Who Created the Internet Network?

Development of the technologies that became the Internet began decades ago. The development of the World Wide Web (WWW) portion of the Internet happened much later, although many people consider this synonymous with creating the Internet itself.
Answer: No single person or organization created the modern Internet, not Al Gore, Lyndon Johnson, or any other individual. Instead, multiple people developed the key technologies that later grew to become the Internet:
• Email - Long before the World Wide Web, email was the dominant communication method on the Internet. Ray Tomlinson developed the first email system that worked over the early Internet in 1971.
• Ethernet - The physical communication technology underlying the Internet, Ethernet was created by Robert Metcalfe and David Boggs in 1973.
• TCP/IP - In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper titled "A Protocol for Packet Network Intercommunication." The paper's authors - Vinton Cerf and Robert Kahn - described a protocol called TCP that incorporated both connection-oriented and datagram services. This protocol later became known as TCP/IP.

An Illustrated History of Computers Part 4

The title of forefather of today's all-electronic digital computers is usually awarded to ENIAC, which stood for Electronic Numerical Integrator and Calculator. ENIAC was built at the University of Pennsylvania between 1943 and 1945 by two professors, John Mauchly and the 24 year old J. Presper Eckert, who got funding from the war department after promising they could build a machine that would replace all the "computers", meaning the women who were employed calculating the firing tables for the army's artillery guns. The day that Mauchly and Eckert saw the first small piece of ENIAC work, the persons they ran to bring to their lab to show off their progress were some of these female computers (one of whom remarked, "I was astounded that it took all this equipment to multiply 5 by 1000").

ENIAC filled a 20 by 40 foot room, weighed 30 tons, and used more than 18,000 vacuum tubes. Like the Mark I, ENIAC employed paper card readers obtained from IBM (these were a regular product for IBM, as they were a long established part of business accounting machines, IBM's forte). When operating, ENIAC was silent, but you knew it was on because the 18,000 vacuum tubes each generated waste heat like a light bulb, and all this heat (174,000 watts of it) meant that the computer could only be operated in a specially designed room with its own heavy duty air conditioning system. Only the left half of ENIAC is visible in the first picture; the right half was basically a mirror image of what's visible.

Two views of ENIAC: the "Electronic Numerical Integrator and Calculator" (note that it wasn't even given the name of computer since "computers" were people) [U.S. Army photo]

To reprogram the ENIAC you had to rearrange the patch cords that you can observe on the left in the prior photo, and the settings of 3000 switches that you can observe on the right. To program a modern computer, you type out a program with statements like:

    Circumference = 3.14 * diameter

To perform this computation on ENIAC you had to rearrange a large number of patch cords and then locate three particular knobs on that vast wall of knobs and set them to 3, 1, and 4.

Reprogramming ENIAC involved a hike [U.S. Army photo]

Once the army agreed to fund ENIAC, Mauchly and Eckert worked around the clock, seven days a week, hoping to complete the machine in time to contribute to the war. Their war-time effort was so intense that most days they ate all 3 meals in the company of the army Captain who was their liaison with their military sponsors. They were allowed a small staff but soon observed that they could hire only the most junior members of the University of Pennsylvania staff because the more experienced faculty members knew that their proposed machine would never work.

One of the most obvious problems was that the design would require 18,000 vacuum tubes to all work simultaneously. Vacuum tubes were so notoriously unreliable that even twenty years later many neighborhood drug stores provided a "tube tester" that allowed homeowners to bring in the vacuum tubes from their television sets and determine which one of the tubes was causing their TV to fail. And television sets only incorporated about 30 vacuum tubes. The device that used the largest number of vacuum tubes was an electronic organ: it incorporated 160 tubes. The idea that 18,000 tubes could function together was considered so unlikely that the dominant vacuum tube supplier of the day, RCA, refused to join the project (but did supply tubes in the interest of "wartime cooperation"). Eckert solved the tube reliability problem through extremely careful circuit design. He was so thorough that before he chose the type of wire cabling he would employ in ENIAC he first ran an experiment where he starved lab rats for a few days and then gave them samples of all the available types of cable to determine which they least liked to eat. Here's a look at a small number of the vacuum tubes in ENIAC:

Even with 18,000 vacuum tubes, ENIAC could only hold 20 numbers at a time. However, thanks to the elimination of moving parts it ran much faster than the Mark I: a multiplication that required 6 seconds on the Mark I could be performed on ENIAC in 2.8 thousandths of a second. ENIAC's basic clock speed was 100,000 cycles per second. Today's home computers employ clock speeds of 1,000,000,000 cycles per second. Built with $500,000 from the U.S. Army, ENIAC's first task was to compute whether or not it was possible to build a hydrogen bomb (the atomic bomb was completed during the war and hence is older than ENIAC). The very first problem run on ENIAC required only 20 seconds and was checked against an answer obtained after forty hours of work with a mechanical calculator. After chewing on half a million punch cards for six weeks, ENIAC did humanity no favor when it declared the hydrogen bomb feasible. This first ENIAC program remains classified even today.

Once ENIAC was finished and proved worthy of the cost of its development, its designers set about eliminating the obnoxious fact that reprogramming the computer required a physical modification of all the patch cords and switches. It took days to change ENIAC's program. Eckert and Mauchly next teamed up with the mathematician John von Neumann to design EDVAC, which pioneered the stored program. Because he was the first to publish a description of this new computer, von Neumann is often wrongly credited with the realization that the program (that is, the sequence of computation steps) could be represented electronically just as the data was. But this major breakthrough can be found in Eckert's notes long before he ever started working with von Neumann. Eckert was no slouch: while in high school Eckert had scored the second highest math SAT score in the entire country.

After ENIAC and EDVAC came other computers with humorous names such as ILLIAC, JOHNNIAC, and, of course, MANIAC. ILLIAC was built at the University of Illinois at Champaign-Urbana, which is probably why the science fiction author Arthur C. Clarke chose to have the HAL computer of his famous book "2001: A Space Odyssey" born at Champaign-Urbana. Have you ever noticed that you can shift each of the letters of IBM backward by one alphabet position and get HAL?

ILLIAC II built at the University of Illinois (it is a good thing computers were one-of-a-kind creations in these days, can you imagine being asked to duplicate this?)

HAL from the movie "2001: A Space Odyssey". Look at the previous picture to understand why the movie makers in 1968 assumed computers of the future would be things you walk into.

JOHNNIAC was a reference to John von Neumann, who was unquestionably a genius. At age 6 he could tell jokes in classical Greek. By 8 he was doing calculus. He could recite books he had read years earlier word for word. He could read a page of the phone directory and then recite it backwards. On one occasion it took von Neumann only 6 minutes to solve a problem in his head that another professor had spent hours on using a mechanical calculator. Von Neumann is perhaps most famous (infamous?) as the man who worked out the complicated method needed to detonate an atomic bomb.

Once the computer's program was represented electronically, modifications to that program could happen as fast as the computer could compute. In fact, computer programs could now modify themselves while they ran (such programs are called self-modifying programs). This introduced a new way for a program to fail: faulty logic in the program could cause it to damage itself. This is one source of the general protection fault famous in MS-DOS and the blue screen of death famous in Windows.

Today, one of the most notable characteristics of a computer is the fact that its ability to be reprogrammed allows it to contribute to a wide variety of endeavors, such as the following completely unrelated fields:

  • the creation of special effects for movies,
  • the compression of music to allow more minutes of music to fit within the limited memory of an MP3 player,
  • the observation of car tire rotation to detect and prevent skids in an anti-lock braking system (ABS),
  • the analysis of the writing style in Shakespeare's work with the goal of proving whether a single individual really was responsible for all these pieces.

By the end of the 1950's computers were no longer one-of-a-kind hand built devices owned only by universities and government research labs. Eckert and Mauchly left the University of Pennsylvania over a dispute about who owned the patents for their invention. They decided to set up their own company. Their first product was the famous UNIVAC computer, the first commercial (that is, mass produced) computer. In the 50's, UNIVAC (a contraction of "Universal Automatic Computer") was the household word for "computer" just as "Kleenex" is for "tissue". The first UNIVAC was sold, appropriately enough, to the Census bureau. UNIVAC was also the first computer to employ magnetic tape. Many people still confuse a picture of a reel-to-reel tape recorder with a picture of a mainframe computer.

A reel-to-reel tape drive [photo courtesy of The Computer Museum]

ENIAC was unquestionably the origin of the U.S. commercial computer industry, but its inventors, Mauchly and Eckert, never achieved fortune from their work and their company fell into financial problems and was sold at a loss. By 1955 IBM was selling more computers than UNIVAC and by the 1960's the group of eight companies selling computers was known as "IBM and the seven dwarfs". IBM grew so dominant that the federal government pursued anti-trust proceedings against them from 1969 to 1982 (notice the pace of our country's legal system). You might wonder what type of event is required to dislodge an industry heavyweight. In IBM's case it was their own decision to hire an unknown but aggressive firm called Microsoft to provide the software for their personal computer (PC). This lucrative contract allowed Microsoft to grow so dominant that by the year 2000 their market capitalization (the total value of their stock) was twice that of IBM and they were convicted in Federal Court of running an illegal monopoly.

If you learned computer programming in the 1970's, you dealt with what today are called mainframe computers, such as the IBM 7090 (shown below), IBM 360, or IBM 370.

The IBM 7094, a typical mainframe computer [photo courtesy of IBM]

There were 2 ways to interact with a mainframe. The first was called time sharing because the computer gave each user a tiny sliver of time in a round-robin fashion. Perhaps 100 users would be simultaneously logged on, each typing on a teletype such as the following:

The Teletype was the standard mechanism used to interact with a time-sharing computer

A teletype was a motorized typewriter that could transmit your keystrokes to the mainframe and then print the computer's response on its roll of paper. You typed a single line of text, hit the carriage return button, and waited for the teletype to begin noisily printing the computer's response (at a whopping 10 characters per second). On the left-hand side of the teletype in the prior picture you can observe a paper tape reader and writer (i.e., puncher). Here's a close-up of paper tape:

Three views of paper tape

After observing the holes in paper tape it is perhaps obvious why all computers use binary numbers to represent data: a binary bit (that is, one digit of a binary number) can only have the value of 0 or 1 (just as a decimal digit can only have the value of 0 thru 9). Something which can only take two states is very easy to manufacture, control, and sense. In the case of paper tape, the hole has either been punched or it has not. Electro-mechanical computers such as the Mark I used relays to represent data because a relay (which is just a motor driven switch) can only be open or closed. The earliest all-electronic computers used vacuum tubes as switches: they too were either open or closed. Transistors replaced vacuum tubes because they too could act as switches but were smaller, cheaper, and consumed less power.

Paper tape has a long history as well. It was first used as an information storage medium by Sir Charles Wheatstone, who used it to store Morse code that was arriving via the newly invented telegraph (incidentally, Wheatstone was also the inventor of the accordion).

The alternative to time sharing was batch mode processing, where the computer gives its full attention to your program. In exchange for getting the computer's full attention at run-time, you had to agree to prepare your program off-line on a key punch machine which generated punch cards.

An IBM Key Punch machine which operates like a typewriter except it produces punched cards rather than a printed sheet of paper

University students in the 1970's bought blank cards a linear foot at a time from the university bookstore. Each card could hold only 1 program statement. To submit your program to the mainframe, you placed your stack of cards in the hopper of a card reader. Your program would be run whenever the computer made it that far. You often submitted your deck and then went to dinner or to bed and came back later hoping to see a successful printout showing your results. Obviously, a program run in batch mode could not be interactive.

But things changed fast. By the 1990's a university student would typically own his own computer and have exclusive use of it in his dorm room.

The original IBM Personal Computer (PC)

This transformation was a result of the invention of the microprocessor. A microprocessor (uP) is a computer that is fabricated on an integrated circuit (IC). Computers had been around for 20 years before the first microprocessor was developed at Intel in 1971. The micro in the name microprocessor refers to the physical size. Intel didn't invent the electronic computer. But they were the first to succeed in cramming an entire computer on a single chip (IC). Intel was started in 1968 and initially produced only semiconductor memory (Intel invented both the DRAM and the EPROM, two memory technologies that are still going strong today). In 1969 they were approached by Busicom, a Japanese manufacturer of high performance calculators (these were typewriter sized units, the first shirt-pocket sized scientific calculator was the Hewlett-Packard HP35 introduced in 1972). Busicom wanted Intel to produce 12 custom calculator chips: one chip dedicated to the keyboard, another chip dedicated to the display, another for the printer, etc. But integrated circuits were (and are) expensive to design and this approach would have required Busicom to bear the full expense of developing 12 new chips since these 12 chips would only be of use to them.

A typical Busicom desk calculator

But a new Intel employee (Ted Hoff) convinced Busicom to instead accept a general purpose computer chip which, like all computers, could be reprogrammed for many different tasks (like controlling a keyboard, a display, a printer, etc.). Intel argued that since the chip could be reprogrammed for alternative purposes, the cost of developing it could be spread out over more users and hence would be less expensive to each user. The general purpose computer is adapted to each new purpose by writing a program which is a sequence of instructions stored in memory (which happened to be Intel's forte). Busicom agreed to pay Intel to design a general purpose chip and to get a price break since it would allow Intel to sell the resulting chip to others. But development of the chip took longer than expected and Busicom pulled out of the project. Intel knew it had a winner by that point and gladly refunded all of Busicom's investment just to gain sole rights to the device which they finished on their own.

Thus was born the Intel 4004, the first microprocessor (uP). The 4004 consisted of 2300 transistors and was clocked at 108 kHz (i.e., 108,000 times per second). Compare this to the 42 million transistors and the 2 GHz clock rate (i.e., 2,000,000,000 times per second) used in a Pentium 4. One of Intel's 4004 chips still functions aboard the Pioneer 10 spacecraft, which is now the man-made object farthest from the earth. Curiously, Busicom went bankrupt and never ended up using the ground-breaking microprocessor.

Intel followed the 4004 with the 8008 and 8080. Intel priced the 8080 microprocessor at $360 as an insult to IBM's famous 360 mainframe which cost millions of dollars. The 8080 was employed in the MITS Altair computer, which was the world's first personal computer (PC). It was personal all right: you had to build it yourself from a kit of parts that arrived in the mail. This kit didn't even include an enclosure and that is the reason the unit shown below doesn't match the picture on the magazine cover.

The Altair 8800, the first PC

A Harvard freshman by the name of Bill Gates decided to drop out of college so he could concentrate all his time on writing programs for this computer. This early experience put Bill Gates in the right place at the right time when IBM decided to standardize on the Intel microprocessors for its line of PCs in 1981. The Intel Pentium 4 used in today's PCs is still compatible with the Intel 8088 used in IBM's first PC.

If you've enjoyed this history of computers, I encourage you to try your own hand at programming a computer. That is the only way you will really come to understand the concepts of looping, subroutines, high and low-level languages, bits and bytes, etc. I have written a number of Windows programs which teach computer programming in a fun, visually-engaging setting. I start my students on a programmable RPN calculator where we learn about programs, statements, program and data memory, subroutines, logic and syntax errors, stacks, etc. Then we move on to an 8051 microprocessor (which happens to be the most widespread microprocessor on earth) where we learn about microprocessors, bits and bytes, assembly language, addressing modes, etc. Finally, we graduate to the most powerful language in use today: C++ (pronounced "C plus plus"). These Windows programs are accompanied by a book's worth of on-line documentation which serves as a self-study guide, allowing you to teach yourself computer programming! The home page (URL) for this collection of software is www.computersciencelab.com.


Bibliography:

"ENIAC: The Triumphs and Tragedies of the World's First Computer" by Scott McCartney.

An Illustrated History of Computers Part 3

IBM continued to develop mechanical calculators for sale to businesses to help with financial accounting and inventory accounting. One characteristic of both financial accounting and inventory accounting is that although you need to subtract, you don't need negative numbers and you really don't have to multiply since multiplication can be accomplished via repeated addition.

But the U.S. military desired a mechanical calculator more optimized for scientific computation. By World War II the U.S. had battleships that could lob shells weighing as much as a small car over distances up to 25 miles. Physicists could write the equations that described how atmospheric drag, wind, gravity, muzzle velocity, etc. would determine the trajectory of the shell. But solving such equations was extremely laborious. This was the work performed by the human computers. Their results would be compiled into ballistic "firing tables" published in gunnery manuals. During World War II the U.S. military scoured the country looking for (generally female) math majors to hire for the job of computing these tables. But not enough humans could be found to keep up with the need for new tables. Sometimes artillery pieces had to be delivered to the battlefield without the necessary firing tables and this meant they were close to useless because they couldn't be aimed properly. Faced with this situation, the U.S. military was willing to invest in even hare-brained schemes to automate this type of computation.

One early success was the Harvard Mark I computer which was built as a partnership between Harvard and IBM in 1944. This was the first programmable digital computer made in the U.S. But it was not a purely electronic computer. Instead the Mark I was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50 ft rotating shaft running its length, turned by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years, sounding like a roomful of ladies knitting. To appreciate the scale of this machine note the four typewriters in the foreground of the following photo.

The Harvard Mark I: an electro-mechanical computer

You can see the 50 ft rotating shaft in the bottom of the prior photo. This shaft was a central power source for the entire machine. This design feature was reminiscent of the days when waterpower was used to run a machine shop and each lathe or other tool was driven by a belt connected to a single overhead shaft which was turned by an outside waterwheel.

A central shaft driven by an outside waterwheel and connected to each machine by overhead belts was the customary power source for all the machines in a factory

Here's a close-up of one of the Mark I's four paper tape readers. A paper tape was an improvement over a box of punched cards as anyone who has ever dropped -- and thus shuffled -- his "stack" knows.

One of the four paper tape readers on the Harvard Mark I (you can observe the punched paper roll emerging from the bottom)

One of the primary programmers for the Mark I was a woman, Grace Hopper. Hopper found the first computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at least 1889 but Hopper is credited with coining the word "debugging" to describe the work to eliminate program faults.

The first computer bug [photo © 2002 IEEE]

In 1953 Grace Hopper invented the first high-level language, "Flow-matic". This language eventually became COBOL which was the language most affected by the infamous Y2K problem. A high-level language is designed to be more understandable by humans than is the binary language understood by the computing machinery. A high-level language is worthless without a program -- known as a compiler -- to translate it into the binary language of the computer and hence Grace Hopper also constructed the world's first compiler. Grace remained active as a Rear Admiral in the Navy Reserves until she was 79 (another record).

The Mark I operated on numbers that were 23 digits wide. It could add or subtract two of these numbers in three-tenths of a second, multiply them in four seconds, and divide them in ten seconds. Forty-five years later computers could perform an addition in a billionth of a second! Even though the Mark I had three quarters of a million components, it could only store 72 numbers! Today, home computers can store 30 million numbers in RAM and another 10 billion numbers on their hard disk. Today, a number can be pulled from RAM after a delay of only a few billionths of a second, and from a hard disk after a delay of only a few thousandths of a second. This kind of speed is obviously impossible for a machine which must move a rotating shaft and that is why electronic computers killed off their mechanical predecessors.

On a humorous note, the principal designer of the Mark I, Howard Aiken of Harvard, estimated in 1947 that six electronic digital computers would be sufficient to satisfy the computing needs of the entire United States. IBM had commissioned this study to determine whether it should bother developing this new invention into one of its standard products (up until then computers were one-of-a-kind items built by special arrangement). Aiken's prediction wasn't actually so bad as there were very few institutions (principally, the government and military) that could afford the cost of what was called a computer in 1947. He just didn't foresee the micro-electronics revolution which would allow something like an IBM Stretch computer of 1959:

(that's just the operator's console, here's the rest of its 33 foot length:)

to be bested by a home computer of 1976 such as this Apple I which sold for only $600:

The Apple 1 which was sold as a do-it-yourself kit (without the lovely case seen here)

Computers had been incredibly expensive because they required so much hand assembly, such as the wiring seen in this CDC 7600:

Typical wiring in an early mainframe computer [photo courtesy The Computer Museum]

The microelectronics revolution is what allowed the amount of hand-crafted wiring seen in the prior photo to be mass-produced as an integrated circuit, which is a small sliver of silicon the size of your thumbnail.

An integrated circuit ("silicon chip") [photo courtesy of IBM]

The primary advantage of an integrated circuit is not that the transistors (switches) are miniscule (that's the secondary advantage), but rather that millions of transistors can be created and interconnected in a mass-production process. All the elements on the integrated circuit are fabricated simultaneously via a small number (maybe 12) of optical masks that define the geometry of each layer. This speeds up the process of fabricating the computer -- and hence reduces its cost -- just as Gutenberg's printing press sped up the fabrication of books and thereby made them affordable to all.

The IBM Stretch computer of 1959 needed its 33 foot length to hold the 150,000 transistors it contained. These transistors were tremendously smaller than the vacuum tubes they replaced, but they were still individual elements requiring individual assembly. By the early 1980s this many transistors could be simultaneously fabricated on an integrated circuit. Today's Pentium 4 microprocessor contains 42,000,000 transistors in this same thumbnail sized piece of silicon.

It's humorous to remember that in between the Stretch machine (which would be called a mainframe today) and the Apple I (a desktop computer) there was an entire industry segment referred to as mini-computers such as the following PDP-12 computer of 1969:

The DEC PDP-12

Sure looks "mini", huh? But we're getting ahead of our story.

One of the earliest attempts to build an all-electronic (that is, no gears, cams, belts, shafts, etc.) digital computer occurred in 1937 by J. V. Atanasoff, a professor of physics and mathematics at Iowa State University. By 1941 he and his graduate student, Clifford Berry, had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. This machine was the first to store data as a charge on a capacitor, which is how today's computers store information in their main memory (DRAM or dynamic RAM). As far as its inventors were aware, it was also the first to employ binary arithmetic. However, the machine was not programmable, it lacked a conditional branch, its design was appropriate for only one type of mathematical problem, and it was not further pursued after World War II. Its inventors didn't even bother to preserve the machine and it was dismantled by those who moved into the room where it lay abandoned.

The Atanasoff-Berry Computer

Another candidate for granddaddy of the modern computer was Colossus, built during World War II by Britain for the purpose of breaking the cryptographic codes used by Germany. Britain led the world in designing and building electronic machines dedicated to code breaking, and was routinely able to read coded German radio transmissions. But Colossus was definitely not a general purpose, reprogrammable machine. Note the presence of pulleys in the two photos of Colossus below:

Two views of the code-breaking Colossus of Great Britain

The Harvard Mark I, the Atanasoff-Berry computer, and the British Colossus all made important contributions. American and British computer pioneers were still arguing over who was first to do what, when in 1965 the work of the German Konrad Zuse was published for the first time in English. Scooped! Zuse had built a sequence of general purpose computers in Nazi Germany. The first, the Z1, was built between 1936 and 1938 in the parlor of his parents' home.

The Zuse Z1 in its residential setting

Zuse's third machine, the Z3, built in 1941, was probably the first operational, general-purpose, programmable (that is, software controlled) digital computer. Without knowledge of any calculating machine inventors since Leibniz (who lived in the 1600's), Zuse reinvented Babbage's concept of programming and decided on his own to employ binary representation for numbers (Babbage had advocated decimal). The Z3 was destroyed by an Allied bombing raid. The Z1 and Z2 met the same fate and the Z4 survived only because Zuse hauled it in a wagon up into the mountains. Zuse's accomplishments are all the more incredible given the context of the material and manpower shortages in Germany during World War II. Zuse couldn't even obtain paper tape so he had to make his own by punching holes in discarded movie film. Because these machines were unknown outside Germany, they did not influence the path of computing in America. But their architecture is identical to that still in use today: an arithmetic unit to do the calculations, a memory for storing numbers, a control system to supervise operations, and input and output devices to connect to the external world. Zuse also invented what might be the first high-level computer language, "Plankalkul", though it too was unknown outside Germany.

An Illustrated History of Computers Part 2

Just a few years after Pascal, the German Gottfried Wilhelm Leibniz (co-inventor with Newton of calculus) managed to build a four-function (addition, subtraction, multiplication, and division) calculator that he called the stepped reckoner because, instead of gears, it employed fluted drums having ten flutes arranged around their circumference in a stair-step fashion. Although the stepped reckoner employed the decimal number system (each drum had 10 flutes), Leibniz was the first to advocate use of the binary number system which is fundamental to the operation of modern computers. Leibniz is considered one of the greatest of the philosophers but he died poor and alone.

Leibniz's Stepped Reckoner (have you ever heard "calculating" referred to as "reckoning"?)

In 1801 the Frenchman Joseph Marie Jacquard invented a power loom that could base its weave (and hence the design on the fabric) upon a pattern automatically read from punched wooden cards, held together in a long row by rope. Descendants of these punched cards have been in use ever since (remember the "hanging chad" from the Florida presidential ballots of the year 2000?).

Jacquard's Loom showing the threads and the punched cards

By selecting particular cards for Jacquard's loom you defined the woven pattern [photo © 2002 IEEE]

A close-up of a Jacquard card

This tapestry was woven by a Jacquard loom

Jacquard's technology was a real boon to mill owners, but put many loom operators out of work. Angry mobs smashed Jacquard looms and once attacked Jacquard himself. History is full of examples of labor unrest following technological innovation yet most studies show that, overall, technology has actually increased the number of jobs.

By 1822 the English mathematician Charles Babbage was proposing a steam driven calculating machine the size of a room, which he called the Difference Engine. This machine would be able to compute tables of numbers, such as logarithm tables. He obtained government funding for this project due to the importance of numeric tables in ocean navigation. By promoting their commercial and military navies, the British government had managed to become the earth's greatest empire. But in that time frame the British government was publishing a seven volume set of navigation tables which came with a companion volume of corrections which showed that the set had over 1000 numerical errors. It was hoped that Babbage's machine could eliminate errors in these types of tables. But construction of Babbage's Difference Engine proved exceedingly difficult and the project soon became the most expensive government funded project up to that point in English history. Ten years later the device was still nowhere near complete, acrimony abounded between all involved, and funding dried up. The device was never finished.

A small section of the type of mechanism employed in Babbage's Difference Engine [photo © 2002 IEEE]

Babbage was not deterred, and by then was on to his next brainstorm, which he called the Analytic Engine. This device, large as a house and powered by six steam engines, would be more general purpose in nature because it would be programmable, thanks to the punched card technology of Jacquard. But it was Babbage who made an important intellectual leap regarding the punched cards. In the Jacquard loom, the presence or absence of each hole in the card physically allows a colored thread to pass or stops that thread (you can see this clearly in the earlier photo). Babbage saw that the pattern of holes could be used to represent an abstract idea such as a problem statement or the raw data required for that problem's solution. Babbage saw that there was no requirement that the problem matter itself physically pass through the holes.

Furthermore, Babbage realized that punched paper could be employed as a storage mechanism, holding computed numbers for future reference. Because of the connection to the Jacquard loom, Babbage called the two main parts of his Analytic Engine the "Store" and the "Mill", as both terms are used in the weaving industry. The Store was where numbers were held and the Mill was where they were "woven" into new results. In a modern computer these same parts are called the memory unit and the central processing unit (CPU).

The Analytic Engine also had a key function that distinguishes computers from calculators: the conditional statement. A conditional statement allows a program to achieve different results each time it is run. Based on the conditional statement, the path of the program (that is, what statements are executed next) can be determined based upon a condition or situation that is detected at the very moment the program is running.

You have probably observed that a modern stoplight at an intersection between a busy street and a less busy street will leave the green light on the busy street until a car approaches on the less busy street. This type of street light is controlled by a computer program that can sense the approach of cars on the less busy street. The moment when the light changes from green to red is not fixed in the program but rather varies with each traffic situation. The conditional statement in the stoplight program would be something like, "if a car approaches on the less busy street and the busier street has already enjoyed the green light for at least a minute, then move the green light to the less busy street". The conditional statement also allows a program to react to the results of its own calculations. An example would be the program that the IRS uses to detect tax fraud. This program first computes a person's tax liability and then decides whether to alert the police based upon how that person's tax payments compare to his obligations.
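
To make the idea concrete, here is a minimal sketch of that stoplight conditional in a modern language (Python); the sensor input, function name, and one-minute threshold are illustrative assumptions, not real traffic-control code.

# A minimal sketch of the stoplight conditional described above.
# The sensor input and the one-minute threshold are illustrative assumptions.

MIN_GREEN_SECONDS = 60  # the busy street keeps its green for at least this long

def update_light(car_waiting_on_side_street, busy_green_seconds):
    """Decide which street gets the green light on this check."""
    if car_waiting_on_side_street and busy_green_seconds >= MIN_GREEN_SECONDS:
        return "side street"   # give the less busy street its turn
    return "busy street"       # otherwise the busy street keeps the green

# A car has just arrived and the busy street has been green for 75 seconds:
print(update_light(True, 75))  # -> side street
print(update_light(True, 30))  # -> busy street (it hasn't had its minute yet)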

Babbage befriended Ada Byron, the daughter of the famous poet Lord Byron (Ada would later become the Countess of Lovelace by marriage). Though she was only 19, she was fascinated by Babbage's ideas, and through letters and meetings with Babbage she learned enough about the design of the Analytic Engine to begin fashioning programs for the still unbuilt machine. While Babbage refused to publish his knowledge for another 30 years, Ada wrote a series of "Notes" wherein she detailed sequences of instructions she had prepared for the Analytic Engine. The Analytic Engine remained unbuilt (the British government refused to get involved with this one), but Ada earned her spot in history as the first computer programmer. Ada invented the subroutine and was the first to recognize the importance of looping. Babbage himself went on to invent the modern postal system, cowcatchers on trains, and the ophthalmoscope, which is still used today to examine the eye.

The next breakthrough occurred in America. The U.S. Constitution states that a census should be taken of all U.S. citizens every 10 years in order to determine the representation of the states in Congress. While the very first census of 1790 had required only 9 months, by 1880 the U.S. population had grown so much that the count for the 1880 census took 7.5 years. Automation was clearly needed for the next census. The Census Bureau offered a prize for an inventor to help with the 1890 census, and this prize was won by Herman Hollerith, who proposed and then successfully adapted Jacquard's punched cards for the purpose of computation.

Hollerith's invention, known as the Hollerith desk, consisted of a card reader which sensed the holes in the cards, a gear-driven mechanism which could count (using Pascal's mechanism, which we still see in car odometers), and a large wall of dial indicators (a car speedometer is a dial indicator) to display the results of the count.

An operator working at a Hollerith Desk like the one below

Preparation of punched cards for the U.S. census

A few Hollerith desks still exist today [photo courtesy The Computer Museum]

The patterns on Jacquard's cards were determined when a tapestry was designed and then were not changed. Today, we would call this a read-only form of information storage. Hollerith had the insight to convert punched cards to what is today called a read/write technology. While riding a train, he observed that the conductor didn't merely punch each ticket, but rather punched a particular pattern of holes whose positions indicated the approximate height, weight, eye color, etc. of the ticket owner. This was done to keep anyone else from picking up a discarded ticket and claiming it was his own (a train ticket did not lose all value when it was punched because the same ticket was used for each leg of a trip). Hollerith realized how useful it would be to punch (write) new cards based upon an analysis (reading) of some other set of cards. Complicated analyses, too involved to be accomplished during a single pass through the cards, could be accomplished via multiple passes through the cards, using newly punched cards to remember the intermediate results. Unknown to Hollerith, Babbage had proposed this long before.

Hollerith's technique was successful and the 1890 census was completed in only 3 years at a savings of 5 million dollars. Interesting aside: the reason that a person who removes inappropriate content from a book or movie is called a censor, as is a person who conducts a census, is that in Roman society the public official called the "censor" had both of these jobs.

Hollerith built a company, the Tabulating Machine Company, which, after a few buyouts, eventually became International Business Machines, known today as IBM. IBM grew rapidly and punched cards became ubiquitous. Your gas bill would arrive each month with a punch card you had to return with your payment. This punch card recorded the particulars of your account: your name, address, gas usage, etc. (I imagine there were some "hackers" in those days who would alter the punch cards to change their bill). As another example, when you entered a tollway (a highway that collects a fee from each driver) you were given a punch card that recorded where you started, and when you exited from the tollway your fee was computed based upon the miles you drove. When you voted in an election, the ballot you were handed was a punch card. The little pieces of paper that are punched out of the card are called "chad" and were thrown as confetti at weddings. Until recently, all Social Security and other checks issued by the Federal government were actually punch cards. The check-out slip inside a library book was a punch card. Written on all these cards was a phrase as common as "close cover before striking": "do not fold, spindle, or mutilate". A spindle was an upright spike on the desk of an accounting clerk. As he completed processing each receipt he would impale it on this spike. When the spindle was full, he'd run a piece of string through the holes, tie up the bundle, and ship it off to the archives. You occasionally still see spindles at restaurant cash registers.

Two types of computer punch cards

Incidentally, the Hollerith census machine was the first machine to ever be featured on a magazine cover.

An Illustrated History of Computers Part 1

The first computers were people! That is, electronic computers (and the earlier mechanical computers) were given this name because they performed the work that had previously been assigned to people. "Computer" was originally a job title: it was used to describe those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs. Imagine you had a job where hour after hour, day after day, you were to do nothing but compute multiplications. Boredom would quickly set in, leading to carelessness, leading to mistakes. And even on your best days you wouldn't be producing answers very fast. Therefore, inventors have been searching for hundreds of years for a way to mechanize (that is, find a mechanism that can perform) this task.

This picture shows what were known as "counting tables" [photo courtesy IBM]

A typical computer operation back when computers were people.

The abacus was an early aid for mathematical computations. Its only value is that it aids the memory of the human performing the calculation. A skilled abacus operator can work on addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower). The abacus is often wrongly attributed to China. In fact, the oldest surviving abacus was used in 300 B.C. by the Babylonians. The abacus is still in use today, principally in the Far East. A modern abacus consists of rings that slide over rods, but the older one pictured below dates from the time when pebbles were used for counting (the word "calculus" comes from the Latin word for pebble).

A very old abacus

A more modern abacus. Note how the abacus is really just a representation of the human fingers: the 5 lower rings on each rod represent the 5 fingers and the 2 upper rings represent the 2 hands.

In 1617 an eccentric (some say mad) Scotsman named John Napier invented logarithms, a technique that allows multiplication to be performed via addition. The magic ingredient is the logarithm of each operand, which was originally obtained from a printed table. But Napier also invented an alternative to tables, where the logarithm values were carved on ivory sticks which are now called Napier's Bones.
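
The trick rests on the identity log(a × b) = log(a) + log(b): look up two logarithms, add them, and then look up the antilogarithm of the sum. Here is a minimal Python sketch, with the standard math library standing in for Napier's printed tables and the numbers chosen purely for illustration.

import math

# Multiplication via addition of logarithms: log(a*b) = log(a) + log(b).
# Napier's users looked these values up in printed tables; here the math
# library plays the role of the table.

a, b = 347.0, 29.0
log_sum = math.log10(a) + math.log10(b)  # one addition instead of a multiplication
product = 10 ** log_sum                  # the "antilog" lookup recovers the product

print(product)  # about 10063, give or take a tiny rounding error
print(a * b)    # 10063.0 exactly, for comparison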

An original set of Napier's Bones [photo courtesy IBM]

A more modern set of Napier's Bones

Napier's invention led directly to the slide rule, first built in England in 1632 and still in use in the 1960s by the NASA engineers of the Mercury, Gemini, and Apollo programs that put men on the moon.

A slide rule

Leonardo da Vinci (1452-1519) made drawings of gear-driven calculating machines but apparently never built any.

A Leonardo da Vinci drawing showing gears arranged for computing

The first gear-driven calculating machine to actually be built was probably the calculating clock, so named by its inventor, the German professor Wilhelm Schickard, in 1623. This device got little publicity because Schickard died soon afterward of the bubonic plague.

Schickard's Calculating Clock

In 1642 Blaise Pascal, at age 19, invented the Pascaline as an aid for his father, who was a tax collector. Pascal built 50 of these gear-driven, one-function calculators (they could only add) but couldn't sell many because of their exorbitant cost and because they really weren't that accurate (at that time it was not possible to fabricate gears with the required precision). Up until the present age, when car dashboards went digital, the odometer portion of a car's speedometer used the very same mechanism as the Pascaline to increment the next wheel after each full revolution of the prior wheel (a rough sketch of this carry idea follows the photos below). Pascal was a child prodigy. At the age of 12, he was discovered doing his version of Euclid's thirty-second proposition on the kitchen floor. Pascal went on to invent probability theory, the hydraulic press, and the syringe. Shown below are an 8-digit version of the Pascaline and two views of a 6-digit version:

Pascal's Pascaline [photo © 2002 IEEE]

A 6-digit model for those who couldn't afford the 8-digit model

A Pascaline opened up so you can observe the gears and cylinders which rotated to display the numerical result
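
As a rough model of the carry idea that the Pascaline and mechanical odometers share, here is a short Python sketch; the list-of-wheels representation and the function name are illustrative assumptions, not a description of the actual gearing.

# Each wheel counts 0-9; a full revolution of one wheel advances the next
# wheel by one position. Purely illustrative, not the real mechanism.

def add_one(wheels):
    """Increment a list of digit wheels (least significant first) by one."""
    for i in range(len(wheels)):
        wheels[i] += 1
        if wheels[i] < 10:   # no full revolution, so no carry
            return wheels
        wheels[i] = 0        # this wheel rolls over and carries to the next
    wheels.append(1)         # every wheel rolled over: grow by one digit
    return wheels

odometer = [9, 9, 0]         # the number 99, least significant digit first
print(add_one(odometer))     # [0, 0, 1] -> 100; two carries rippled through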
