
The Rise And Rise Of IBM, Technology Hype And Fascination With Artificial Intelligence (AI)


A number of this week’s milestones in the history of technology link the rise of IBM, the introduction of the ENIAC, and the renewed fascination with so-called artificial intelligence.

On February 14, 1924, the Computing-Tabulating-Recording Company (CTR) changed its name to International Business Machines Corporation (IBM). “IBM” was first used for CTR’s subsidiaries in Canada and South America, but after “several years of persuading a slow-moving board of directors,” Thomas and Marva Belden note in The Lengthening Shadow, Thomas J. Watson Sr. succeeded in applying it to the entire company: “International to represent its big aspirations and Business Machines to evade the confines of the office appliance industry.”

As Kevin Maney observes in The Maverick and His Machine, IBM “was still an upstart little company” in 1924, when “revenues climbed to $11 million – not quite back to 1920 levels. (An $11 million company in 1924 was equivalent to a company in 2001 with revenues of $113 million…).”

The upstart, according to Watson, was going to live forever. From a talk he gave at the first meeting of IBM’s Quarter Century Club (employees who have served the company for 25 years), on June 21, 1924:

The opportunities of the future are bound to be greater than those of the past, and to the possibilities of this business there is no limit so long as it holds the loyal cooperation of men and women like yourselves.

And in January 1926, at the One Hundred Percent Club Convention:

This business has a future for your sons and grandsons and your great-grandsons, because it is going on forever. Nothing in the world will ever stop it. The IBM is not merely an organization of men; it is an institution that will go on forever.

On February 14, 1946, The New York Times announced the unveiling of “an amazing machine that applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for a solution... Leaders who saw the device in action for the first time,” the report continued, “heralded it as a tool with which to begin to rebuild scientific affairs on new foundations.”

With those words, the Electronic Numerical Integrator and Computer (ENIAC), the world’s first large-scale electronic general-purpose digital computer, developed at the Moore School of Electrical Engineering at the University of Pennsylvania in Philadelphia, emerged from the wraps of secrecy under which it had been constructed in the last years of World War II.

“The ENIAC weighed 30 tons, covered 1,500 square feet of floor space, used over 17,000 vacuum tubes (five times more than any previous device), 70,000 resistors, 10,000 capacitors, 1,500 relays, and 6,000 manual switches, consumed 174,000 watts of power, and cost about $500,000,” says C. Dianne Martin in her 1995 article “ENIAC: The Press Conference That Shook the World.” After reviewing the press reports about the public unveiling of the ENIAC, she concludes:

Like many other examples of scientific discovery during the last 50 years, the press consistently used exciting imagery and metaphors to describe the early computers. The science journalists covered the development of computers as a series of dramatic events rather than as an incremental process of research and testing. Readers were given hyperbole designed to raise their expectations about the use of the new electronic brains to solve many different kinds of problems.

This engendered premature enthusiasm, which then led to disillusionment and even distrust of computers on the part of the public when the new technology did not live up to these expectations.

The premature enthusiasm and exciting imagery were not confined to the ENIAC or the press. In the same vein, and in the same year (1946), Waldemar Kaempffert reported in The New York Times:

Crude in comparison with brains as the computing machines may be that solve in a few seconds mathematical problems that would ordinarily take hours, they behave as if they had a will of their own. In fact, the talk at the meeting of the American Institute of Electrical Engineers was half electronics, half physiology. One scientist excused the absence of a colleague, the inventor of a new robot, with the explanation that ‘he couldn’t bear to leave the machine at home alone’ just as if it were a baby.

On February 15, 1946, the Electronic Numerical Integrator and Computer (ENIAC) was formally dedicated at the University of Pennsylvania. Thomas Haigh, Mark Priestley, and Crispin Rope write in ENIAC in Action: Making and Remaking the Modern Computer:

ENIAC established the feasibility of high-speed electronic computing, demonstrating that a machine containing many thousands of unreliable vacuum tubes could nevertheless be coaxed into uninterrupted operation for long enough to do something useful.

During an operational life of almost a decade ENIAC did a great deal more than merely inspire the next wave of computer builders. Until 1950 it was the only fully electronic computer working in the United States, and it was irresistible to many governmental and corporate users whose mathematical problems required a formerly infeasible amount of computational work. By October of 1955, when ENIAC was decommissioned, scores of people had learned to program and operate it, many of whom went on to distinguished computing careers.

Reviewing ENIAC in Action, I wrote:

Today’s parallel to the ENIAC-era big calculation is big data, as is the notion of “discovery” and the abandonment of hypotheses. “One set initial parameters, ran the program, and waited to see what happened” is today’s “unreasonable effectiveness of data.” There is a direct line of scientific practice from ENIAC’s pioneering simulations to “automated science.” But is the removal of human imagination from scientific practice good for scientific progress?

Similarly, it’s interesting to learn about the origins of today’s renewed interest in, fascination with, and fear of “artificial intelligence.” Haigh, Priestley and Rope argue against the claim that the “irresponsible hyperbole” regarding early computers was generated solely by the media, writing that “many computing pioneers, including John von Neumann, [conceived] of computers as artificial brains.”

Indeed, in his First Draft of a Report on the EDVAC, which became the foundational text of modern computer science (or, more accurately, of computer engineering practice), von Neumann compared the components of the computer to “the neurons of higher animals.” While von Neumann thought the brain was a computer, he allowed that it was a complex one, and he followed McCulloch and Pitts (in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”) in ignoring, as he wrote, “the more complicated aspects of neuron functioning.”

Given that McCulloch said the “neurons” discussed in his and Pitts’ seminal paper “were deliberately as impoverished as possible,” what we have at the dawn of “artificial intelligence” is simplification squared, based on an extremely limited (and possibly nonexistent at the time) understanding of how the human brain works.
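To make concrete how “impoverished” these units were, here is a minimal sketch in Python (my own illustration, not code or notation from the 1943 paper) of a McCulloch-Pitts-style threshold neuron: binary inputs, fixed weights, a threshold, and nothing else.

# A McCulloch-Pitts-style threshold unit, reduced to its essentials:
# binary inputs, fixed weights, a threshold, and a binary output.
# No learning, no timing, no biochemistry.

def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) if the weighted sum of binary inputs reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Basic logic gates fall out immediately, which was the point of the paper:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

print(AND(1, 1), OR(0, 1), NOT(1))  # prints: 1 1 0

Everything a biological neuron does beyond summing and thresholding is simply absent, and yet units like these suffice to build basic logic.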

These mathematical exercises, born out of the workings of very developed brains but not mimicking or even remotely describing them, led to the development of “artificial neural networks,” which led to “deep learning,” which led to today’s general excitement about computer programs “mimicking the brain” when they succeed in identifying cat images or beating a Go champion.

In 1949, computer scientist Edmund Berkeley wrote in his book Giant Brains, or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

Haigh, Priestley and Rope write that “…the idea of computers as brains was always controversial, and… most people professionally involved with the field had stepped away from it by the 1950s.” But thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”

Most computer scientists by that time were indeed occupied with less lofty goals than playing God, but very few objected to these kinds of statements, or to Minsky receiving the most prestigious award in their profession (for establishing the field of artificial intelligence). The idea that computers and brains are the same thing today leads people with very developed brains to conclude that if computers can win at Go, they can think, and that with just a few more short steps up the neural-network evolutionary ladder, computers will reason that it is in their best interest to destroy humanity.

On February 15, 2011, IBM’s computer Watson commented on the results of his match the previous night with two Jeopardy! champions: “There is no way I’m going to let these simian creatures defeat me. While they’re sleeping, I’m processing countless terabytes of useless information.”

The last bit, of course, is stored in “his” memory under the category “Oscar Wilde.”
