TIME magazine called him

“the unsung hero behind the Internet.” CNN called him “A Father of the Internet.” President Bill Clinton called him

“one of the great minds of the Information Age.” He has been voted history’s greatest scientist

of African descent. He is Philip Emeagwali. He is coming to Trinidad and Tobago

to launch the 2008 Kwame Ture lecture series on Sunday June 8

at the JFK [John F. Kennedy] auditorium, UWI [The University of the West Indies],

St. Augustine, at 5 p.m. The Emancipation Support Committee

invites you to come and hear this inspirational mind

address the theme: “Crossing New Frontiers

to Conquer Today’s Challenges.” This lecture is one you cannot afford to miss. Admission is free. So be there on Sunday June 8

5 p.m. at the JFK auditorium UWI St. Augustine. [Wild applause and cheering for 22 seconds] The computer textbooks of the 1980s told the

readers that the fastest computer in the world

must be powered by only one isolated processor. On the Fourth of July 1989,

I discovered that the fastest computer in the world

must be powered by thousands or millions or even billions

of commodity-off-the-shelf processors that were tightly-coupled to each other

that were identical to each other and that shared nothing

between each other. That discovery made the news headlines

and has been embraced by all computer scientists. That discovery is the vital technology

that underpins every supercomputer. I’m Philip Emeagwali. To discover

is to change the narrative of science. In my quest for the Holy Grail

of the fastest supercomputers, I focused on the Second Law of Motion

of physics that was discovered

three centuries earlier but which had existed

since the Big Bang explosion that occurred

13.8 billion years ago. Back in the early 1980s,

I re-examined textbooks that described how

the Second Law of Motion of physics was encoded

into a system of coupled, non-linear, time-dependent, and three-dimensional

partial differential equations of calculus

that governs three-phase flows of crude oil, injected water,

and natural gas that flow one mile deep

underneath a production oilfield that is the size of a town. During my supercomputer research,
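For readers who want the textbook form of such a model: a standard way to write the three-phase equations couples a mass-conservation law for each phase to Darcy's law. This is a generic reservoir-simulation sketch, not necessarily the exact nine-equation system described in this talk:

```latex
% Mass conservation for each phase \alpha in {oil, water, gas}:
%   porosity \phi, density \rho_\alpha, saturation S_\alpha, source/sink q_\alpha
\frac{\partial}{\partial t}\!\left(\phi\,\rho_\alpha S_\alpha\right)
  + \nabla \cdot \left(\rho_\alpha \mathbf{v}_\alpha\right) = q_\alpha ,
\qquad
% Darcy's law for the phase velocity:
\mathbf{v}_\alpha = -\,\frac{k\, k_{r\alpha}}{\mu_\alpha}
  \left(\nabla p_\alpha - \rho_\alpha g \nabla z\right),
\qquad
S_o + S_w + S_g = 1 .
```

The inertial terms discussed later in this talk would add time-derivative and convective velocity terms to the momentum balance, terms that pure Darcy flow omits.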

I re-examined mathematical physics textbooks

that described how the Second Law of Motion

of physics was codified from the algebraic equation

to the differential equation. What I discovered was an epiphany. I discovered that

in its most important application, namely the recovery of crude oil

and natural gas from production oilfields,

the Second Law of Motion of physics was incorrectly represented. I discovered that

each of the nine partial differential equations

within the system of coupled, non-linear, time-dependent, and three-dimensional

partial differential equations encoded into petroleum reservoir simulators

incorporated only three partial derivative terms. Those three calculus terms

corresponded to three physical forces and none corresponded to

the fourth physical force that actually exists

in the oil field being simulated. I discovered that

those three physical forces could not equate to the actual four forces

inside all production petroleum reservoirs. My contribution

to mathematical knowledge is this:

I corrected those mathematical errors and I corrected them

by adding 36 partial derivative terms that corresponded to

and accounted for the 36 components

of the erroneously omitted inertial forces. That was how I invented

nine partial differential equations that are the most advanced equations

in mathematics and the most important expressions

in calculus. I’m hopeful that

the nine partial differential equations that I contributed to mathematics

will remain accurate over the centuries. The Philip Emeagwali

system of partial differential equations should remain accurate because

they encode the Second Law of Motion of physics

which, in turn, has not changed since the Big Bang explosion

that is the beginning of time for our universe. As a research computational mathematician

in a quest for previously unseen

partial differential equations, my research perspective

was diametrically opposite to that of an applied mathematician

who only wants to analyze known partial differential equations. In the 1980s, I attended 500

weekly research seminars with the first half of those seminars

occurring in the metropolitan areas of Washington, District of Columbia

and Baltimore, Maryland. Half of the seminar speakers

were research mathematicians that came from faraway places,

such as Moscow (Russia), Paris (France), and London (England). During those seminars,

I observed that research mathematicians either focused their analysis on known

partial differential equations that have been described

in calculus textbooks or they were scribbling

partial differential equations that had been scribbled before

on a blackboard or coded before into a motherboard. I observed that research mathematicians

of the 1970s approached initial-boundary value problems

from only one direction. That direction was to and from

the mathematician’s blackboard. The iconic Navier-Stokes equations

are the favorite system of partial differential equations

of the mathematical physicist. Being a physicist and a mathematician

and a supercomputer scientist, I simultaneously approached

my parallel processing research on how to solve

the most computation-intensive algebraic approximations

that arose from finite difference discretizations

of partial differential equations, and I approached that research

from four directions. My four directions

were from the storyboard of the physicist

to the blackboard of the mathematician to the motherboard

of the computer scientist and across the motherboards

of the research supercomputer scientist. On the Fourth of July 1989,

I became the first parallel supercomputer scientist

to record the world’s fastest calculations. As the first parallel supercomputer scientist,

I was mandated to solve the Grand Challenge Problem

of physics and mathematics and to solve it

by parallel processing the Grand Challenge Problem

as sixty-five thousand five hundred and thirty-six [65,536]

initial-boundary value problems of extreme-scale

computational fluid dynamics. My grand challenge was to figure out

how to chop up that real world problem of extreme-scale algebra

and chop it up into 64 binary thousand

smaller initial-boundary value problems and, most importantly, figure out

how to, subsequently, parallel process those computational physics problems

and how to do so across my two-raised-to-power sixteen processors

that were tightly-coupled to each other and that shared nothing

between each other. In the 1970s and ‘80s,
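The chopping-up described above is, in today's terms, domain decomposition: one global grid is split into as many contiguous pieces as there are processors. A minimal Python sketch (the grid size and function name are illustrative assumptions, not details from the talk):

```python
def partition(num_cells: int, num_procs: int):
    """Split a global 1-D grid into contiguous subdomains, one per processor.

    Returns a list of (start, end) half-open index ranges. Cells are spread
    as evenly as possible, mimicking the chop-up of one large
    initial-boundary value problem into num_procs smaller ones.
    """
    base, extra = divmod(num_cells, num_procs)
    ranges, start = [], 0
    for rank in range(num_procs):
        size = base + (1 if rank < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# 2**16 = 65,536 shared-nothing processors, one subdomain each.
subdomains = partition(num_cells=2**24, num_procs=2**16)
assert len(subdomains) == 65536
assert subdomains[0] == (0, 256)      # each holds 2**24 / 2**16 = 256 cells
assert subdomains[-1] == (2**24 - 256, 2**24)
```

Each subdomain then exchanges only its boundary values with its neighbours, which is what makes the shared-nothing ensemble workable.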

I walked along a technological trail that was orthogonal

to the trail that was walked by the vector processing

supercomputer scientist. I walked alone. I walked through the darkness

that was the unknown world of the massively parallel supercomputer

that was the precursor to the modern computer. Metaphorically speaking,

I walked within the unknown territory of the massively parallel supercomputer

and I walked with only a small lamp to see by. That lamp was the most massively

parallel ensemble of processors, ever built. The reason I was left alone

to discover how to make an ensemble of one million processors

solve one million problems at once was that it was then said that

parallel processing is a huge waste of everybody’s time. I walked through darkness

and into the light and did so with equations. How are modern supercomputers used? Nine in ten parallel processing cycles

are consumed by extreme-scaled computational physicists. Their grand challenges

include executing computational fluid dynamics codes

that have the Navier-Stokes equations at their calculus core

or executing the petroleum reservoir simulator

used to discover and recover otherwise elusive crude oil

and natural gas and the general circulation model

used to foresee otherwise unforeseeable global warming. At the granite cores

of most real world problems arising in computational physics

is the system of coupled, non-linear, time-dependent, and three-dimensional

partial differential equations of calculus

that contains partial derivative terms that represent something

in the physical problem the equations govern. Parallel processed supercomputing

is the Formula One of science and technology. The fastest supercomputer in the world is

ten million times faster than your computer. The fastest supercomputer

is powered by 10,649,600 cores spread across 40,960 nodes. The supercomputer of 1946

was rated at 5,000 arithmetical operations per second,

with each operation performed on a 10-digit number. Today, the parallel supercomputer that can

record a speed of one exaflops could be manufactured. The term flops is the acronym

for floating-point arithmetical operations per second. Exascale supercomputing is achieved

by massively parallel processing at the speed of one billion

billion floating-point arithmetical operations per second. That speed of supercomputing

is equivalent to a quintillion, or ten-raised-to-power-18

calculations per second. The fastest supercomputer speeds
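The named speeds above are powers of ten, so the comparison can be checked with a few lines of arithmetic (figures taken from the talk itself):

```python
# Speeds quoted above, in arithmetical operations per second.
speed_1946 = 5_000        # the 1946 supercomputer
one_exaflops = 10**18     # one exaflops

# "a billion billion" and "a quintillion, or ten-raised-to-power-18" agree:
assert one_exaflops == 10**9 * 10**9 == 10**18

# An exascale machine outruns the 1946 machine by a factor of 2 x 10**14:
assert one_exaflops // speed_1946 == 2 * 10**14
```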

make it possible to create extreme-scale and high-fidelity computational

fluid dynamics simulations. Like any technology,

the parallel supercomputer is a double-edged sword

that can be used to do both good and bad things. The supercomputer is a vital instrument that

is used to execute computational fluid dynamics codes

that model blood flowing through the human cardiovascular system. The supercomputer that can be used for

computational medicine and used to understand

how to increase human longevity can also be used to design

weapons of doom. The parallel supercomputer

is used to design bombs that are more than 3,000 times

more powerful than the atomic bomb that was dropped upon the Japanese city

of Hiroshima. On August 6, 1945, that atomic bomb killed

an estimated 166,000 people. Because supercomputers

are used to simulate nuclear explosions over cities like New York,

the U.S. is reluctant to sell American-made supercomputers

to [quote unquote] “unfriendly nations.” This security threat is the reason

the U.S. Department of Commerce vehemently objects

whenever Japan sells a supercomputer to a nation that is unfriendly

to the United States. This was the origin of the infamous supercomputer

denial list that had been in existence

since the 1950s when it was against the law

to export an American supercomputer to the Soviet Union. This, in part, is the reason

that in the 1980s I was the only Nigerian

who was supercomputing within U.S. nuclear research laboratories. My contribution to mathematics

that was the cover story of the May 1990 issue

of the SIAM News—the flagship bi-monthly news journal

of the research mathematics community—was that I—Philip Emeagwali—discovered

nine as-yet-unknown partial differential equations

that weren’t in any calculus textbook. I figured out how to solve those

partial differential equations and solve them across

a new internet that is a new global network of

sixty-five thousand five hundred and thirty-six [65,536]

central processing units, or across as many tiny computers. I am the research computational mathematician

that discovered the fastest supercomputer speed

that can be harnessed to solve a system of coupled, non-linear,

time-dependent, three-dimensional, and three-phase

partial differential equations of calculus. I discovered how to solve

that initial-boundary value problem that is posed on the blackboard

of the mathematical physicist. I figured out how to translate

the partial differential equations of calculus

that I invented into partial difference equations

of algebra that I coded as a set of floating-point

arithmetical operations that I message-passed

to an ensemble of 64 binary thousand tightly-coupled, identical processors

each solving as many latency-sensitive problems. I figured out how to translate

the Grand Challenge Problem of physics and mathematics

and translate it into an equivalent set of

a million less challenging problems. I figured out how to translate

the Grand Challenge initial-boundary value problem

and do so across different boards. I figured out how to translate

the Grand Challenge Problem and translate it from the blackboard

of the mathematician to the motherboard

of the computer scientist. I figured out how to parallel process

the Grand Challenge problem and solve it across the motherboards

of the supercomputer scientist. From the Fourth of July 1989,

I began communicating my discovery of practical parallel processing

to the public. In 30-seconds, my contributions

to mathematics and physics is this:

The petroleum reservoir simulator that must be used to recover otherwise elusive

crude oil and natural gas provides correct answers

to incorrect equations. My contribution is this:

I figured out how to derive correct answers

to correct equations and how to solve

those Grand Challenge equations on a supercomputer

and solve them across an ensemble of millions

of tiny computers that outline a new internet. Back in the 1980s,

I mathematically diagnosed the critical errors in the MARS Code,

the petroleum reservoir model that was developed by

Exxon Corporation. Some years later, Exxon Corporation

merged with Mobil Corporation, and the merged company was renamed

ExxonMobil Corporation. The MARS code

is a complex petroleum reservoir simulator. The acronym MARS

stands for Multiple Application Reservoir Simulator. Mathematical physicists

at ExxonMobil Corporation and in places like

the Niger-Delta oilfield of the southeastern region of Nigeria

must use the oil and gas flow patterns within a production oilfield. Petroleum geologists

must use that flow pattern to decide

where to drill a water injection well and to decide

how many oil and gas production wells to drill. Petroleum reservoir modelers

use that flow pattern to know in advance

how to maximize the production of crude oil and natural gas

that will be extracted from a group of wells,

and to know in advance how and where

to apply enhanced oil recovery techniques,

or the secondary techniques that must be used

to discover and recover otherwise elusive crude oil

and natural gas. At its calculus core, the MARS code

includes the pressure equation and saturation equation. Both equations are part of the system

of partial differential equations that governs the motions

of the crude oil and natural gas flowing from water injection wells

towards oil and gas production wells. My contribution

to mathematics and physics is this:

I discovered the critical errors that mathematical physicists made

when they were solving the system of

partial differential equations that must be used

to discover and recover crude oil and natural gas. That mathematical discovery

inspired me to invent the nine Philip Emeagwali

partial differential equations of calculus. My contributions to calculus

have rich and fertile consequences for the petroleum industry

and are the reason one in ten parallel supercomputers

are purchased by the industry. My contributions to calculus

were the reason I was the cover story

of top mathematics publications, such as the May 1990 issue

of the SIAM News. The SIAM News

is the flagship publication of the mathematics community. Calculus is a tool that is used to answer

the biggest questions arising in science and engineering,

such as: “How do we recover

otherwise elusive crude oil and natural gas

and recover them from soon-to-be-abandoned oilfields?” Like the quadratic formula

of algebra, each partial differential equation

of calculus must be derived. The partial differential equation

we derived or discovered depends on the fundamental law

of physics, or the processes, or the multiphysics scenarios,

we encoded into that equation. We discovered the predator-prey

ordinary differential equations and used them to describe

how two species interact. We discovered

partial differential equations in mathematical finance. I discovered my nine

partial differential equations of calculus

and I discovered them by not following the instructions

in the calculus textbooks. The discovery is made

by not following instructions. By definition, it’s impossible

to discover parallel processing and do so by only experimenting with only

one processor. On the Fourth of July 1989,

I discovered practical parallel processing

and I did so by experimenting across a new global network of

65,536 commodity processors that I visualized as a new internet. The research mathematician

is searching for something never-before-seen. More often than not,

that thing is a published paper which contains no discovery

and contains no invention that benefits humankind. In academia, a published paper

is rewarded. A mathematical discovery

that benefits humankind is one million times rarer

and is not rewarded in proportion to the effort

required to discover it. For this reason,

the research mathematician in academia only asks questions that are important

to his career. The research mathematician

asks questions that are direct and centered

on abstract mathematics, not questions that are centered

on extreme-scaled parallel processed solutions

of the real world problems arising in mathematical physics. In the second half of the 1970s,

I was a research mathematician amongst research physicists

and research supercomputer scientists. In the first half of the 1980s,

I was a physicist amongst mathematicians

and supercomputer scientists. In the second half of the 1980s,

I came of age as an extreme-scaled parallel processing

supercomputer scientist that was amongst

computational physicists and computational mathematicians. That sixteen-year-long quest

was the reason my experimental discovery

of parallel processing made the news headlines

in various industry publications. Looking back to the 1970s and ‘80s,

I knew there were no easy partial differential equations

waiting for me to invent them. It is rare for a mathematician

to invent a never-before-seen

partial differential equation. It is rarer for that equation

to make the news headlines. In the cover story

of the May 1990 issue of the mathematician’s newspaper,

called the SIAM News, I said that I invented

36 partial derivative terms of calculus. I also said that I invented

36 algebraic terms that corresponded to those

36 partial derivative terms. Those 36 partial derivative terms represented

the temporal and convective inertial forces

that, in part, move crude oil, injected water, and natural gas

and moves them from water injection wells

towards oil and gas production wells. Those thirty-six partial derivative terms

that I invented can be used to correct the critical errors

in the mathematical techniques that were used to discover and recover

otherwise elusive crude oil and natural gas, namely,

the governing system of partial differential equations of calculus. If uncorrected, those thirty-six errors

will replicate themselves across the trillions upon trillions

of algebraic equations that were derived from discretizing

the governing system of partial differential equations

that were at the mathematical core of the petroleum reservoir simulators

that are used to discover and recover crude oil and natural gas. My contribution to mathematics

was to install those patches of 36 partial derivative terms

and to add them to the pre-existing 45 partial derivative terms. Those 36 errors occur

at three levels, or as errors in the partial differential equations

that, in turn, become errors in the system of

partial difference equations that were derived from the discretized partial

differential equations. They also become errors

in the supercomputer algorithms that must be executed across

millions upon millions of processors. The new calculus and new algebra

that I contributed to mathematical knowledge

was extremely difficult to invent. In parallel processed

computational mathematics, ranging from

petroleum reservoir simulation to general circulation modeling

of global warming, the trillions upon trillions

of Xs and Ys of the underlying extreme-scale algebra

had their origin in the partial differential equations

of calculus that, in turn, originated from

and encoded corresponding laws of physics. A mathematical analysis

alone is akin to substituting thoughts and prayers

for experiments across millions upon millions of processors. On the Fourth of July 1989

in Los Alamos, New Mexico, United States,

and fifteen years after I began supercomputing in Corvallis, Oregon, United States,

I experimentally discovered that the toughest real world problems

arising in computational physics could be solved across

a new supercomputer that is configured as 65,536 processors

that tightly-encircled a globe and encircled that globe

as a new internet and encircled that globe in the manner

the internet encircles a bigger globe, namely,

planet Earth. Parallel supercomputing is,

in and of itself, almost a branch of mathematical physics, now called extreme-scale computational

physics. Without mathematics, computer science becomes

computer faith. I had to be a research mathematician

to be able to invent the new partial differential equations

and the corresponding partial difference algorithms

that I discovered. My contribution to mathematics

was to discover how to execute them across

a new internet. There were two things

that I did with my data. First, I copied them

from one processor to another processor and I copied them via email messages. Second, I computed with them

at the slow speed of 47,303 calculations per second

per processor and I did so to reach the

aggregated speed that was, for the first time, faster than

the speed of any vector processing supercomputer. Put differently, my contribution
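The aggregate speed behind that claim is simple multiplication: the per-processor rate times the 65,536-member ensemble. A quick check, using the figures quoted above:

```python
per_processor = 47_303      # calculations per second on each processor
processors = 2**16          # 65,536 tightly-coupled, shared-nothing processors

# The ensemble's aggregate speed is the per-processor rate times the ensemble size:
aggregate = per_processor * processors
assert aggregate == 3_100_049_408   # about 3.1 billion calculations per second
```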

to extreme-scale computational mathematics

did not reside on the processor that was not a member

of an ensemble of processors. My contribution to mathematics

resides on the processor that is a member of an ensemble

of processors and also resides

on the entire ensemble itself. Yet, my parallel processing experiment

had to wait until the 1980s when 65,536 processors

became available for me to experiment with. I say that a petroleum reservoir model that

runs on three, instead of on four, forces

is akin to driving your car on three wheels

and with the fourth tire deflated. The lesson that I learned is that

you must be a polymath, not merely a mathematician,

to solve the multi-disciplinary Grand Challenge Problem

that is beyond the frontiers of arithmetic, algebra, and calculus. The reason I could move back and forth

from the blackboard to the storyboard

is that I am a research mathematician and a research physicist. I knew the four forces

that defined the Second Law of Motion of physics

when applied to oilfields and knew that law,

forward and backward, and knew how to encode that law

into a system of nine coupled, non-linear, time-dependent,

and three-dimensional partial differential equations

of calculus that governs the three-phase flows

of crude oil, injected water, and natural gas

that are flowing across an oilfield that is a mile deep

and that is the size of a town. To solve the

Philip Emeagwali Equations that are my contributions

to mathematics and do so across a new internet

that is a new global network of 64 binary thousand processors

demanded that I discretize the problem domain

of the initial-boundary value problem. To discretize the problem,

I approximated continuous space with discretized space, or a finite grid. My new system of

partial difference equations of algebra

are the discrete versions of my new system of

partial differential equations of calculus that I invented. As a research mathematician
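As a deliberately small illustration of that discretization step, here is how one scalar PDE, the one-dimensional heat equation, becomes a partial difference equation on a finite grid. This is a generic textbook scheme chosen for brevity, not the nine-equation reservoir system itself:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step for u_t = alpha * u_xx.

    The second derivative u_xx is approximated by the difference quotient
    (u[i-1] - 2*u[i] + u[i+1]) / dx**2, turning the differential equation
    into a partial difference equation on a finite grid.
    """
    un = u[:]                          # previous time level
    for i in range(1, len(u) - 1):     # interior grid points only
        u[i] = un[i] + alpha * dt / dx**2 * (un[i-1] - 2*un[i] + un[i+1])
    return u

# A hot spot in the middle of a cold rod diffuses outward.
u = [0.0] * 11
u[5] = 1.0
for _ in range(10):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
assert u[5] < 1.0 and u[4] > 0.0       # heat has spread to its neighbours
```

In a parallel run, each processor would hold one slice of the grid and exchange only the endpoint values of its slice each time step.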

that is also a research physicist and that is also

a research supercomputer scientist, my interdisciplinary knowledge

was the necessary tool that gave me the intellectual maturity

that I needed to correct the century-old critical errors

that I found in calculus textbooks that were written

for the petroleum industry. Those errors in calculus

found their way from the classroom to the petroleum reservoir simulator

used by ExxonMobil Corporation. I should mention that

when I discovered that new calculus, or the Philip Emeagwali Equations,

I had to create new algorithms that led me to new algebra

that, also, codified the Second Law of Motion of physics. Inventing an equation

is like making your words a part of the holy scripture. The Philip Emeagwali Formula

was not for the blackboard alone. Nor was it for the motherboard alone. The Philip Emeagwali Formula

was invented for parallel processing across my sixty-five thousand

five hundred and thirty-six [65,536] tiny computers, or as many processors,

that encircled a globe in the way the Internet

encircles planet Earth. The Philip Emeagwali Formula

made the news headlines in 1989 and was highlighted

in the June 20, 1990 issue of The Wall Street Journal. Eleven years later,

that Philip Emeagwali Formula was reconfirmed

by then U.S. President Bill Clinton and reconfirmed

in his presidential speech of August 26, 2000. The parallel supercomputer

is a disruptive technology that gives tech companies

some competitive advantage in their drive for market leadership. The roots of the story

of how the fastest supercomputer was invented

began several millennia ago, and began when our ancestors

had no computing aid. For millennia, our ancestors

used their fingers and toes as their computing aids

and had no mathematical symbols scribbled on their cave walls. For the last one hundred years,

the word “computer” has been qualified as human computer, analog computer, electronic computer,

digital computer, distributed computer, parallel computer, and supercomputer. A change in how we look at the computer was

accompanied by renaming the computer. The paradigm shift in supercomputing manifested

itself as a change in the name of the technology,

such as changing from sequential processing

that began with computing aids, such as the abacus

that was invented 3,000 years ago, to the parallel supercomputer

that became the world’s fastest computer when I discovered it

on the Fourth of July 1989. Over the centuries,

we changed the ways we counted. We moved from

the Table of Logarithms to a mechanical calculator

to automatic computers that used vacuum tubes. And then our computing paradigm shifted to

transistors embedded in integrated circuits. On the Fourth of July 1989,

I figured out how to record an increase

in computing speeds and do so across a new internet

that is a new global network of 64 binary thousand

tightly-coupled processors that were simultaneously solving

the Grand Challenge Problem that I chopped up

into 64 binary thousand problems. That invention,

called parallel processing, triggered a paradigm shift

in how computers are designed and defined. That invention changed the way

we look at the computer. The new computer

changed from computing only one thing at a time

to computing many things at once. In 1989, I was in the news because

I figured out how the new computer can solve in one day

a grand challenge problem that the old computer

needed 180 years, or 65,536 days, to solve. It’s impossible to fully describe
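The 180-year figure is the 65,536-fold speedup expressed as time-to-solution, and the rounding can be checked directly:

```python
days = 65_536                 # one day per subproblem, run one after another
years = days / 365.25         # convert days to years
print(round(years, 1))        # 179.4, i.e. roughly 180 years
```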

how I felt the moment I experimentally discovered

parallel processing. At a visceral and intellectual level,

I felt like I was a part of human progress

that was bigger than myself. My discovery

of practical parallel processing felt like I caught a fish

that was bigger than myself. My discovery of parallel processing

was computing’s equivalent of reaching the top of Mount Everest

and being the first person to reach that summit. My invention

is the subject of school reports because it is a contribution

to the development of the computer. That invention

redefined the word “computer.” In the new definition

for the twenty-first century, the computer is a machine

that is powered by an ensemble of up to millions upon millions of processors,

with each processor akin to a tiny computer

that shared nothing. I believe that our children’s children

could parallel process across their Internet

and do so to upgrade their 22nd century’s Internet

to that century’s supercomputer that should be

a planetary-sized supercomputer. I invented a new internet

that I theorized as the granite core of a new supercomputer. In 1989,

I was in the news headlines because I figured out how to reduce

180 years of time-to-solution on one computer

that was powered by only one processor to only one day of time-to-solution

on a supercomputer that was powered by 64 binary thousand processors. My contribution to geology, mathematical

physics, and supercomputing

is this: I figured out how to compute faster

and do so to discover and recover otherwise elusive crude oil

and natural gas. Back in the 1980s,

practical parallel processing was an uncharted territory

of human knowledge and a new frontier without a map. The marriage of

partial differential equations and massively parallel processing

was pretty abstract to grasp but amazingly powerful. In weather forecasting,

solving the difficult-to-calculate primitive equations of meteorology

tells the weather forecaster tomorrow’s forecast. Back in the 1970s and ‘80s,

to parallel process across an internet

was the most complicated concept and the hardest area

of computational mathematics. If you’re the first person

to parallel process and to solve the toughest math problems, you

will be ranked as the world’s smartest person. Back in the 1980s,

25,000 vector processing supercomputer scientists avoided

this grand challenge problem and did so because

it was ridiculously difficult to solve. The precursor

to the grand challenge problem that I solved on July 4, 1989

was first posed in a science fiction story that was published on February 1, 1922. My contribution to physics was that,

on the Fourth of July 1989, I discovered

how to turn that science fiction, called parallel processing,

which the then 66-year-old Albert Einstein presumably read about in the January 11, 1946 issue

of the New York Times, and how to turn that science fiction

into the non-fiction that is the vital technology

that makes the supercomputer super. That grand challenge problem

that was at the crossroad where mathematics, physics,

and supercomputing met remained unsolved

for sixty-seven years, from 1922 onward. That grand challenge problem

was unsolved until I solved it on the Fourth of July 1989. Parallel processing—or solving several problems at once—upended

the paradigm of sequential processing in which only one problem

is solved at a time. Back in 1989, I was asked:

“How is the new computer different from the old computer?” I answered:

“The old sequential processing computer processed only one problem at a time. The new parallel processing computer

processes a million problems at once.” As a research supercomputer scientist

who was on a decade-and-a-half-long quest for the new

parallel processing computer, my magical resonance

occurred at my Eureka moment, at 8:15 on the morning of

the Fourth of July 1989 in Los Alamos, New Mexico,

United States. That magical resonance occurred

because I discovered that my new global network of

64 binary thousand processors that shared nothing between each other

could be harnessed as one virtual supercomputer

that is a new internet. The lesson that I learned

from my discovery of that new internet was that supercomputer wizardry

is the craft of looking inside that new internet to change its outside

and redefine it as a new computer. To invent the Philip Emeagwali Formula

that enables supercomputers to compute at their fastest

that then U.S. President Bill Clinton described in his White House speech

of August 26, 2000, I visualized myself as a cockroach

that was crawling along sixteen mutually perpendicular directions

and doing so to traverse sixteen times two-raised-to-power sixteen,

or one binary million, bi-directional paths

within my new internet that I also imagined within my imaginary

sixteen-dimensional universe. I invented the Philip Emeagwali Formula

and I did so by visualizing myself as the extreme-scaled

computational physicist that was living

in a sixteen-dimensional universe. I visualized myself as the conductor

of 64 binary thousand processors. I visualized myself as orchestrating

the massive computations that I simultaneously executed

on each of my two-raised-to-power sixteen, or 65,536,

commodity-off-the-shelf processors. That was how I discovered

how to harness the millions of processors

within the world’s fastest supercomputers and how to harness them

to solve the toughest problems arising in algebra, calculus, and physics. My discovery

that occurred on the Fourth of July 1989 was that the fastest supercomputer

in the world must and can massively parallel process

Grand Challenge Problems. That discovery made the news headlines

because I recorded the fastest speed across my new internet,

instead of recording it within a new computer. My new internet

was a new global network of commodity-off-the-shelf processors. Those processors

were identical to each other. Each processor operated

its own operating system. Each processor

had its own dedicated memory that shared nothing. The essence

of my supercomputer discovery was that I achieved a magical resonance

and that I broke Amdahl’s Law Limit that limited

practical parallel processing speed increase

and limited it to a factor of eight. I broke Amdahl’s Law Limit
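Amdahl's Law can be stated in a few lines. This is an illustrative Python sketch (the function name is mine, for illustration, not code from the 1989 experiment):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: the speedup on n processors when a fraction p of
    the work can be parallel processed and the rest must run serially."""
    return 1.0 / ((1.0 - p) + p / n)

# A serial fraction of just 12.5% caps the speedup near a factor of
# eight, no matter how many processors are added.
print(amdahl_speedup(0.875, 65_536))  # about 8

# A perfectly parallel workload scales with the processor count.
print(amdahl_speedup(1.0, 65_536))    # 65536.0
```

The debate of the 1980s was, in effect, over how close the parallel fraction p could be pushed toward one for real grand challenge problems.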

for solving Grand Challenge Problems and I broke that limit

by a 65,536-fold speed increase

that I experimentally recorded, as well as the factor of infinity

that I theorized. Since April 1967, Amdahl’s Law Limit

was perceived as the fundamental limit to the speed increase

that can be recorded across any large ensemble of processors

that was used to tackle the toughest problems

arising in science and engineering, such as executing

a century-long computer model run to foresee otherwise unforeseeable global

warming. In the 1980s, supercomputing wizardry

was to make the impossible-to-compute possible-to-compute

and to do so while solving Grand Challenge Problems

and solving them by sending and receiving

65,536 emails at once. I sent and received each email

to the sixteen-bit long email addresses of my new internet

that was a new global network of two-raised-to-power sixteen processors

that were along one of my sixteen mutually perpendicular directions

in as many dimensions. My contribution to the development
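The sixteen-bit email addresses and sixteen mutually perpendicular directions describe a binary hypercube. A minimal Python sketch of that topology (the helper name is mine, for illustration):

```python
def neighbors(pid: int, dims: int = 16) -> list:
    """In a binary 16-cube, the neighbors of processor `pid` are the
    sixteen addresses that differ from it in exactly one bit -- one
    neighbor per mutually perpendicular direction."""
    return [pid ^ (1 << d) for d in range(dims)]

# Processor 0 is wired to the 16 processors whose address has one bit set.
print(neighbors(0)[:4])   # [1, 2, 4, 8]

# One link per processor per direction gives 16 x 2**16 = 2**20 paths,
# the "one binary million" of the text.
print(16 * 2**16)         # 1048576
```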

of the modern computer is this: I invented the Philip Emeagwali Formula

that then U.S. President Bill Clinton described in his White House speech

of August 26, 2000. I invented my parallel supercomputer formula

to be used to solve real world problems and used to solve them

65,536 times faster and used to solve them

across a global network of 65,536 processors

that were each akin to a tiny computer. My invention of parallel processing

made the news headlines because I invented the technology

and I did so by sending and receiving emails

and delivering those emails one binary million times faster

and delivering those emails across as many email wires. The parallel supercomputer

was theorized as far back as February 1, 1922. But the technology was only theorized

as a science fiction. For the sixty-seven years

onward of 1922, parallel processing was debated

and ridiculed as a beautiful theory that lacked experimental confirmation. Practical parallel processing remained

in the realm of science fiction until my experiment of July 4, 1989

that made the news headlines upgraded the theorized supercomputer

to a non-fiction. I was in the news headlines because

I brought that figment of the imagination

—called parallel processing— and brought the technology

from dream to reality. That parallel processing controversy

was highlighted in an article in the June 14, 1976 issue

of Computerworld magazine. That article scorned parallel processing

and mocked the then unproven technology

as a huge waste of everybody’s time. The parallel supercomputer

is an invention that makes the world a more knowledgeable place

and a better place for human beings and for all beings. The parallel supercomputer

made me a benchmark in the history of the development

of the computer. Since the first programmable supercomputer

was invented in 1946, each supercomputer manufactured

was faithful to its primary mission, namely, to solve

the most extreme-scale problems arising in computational physics

and to increase the productivity in industries that use supercomputers,

and to reduce the time-to-solution of grand challenge climate models

and to reduce the time-to-market of the crude oil and natural gas

that were buried one mile deep in the Niger Delta oilfields

of southeastern Nigeria. As a research mathematician

I thought in infinite dimensions. I thought in sixteen and higher mathematical

dimensions and I did so to geometrically visualize

the hypersurface of a hypersphere. In contrast, the non-mathematician

can only see the two-dimensional surface

of a three-dimensional sphere. Back in the 1980s

and in Los Alamos, New Mexico, United States,

and as the first massively parallel supercomputer scientist,

I had to mathematically see the fifteen-dimensional hypersurface

that had my two-raised-to-power sixteen processors that tightly-encircled a globe. I visualized

those commodity-off-the-shelf processors as evenly distributed

across that hypersurface. The wizardry

of that first supercomputer scientist resides in theorizing

a never-before-seen internet that is a new

global network of processors and in visualizing

how that new internet can be super-computerized. That first supercomputer wizard discovered

that new internet as a never-before-seen

supercomputing machinery that seamlessly and cohesively

communicates as a unit and computes at the fastest

parallel processed speed possible. Back in the 1970s,

parallel processing was ridiculed as a beautiful theory that lacked an experimental

confirmation. I was mocked by vector processing

supercomputer scientists who believed that I was attempting

to make the impossible-to-compute possible-to-compute. The main argument that was used

to attack parallel processing was this: If a global network of

65,536 processors that shared nothing

was used to solve a grand challenge problem

that was chopped up into 65,536 smaller problems

then the computer spaghetti code for solving each problem

as well as the primitive emails for communicating each computer code

would fall apart like loosely fastened bolts on an airplane. The skeptics of parallel processing argued

that those loose bolts could not be detected

until the airplane flew faster than the speed of sound. In supercomputing,

the equivalent of the speed of sound is the maximum speed

of the fastest vector processing supercomputer ever built. On the Fourth of July 1989,

in Los Alamos, New Mexico, United States,

I became the first person to break that supercomputer speed record. For that contribution,

the name Philip Emeagwali became a benchmark

in the history of the development of the modern computer. I am often asked to describe

how I want to be remembered. I want to be remembered

for my contributions to science. I did extensive video shoots because

I want posterity to know what I sound and look like. Two thousand three hundred [2,300] years ago,

Euclid, the father of geometry, lived in Africa

and in a predominantly black city. There is no record that Euclid

once travelled outside Africa. Yet, it is assumed that Euclid

is white and of Greek ancestry which is as odd as assuming that

a historical figure in ancient Rome, such as Julius Caesar,

is black and African. My photos and videos will show posterity

that Philip Emeagwali is black and born in sub-Saharan Africa. What if the Igbo-born slave

Olaudah Equiano, who fought against slavery,

was white? Would Olaudah Equiano

have entered into Nigerian school textbooks? What if William Wilberforce

was a black African? Would William Wilberforce

have been deleted from the Nigerian school textbooks? My discovery

of practical parallel processing has been absorbed

into the general knowledge of the supercomputer. The impact of my contributions

to the development of the computer can be measured by yardsticks

such as the number of school reports on contributions to the development

of the computer that mention Philip Emeagwali. On the gravestone,

you cannot distinguish between an astronomer that discovered a planet in

the solar system and one that discovered

only a rock in his backyard. And by the end of this century,

the one million active research scientists will be forgotten

just as the one million before them were forgotten. The reason is that

only one in a million scientists has an afterlife

as the subject of school reports. Those school reports, in turn,

are what gave the 16th-century Galileo Galilei

and the 17th-century Isaac Newton their immortality. The school reports on Euclid,

the father of geometry that lived 2,300 years ago

in Africa, are more durable than

a bronze monument of Euclid. Immortality is maintained

on the lips of school children. The spirit of the inventor

will forever be embodied within her invention. The inventor and her invention

are forever intertwined. I am in school reports

and I believe that I will be in school reports

for as long as my contributions to the development of the computer

and the Internet remain relevant. For me, Philip Emeagwali,

my discovery of practical parallel processing,

which occurred on the Fourth of July 1989 and underpins

every supercomputer, has kept

and will continue to keep my name in school reports. That contribution will continue

to keep my name in circulation around the Internet. The parallel between

my supercomputer and an internet is this:

My supercomputer encircled a globe that has a diameter

of eight thousand (8,000) inches. The internet encircled planet Earth

that is a globe that has a diameter

of eight thousand (8,000) miles. Both my supercomputer and an internet

are global networks of processors. The difference is that my supercomputer

that is an internet was constructed systematically

while the internet grew incrementally and organically

and grew at different times and places. For this reason—namely, the lack of uniformity

and regularity—the internet, as we know it today,

cannot serve as the hoped-for planetary-sized supercomputer

that could be harnessed to find answers to the biggest questions

facing humanity. If such a planetary-sized supercomputer

can be constructed by our descendants, they could harness it

to solve their grand challenge initial-boundary value problems,

such as those governed by the primitive equations of meteorology

and other geophysical fluid dynamical problems

arising in their extreme-scale computational physics. Please allow me to describe

the Eureka moment in which I discovered that practical parallel processing

would bring into existence a new supercomputer

that would replace the old vector processing supercomputer. That was the moment that I understood

my constructive reduction to practice of the massively parallel supercomputer

to be the vital technology that must underpin every supercomputer that

will be manufactured in the future. It was 8:15 in the morning

of the Fourth of July 1989, and I was computing across a new internet

that was a new global network of 64 binary thousand processors. Each processor was akin

to a tiny computer. I was speechless because

I had recorded a previously unrecorded supercomputer speed

of 3.1 billion calculations per second. I was shocked and I stared

in awed silence and disbelief. “3.1 billion is impossible,”

I kept saying to myself. My recording of that previously unrecorded

supercomputer speed of 3.1 billion calculations per second implied

that a general circulation model used to foresee otherwise unforeseeable climate

changes that formerly took 180 years to run at computer speeds of

forty-seven thousand three hundred and three (47,303)

calculations per second per central processing unit

could now be computed in only one day across a new internet

that is a new global network of 65,536 central processing units. On the Fourth of July 1989,

no supercomputer scientist believed that I could parallel process

3.1 billion calculations per second and parallel process

a grand challenge problem and do so across

the slowest 64 binary thousand processors in the world. Shortly after my Eureka Moment,

it made the news headlines that an African supercomputer genius

in the United States had discovered how to solve

grand challenge initial-boundary value problems

and how to solve them by chopping up each problem

into 65,536 smaller problems. I mapped those smaller problems

in a one-problem-to-one-processor correspondence

and mapped them onto as many processors. My experimental discovery

of the massively parallel supercomputer made the news headlines because

it was magic, wizardry, and science fiction back in 1989. Because practical parallel processing

was then believed to be impossible, every vector processing

supercomputer scientist that I told that I had parallel processed

a grand challenge problem believed that I had made

an embarrassing mistake! For three months, I also wondered

if I had made an embarrassing mistake. In the 1980s,

I massively parallel programmed sixteen ensembles of up to

two-raised-to-power sixteen processors that each tightly-encircled a globe. Each of my ensemble was a new internet

that I visualized as my new global network of

up to 65,536 tightly-coupled processors that shared nothing. By the late 1980s,

I had parallel programmed more processors

than any person that ever lived. For a decade, the reality was that

the potential to execute the fastest recorded

supercomputer calculation and execute it across

the slowest processors was at my fingertips. It took me nearly a decade

—from the early 1980s to the late 1980s—for parallel processing

to sink in and for me to gain the scientific maturity

that I needed to solve real world problems

and solve them across my new internet

that was a new global network of 64 binary thousand processors. My contribution

to the development of the computer is this:

I figured out how a new global network of 65,536 processors

that outlined a new internet can synchronously communicate together

as a virtual supercomputer and simultaneously compute together

to yield a 65,536-fold jump in supercomputing speed. I was in the news in 1989 because

I figured out how to make the impossible-to-compute

possible-to-compute. The news headlines described me as the Nigerian supercomputer genius

in the United States that figured out how to parallel process

the toughest problems arising in calculus, algebra, and physics. My supercomputer wizardry

resided in the never-before-seen manner that I programmed

my two-raised-to-power sixteen processors. The new knowledge that I contributed

to calculus, algebra, and physics is this:

I discovered how to integrate the smaller pieces

of a grand challenge problem and how to do so across a small internet

that is a new global network of 65,536 tightly-coupled

commodity-off-the-shelf processors with each processor

operating its own operating system and with each processor

having its own dedicated memory that shared nothing between each other. I was surprised to see that

my invention of practical parallel processing

meant a lot to many people. My world’s fastest

supercomputer speed struck a chord

in people across Africa. In the 1980s, the words

“supercomputer” and “internet” were not in the vocabulary

of African newspapers. It was then a novelty

to read about a Nigerian supercomputer genius

who was at the farthest frontier of human knowledge. It struck a nerve with them

that I worked alone for sixteen years despite the rejections. I invented practical parallel processing that,

in turn, was a major invention of the 20th century. As a black inventor, I was not allowed

to be the inventor of my invention. My processors,

each akin to a small computer, did not program themselves. I hand coded each computer

with pinpoint precision and wrote its email primitives. I was in the news headlines because

I parallel processed across my new internet

that was outlined by a new global network of

65,536 small computers. Studying physics

is not the most noteworthy contribution to human progress. However, contributing new knowledge

to physics, such as parallel processing, is a noteworthy contribution

to human progress. My contribution to physics

is this: I discovered

how to use the slowest computers in the world

to solve the toughest problems in the world. I discovered how to solve

grand challenge problems and how to solve them

in a one-problem-to-one-processor correspondence

and how to solve them after I had chopped each

grand challenge problem into 65,536 smaller problems. That supercomputer breakthrough
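That chopping is a domain decomposition, which can be sketched as follows (illustrative Python; the grid size shown is hypothetical, not the 1989 problem size):

```python
def decompose(n_points: int, n_procs: int) -> list:
    """Chop n_points grid points into n_procs contiguous subgrids,
    one smaller problem per processor."""
    assert n_points % n_procs == 0, "assumes an even split, for simplicity"
    chunk = n_points // n_procs
    return [range(p * chunk, (p + 1) * chunk) for p in range(n_procs)]

parts = decompose(n_points=2**24, n_procs=65_536)
print(len(parts))      # 65536 smaller problems
print(len(parts[0]))   # 256 grid points per processor
```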

that made the news headlines enabled me to solve in only one day

and across my new internet what formerly would have

taken 65,536 days, or 180 years, to solve on only one computer. I hand coded

my parallel processed solution to the grand challenge problem

of supercomputing and I did so

to deliver the highest performance ever recorded on a supercomputer. At 8:15 in the morning

of the Fourth of July 1989, I was speechless

when I saw the experimental results of my decade-long quest, namely,

the world’s fastest calculation across my new internet

that was my virtual supercomputer. To discover a new equation

is to gaze across the millennia. My contributions

of nine new partial differential equations to modern calculus

and to humanity’s knowledge of mathematical physics

and extreme-scale computational physics were the culmination

of a body of mathematical and scientific contributions

that were made by my mathematical ancestors

and made across the millennia. The oldest recorded contribution

to mathematical knowledge was made

three thousand and seven hundred [3,700] years ago. That contribution

was written on a papyrus by Ahmes. African geometers, such as Euclid

who is the father of geometry, were influenced by African arithmeticians,

such as Ahmes, who is the first arithmetician

that we know by name. Ahmes lived fourteen centuries

before Euclid and lived in the same region,

that is, the Valley of the River Nile in Africa. The introductory geometry

that you studied as a teenager has its mathematical roots

in ancient Africa. Geometry is ancient Africa’s contribution

to mathematics. That mathematical contribution

was historically preserved by Islamic scholars

that studied in North Africa. That contribution

was preserved across the ages and transmitted and built upon

for thousands of years and along the four thousand one hundred

[4,100]-mile-long Valley of the Nile that was the birthplace

of Egyptian civilization. Fast forward two thousand

and three hundred years [2,300] from Euclid. For the record, Euclid

was an African geometer and there is no record

that Euclid ever travelled outside Africa. There is no record that Euclid

is not a black African. Fast forward from Euclid in Africa

to 1989 to another African mathematician

in the United States, Philip Emeagwali. I was the cover story

of top mathematics publications. My discovery stories

were about my contributions of new calculus, new algebra

and new mathematical physics to mathematical knowledge. My contributions to mathematics

began as a theory, or an idea that was not yet known to be true,

and materialized as the world’s fastest computer. Who invented the internet? I theorized a new internet

that was a new global network of commodity processors

that is a virtual supercomputer or that could be used

to build a new supercomputer that encircled the globe

in the way the internet does. Back in the 1970s and ‘80s,

I was mocked and ridiculed and accused of embarking

on grandiose and overreaching supercomputer research. I was mocked for wanting to solve

the largest system of equations of a new algebra

and solve it across a small copy of the internet

that I invented. But on the Fourth of July 1989,

I figured out how to harness that new internet

which was a new global network of 64 binary thousand

commodity processors. I was in the news in 1989 because

I figured out how to use that new internet

and use the technology to solve the toughest problems

arising in extreme-scale algebra and arising from the discretization

of the partial differential equation, which is the most advanced expression

in calculus and the most important equation

in mathematics. Parallel processing
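To make that discretization concrete, here is a minimal Python sketch of a finite-difference update for a one-dimensional diffusion equation (illustrative only; the grand challenge problems were three-dimensional and vastly larger):

```python
def diffusion_step(u, r=0.25):
    """One explicit finite-difference time step of du/dt = d2u/dx2.
    Each interior point needs only its two neighbors, which is why
    the grid can be chopped up across many processors."""
    v = list(u)                      # boundaries u[0] and u[-1] stay fixed
    for i in range(1, len(u) - 1):
        v[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return v

u = [0.0, 0.0, 1.0, 0.0, 0.0]        # an initial spike of heat
print(diffusion_step(u))             # [0.0, 0.25, 0.5, 0.25, 0.0]
```

Discretizing the partial differential equation in this way is what turns calculus into the extreme-scale algebra that the processors actually solve.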

must be discovered theoretically before it could be discovered

experimentally. Nine in ten supercomputer cycles

are consumed while solving the partial differential equation

of calculus. For that reason, to experimentally discover

the parallel supercomputer is to de facto

solve an initial-boundary value problem arising in geophysical fluid dynamics

and to solve that grand challenge problem across a new internet

that was a new global network of tightly-coupled processors

that shared nothing and that encircled a globe

in the manner the internet did. That was the technological achievement

that gave rise to the question: “Did Philip Emeagwali

invent the Internet?” My answer is this:

“I am the only father of the internet that invented a new internet.” The entire internet

that encircled the Earth cannot be created at once

or be invented by one person. I theorized my invention

as a new internet and I did so before I invented it

as a new supercomputer that I used to parallel process

and solve a grand challenge problem that could not be solved

without the massively parallel supercomputer. As a lone wolf research

supercomputer scientist of the 1970s

in Oregon and the District of Columbia and of the 1980s in Maryland, Wyoming,

and New Mexico, I had to understand

what I was going to do before I did it. It would have been impossible

for me to send and receive emails along my new global network

of 1,048,576 email wires and send to and receive from

65,536 processors. Each processor

was akin to a small computer. It would have been impossible

for me to send and receive as many computer codes

and do so without my deep understanding of my new supercomputer machinery. Unlike the 25,000 vector processing supercomputer

scientists of the 1980s that misunderstood that machinery

as a computer per se, I understood my new virtual

supercomputer technology to be a new internet

that I visualized as a small copy of the Internet. That technological vision

of a virtual supercomputer that is a never-before-seen internet

was uniquely mine. That contribution

is the reason I am often referred to as one of the fathers of the Internet. I conceptualized a new internet

as a virtual supercomputer. But, more importantly,

it made the news headlines in 1989 that a Nigerian supercomputer genius

in the United States had figured out

how to harness that new internet and how to invent

that computing machinery as the world’s fastest supercomputer. On my Eureka moment of

8:15 in the morning of the Fourth of July 1989,

I felt like I was struck by a bolt of lightning. That day, I became the first person

to enter into a new territory of human knowledge

called practical parallel processing. A common misunderstanding

is that a scientific discovery is teachable

and that a technological invention is learnable. To discover is to know something

that was previously unknown. For that reason, you cannot teach

what is yet to be discovered or what you don’t know. Nor can you learn something

that had never been seen before. The first person to do something

did not learn that thing from the second person

to do that thing. I did not learn

how to parallel process across processors. I invented

the supercomputer that parallel processes across processors

and simultaneously processes a million things at once. When you’re the pioneer

of the new parallel supercomputer that can do a million things at once,

there is no parallel supercomputer scientist

to learn the then non-existent technology from. I am the first parallel

supercomputer scientist. In the 1980s,

I was the only full time programmer of the most massively parallel processing

supercomputer ever built. That is the reason

that to this day I am the only person

that published a full-breadth lecture series on his contributions to the development

of the modern supercomputer that parallel processes across processors. I am the first parallel

supercomputer scientist. I did not learn

how to parallel process. I invented the parallel supercomputer

and I did so by being the first person

to figure out that the parallel supercomputer

is a million times faster than the vector processor

that is not a member of an ensemble of vector processors. As an inventor,

my dilemma was akin to that of the first person

that flew an airplane. Nobody taught that first pilot

how to fly the first airplane. The first pilot

did not have a license to fly. As the first parallel

supercomputer scientist, I had to have a deep understanding

of my never-before-seen supercomputer. I had to understand my supercomputer,

both forward and backward. My command of those computers

is the reason I have given impromptu supercomputer lectures

and delivered them without lecture notes. The Grand Challenge Problem

of supercomputing was not a one-banana problem. This scientific problem

was listed by the U.S. government as a Grand Challenge

and it was described as the toughest problem in supercomputing. My grand challenge was to figure out

how I could harness the potential supercomputer power

of the slowest two-raised-to-power sixteen processors

that each had its unique sixteen-bit long email address. That email address

was also its unique binary identification number. Each processor

had its own dedicated memory that shared nothing. Each processor

operated its own operating system. To believe that I solved

the grand challenge problem by serendipity, or luck,

is akin to believing that, 2,300 years ago, 65,536 monkeys,

each on a typewriter, bashed out Euclid’s “The Elements,”

which for over two millennia remained the all-time best-selling

mathematics textbook. At first and in the 1970s,

I visualized the grand challenge problem as 64 binary thousand pieces

of a randomly scrambled puzzle. Each piece

of that supercomputing puzzle had its unique sixteen-bit long

binary identification number, or a unique string of sixteen zeroes

and ones, that was scribbled on it. In 1989, it made the news headlines

that an African supercomputer wizard in the United States

had figured out how to put that puzzle together. I am that African supercomputer scientist

that was in the news back in 1989. In the 1980s, my grand challenge

was to put those 64 binary thousand pieces of the parallel processing puzzle together. I figured out

how to put those 65,536 pieces of the parallel processing puzzle together

and how to do so in sixteen-dimensional hyperspace,

and how to do so along sixteen mutually perpendicular directions. The modern supercomputer

is powered by about one million processors. Back in the 1980s,

I was the sole full time programmer of the most massively parallel supercomputer

ever built. The reason I was the lone wolf

was that I was the only person that understood the importance

of the parallel supercomputer. That was the reason

supercomputer scientists that won the top prize

in supercomputing won it as members of a team of up to

fifty (50) supercomputer scientists that were supported

with a billion-dollar supercomputer. I was the only person

that won that top supercomputing prize alone and won it as an outsider. The 25,000 vector processing supercomputer

scientists of the 1980s abandoned parallel processing

and did so because they did not believe that

parallel processing should or could power a supercomputer. Who is the father of supercomputing? The father of supercomputing

should at least believe in parallel processing

that is, after all, the vital technology that now underpins

every supercomputer. I am called the father

of the parallel supercomputer because every supercomputer

parallel processes and I am the only father of supercomputing

that invented practical parallel processing. I had to be supremely confident

and know who I am—namely, a research physicist

that was at the frontier of knowledge of extreme-scale computational physics

and also at the frontier of knowledge of the then never-before-seen

massively parallel supercomputer. I was the supercomputer scientist

as well as the internet scientist that broadened his technology-agnostic invention

and did so to make his contributions to the development of the computer

and the internet and to make them

remain as timeless and as evergreen as possible. Back in the 1980s,

I was the lone black face that attended 500 weekly

research seminars. Each seminar speaker

was a research mathematician or a research physicist

or a research computer scientist. Each seminar speaker

was visiting from Europe or Canada or somewhere else

in the United States. For me to religiously attend

and understand those multidisciplinary seminar topics

demanded that I be a polymath at home in extreme-scale algebra,

partial differential equations of calculus, and the as-yet-to-be-invented

massively parallel supercomputer. If I wasn’t at the frontier of knowledge

of those sciences I would have discontinued attending

those scientific research seminars and I would not have been

the cover story of dozens of scientific publications. Prior to my discovery

of how to parallel process across processors that shared nothing between each other,

some research vector processing supercomputer scientists

had a one-to-one conversation with me. They were impressed

with my parallel supercomputer discovery-in-progress. From the 1970s through the eighties,

they were impressed enough to describe me as an up-and-coming supercomputer

scientist to be watched. That was the reason

six American institutions courted me and supported me

with scholarships and fellowships and did so for sixteen continuous years

onward of a scholarship letter that was dated September 10, 1973. After those sixteen years

of study and research in the United States,

my confidence did not come from my winning the top prize

in supercomputing. I won that prize in 1989. My confidence in my intellectual ability

to work alone and to solve

the Grand Challenge Problem of supercomputing

arose because I programmed supercomputers

nearly every day of those sixteen years. I programmed

two-raised-to-power sixteen commodity-off-the-shelf processors

that encircled the globe in the way

the internet does. I message passed, or emailed, across

those 65,536 processors and across sixteen times

two-raised-to-power sixteen email wires. I programmed supercomputers

for sixteen years. On June 20, 1974, in Corvallis, Oregon, United

States, I was programming

the one-time world’s fastest supercomputer that was rated at

one million instructions per second. On July 4, 1989, in Los Alamos,

New Mexico, United States, I discovered the answer

to the grand challenge question of supercomputing. That grand challenge question

was clear cut, namely, “How can I reduce

65,536 days, or 180 years, of time-to-solution

on only one processor that is not a member

of an ensemble of processors to only one day of time-to-solution

across a new internet that is a new global network of

65,536 processors?” Put differently,

the grand challenge question was: how can I compress

180 computer-years into one supercomputer-day? In 1989,

I was in the news headlines because I provided the first clear cut answer

to that clear cut question. I was in the news headlines because

I articulated my discovery of the parallel supercomputer

as a new internet that I visualized

as a small copy of the internet. I articulated that new supercomputer

with a clarity that lingered in the memory,

and I did so when other supercomputer scientists

were giving overly nuanced and obfuscated lectures. Research computer scientists

were committing the cardinal sin of publishing abstract papers

that did not explain their contributions to the development of the supercomputer

and their contributions to the ever-growing body of knowledge

of modern computer science. In scientific research,

the search is for new knowledge and not for a journal paper. Writing a scientific research paper

is not the finish line. But for many academics,

merely publishing the paper is the finish line. What is Philip Emeagwali known for? My discovery

of practical parallel processing changed the way people perceived me. Parallel processing

changed the way we think. Parallel processing

is an entirely new approach to computer science

and one that ushered in a new era in supercomputing. Parallel processing

was the technology that was mocked and ridiculed

as a huge waste of everybody’s time. Parallel processing

is now the vital technology that underpins

the world’s fastest computers and that extends the boundaries

of human knowledge. For me, Philip Emeagwali, my discovery

of the parallel supercomputer was my stepping stone

that enabled me to step from the serial and vector processing supercomputers

of the 1980s and earlier to the parallel supercomputers

of today. Those serial processing supercomputers

became obsolete because they could not be used to solve

the toughest problems arising in abstract calculus,

large-scale algebra, and extreme-scale, high resolution

computational physics. The supercomputers of the 1980s

could not accurately solve many real-world problems because they computed only

in a step-by-step serial or vector processing fashion,

instead of supercomputing in the radically different

parallel processing method: dividing the grand challenge problem

into one million smaller problems, mapping those smaller problems

onto an ensemble of one million

commodity-off-the-shelf processors in a one-problem to one-processor correspondence,

with each processor running its own operating system and sharing nothing,

and solving them all at once, or in parallel. Back in the 1970s and ‘80s,

my massively parallel processing supercomputer research

focused on making discoveries rather than on writing about theories. A theory is an idea

that has not been positively shown to be true. Each year, millions of theoretical papers

are published within the field of computer science

with none contributing to the development of the computer. A vacuous theoretical article

that was never read and that described no discovery

is incentivized over a ground breaking discovery. For that reason, the academic scientist

lacks public stature. As a result of that publish-or-perish syndrome,

the scientific paper became a distracting background noise. In 1989, I was in the news because

I discovered that parallel processing will become the vital technology

that will make it possible for the supercomputer of today

to be super. I discovered that parallel processing

is the irreducible essence of the modern supercomputer. Parallel processing

is the most important technology within the supercomputer. Parallel processing

redefined the computer and enabled us to see the supercomputer

in a new light. Massively parallel processing

provides extreme-scale computational scientists

with the incredible supercomputing power that makes it possible

to solve grand challenge problems that would otherwise be impossible

to solve. With a market

of twenty billion dollars a year, the parallel supercomputer

is used to tackle the world’s biggest challenges, such as

answering the biggest questions arising in science, engineering, medicine,

and business. From mathematics to physics

to computer science, the supercomputing paradigm

has shifted from the single-processor supercomputer

to the parallel supercomputer. My contribution to this paradigm shift was

that I was the first person to figure out

the immensely complicated procedure of dividing a real-world

grand challenge problem into 65,536 smaller problems,

distributing those two-raised-to-power sixteen problems,

and mapping them, in a one-problem to one-processor correspondence

that was nearest-neighbor preserving,

onto as many commodity-off-the-shelf processors,

which outlined and defined a new internet that I invented. The grand challenges

are the twenty biggest questions in computer science. Today’s grand challenge questions

are more complex than those of yesterday. The discovery of

practical parallel processing changed the way geologists search for and

recover crude oil and natural gas, and changed it from simulating

on only one processor that is not a member

of an ensemble of processors to simulating across

up to ten million processors that were tightly-coupled to each other. Similarly, parallel processing

changed the way the climate modeler predicts global warming;

and changed the way the computational mathematician

and the supercomputer scientist compute the answers

to their biggest questions. Parallel processing changed the way

we understand computer science and changed the way computer scientists understand

the supercomputer. Parallel processing changed the way

we find crude oil and natural gas. In the old sequential processing way,

the petroleum reservoir that is one mile deep

and the size of a town is crudely simulated on only

one isolated processor. In my new parallel processing way,

that I discovered on the Fourth of July 1989, the petroleum reservoir

is accurately simulated across millions upon millions of processors

that were tightly-coupled to each other. For the research scientist who asks what

if, parallel processing extends

the boundaries of what can be discovered. For the research engineer who asks what’s

next, parallel processing extends

the boundaries of what can be solved. For the research mathematician

who asks what’s next, parallel processing extends

the boundaries of what can be achieved. Thank you. I’m Philip Emeagwali. [Wild applause and cheering for 17 seconds] Insightful and brilliant lecture